Science.gov

Sample records for accuracy selectivity robustness

  1. Robust methods for assessing the accuracy of linear interpolated DEM

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Shi, Wenzhong; Liu, Eryong

    2015-02-01

    Methods for assessing the accuracy of a digital elevation model (DEM), with emphasis on robust methods, are studied in this paper. Based on the squared DEM residual population generated by the bi-linear interpolation method, three average-error statistics, (a) the mean, (b) the median, and (c) an M-estimator, are investigated for measuring interpolated DEM accuracy. Confidence intervals are also constructed for each average-error statistic to further evaluate DEM quality. The first method relies on Student's t-distribution, while the second and third are derived from robust statistical theory. These robust methods can counteract the effects of outliers, and even of skewed residual distributions, in DEM accuracy assessment. Monte Carlo simulation studies examine the asymptotic convergence behavior of the confidence intervals constructed by the three methods as the sample size increases. The robust methods are shown to produce more reliable DEM accuracy assessments than the classical t-distribution-based method. They are therefore strongly recommended for assessing DEM accuracy, particularly where the DEM residual population is evidently non-normal or heavily contaminated with outliers.
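    The contrast between the classical mean-based statistic and a robust alternative can be sketched numerically. The following is a minimal illustration, not the paper's method: it uses synthetic squared residuals and a percentile bootstrap in place of the paper's analytic confidence intervals, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical squared DEM residuals: mostly small errors plus gross outliers.
residuals_sq = np.concatenate([
    rng.chisquare(df=1, size=500) * 0.04,    # well-behaved interpolation errors
    rng.uniform(5.0, 9.0, size=10),          # contaminating outliers
])

def bootstrap_ci(sample, stat, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap confidence interval for an average-error statistic."""
    boots = np.array([stat(rng.choice(sample, size=sample.size, replace=True))
                      for _ in range(n_boot)])
    return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

mean_ci = bootstrap_ci(residuals_sq, np.mean)      # classical, outlier-sensitive
median_ci = bootstrap_ci(residuals_sq, np.median)  # robust
print("mean CI:  ", mean_ci)
print("median CI:", median_ci)
```

    The median interval stays near the bulk of the residual population, while the mean interval is dragged upward by the ten contaminating values.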

  2. Robust Decision-making Applied to Model Selection

    SciTech Connect

    Hemez, Francois M.

    2012-08-06

    The scientific and engineering communities rely more and more on numerical models to simulate increasingly complex phenomena. Selecting, from among a family of models, one that meets the simulation requirements presents a challenge to modern-day analysts. To address this concern, a framework anchored in info-gap decision theory is adopted. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. It is demonstrated on two structural engineering applications by asking the following question: which of several numerical models best approximates the behavior of a structure when the parameters that define each model are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.

  3. On the Accuracy of Genomic Selection

    PubMed Central

    Rabier, Charles-Elie; Barre, Philippe; Asp, Torben; Charmet, Gilles; Mangin, Brigitte

    2016-01-01

    Genomic selection is focused on the prediction of breeding values of selection candidates by means of a high density of markers. It relies on the assumption that all quantitative trait loci (QTLs) tend to be in strong linkage disequilibrium (LD) with at least one marker. In this context, we present theoretical results regarding the accuracy of genomic selection, i.e., the correlation between predicted and true breeding values. Typically, for individuals (so-called test individuals), breeding values are predicted by means of markers, using marker effects estimated by fitting a ridge regression model to a set of training individuals. We present a theoretical expression for the accuracy; this expression is suitable for any configuration of LD between QTLs and markers. We also introduce a new accuracy proxy that is free of the QTL parameters and easily computable; it outperforms the proxies suggested in the literature, in particular those based on an estimated effective number of independent loci (Me). The theoretical formula, the new proxy, and existing proxies were compared on simulated data, and the results point to the validity of our approach. The calculations were also illustrated on a new perennial ryegrass set (367 individuals) genotyped for 24,957 single nucleotide polymorphisms (SNPs). In this case, most of the proxies studied yielded similar results because of the lack of markers for coverage of the entire genome (2.7 Gb). PMID:27322178

  4. Robustness and Accuracy in Sea Urchin Developmental Gene Regulatory Networks

    PubMed Central

    Ben-Tabou de-Leon, Smadar

    2016-01-01

    Developmental gene regulatory networks robustly control the timely activation of regulatory and differentiation genes. The structure of these networks underlies their capacity to buffer intrinsic and extrinsic noise and maintain embryonic morphology. Here I illustrate how the use of specific architectures by the sea urchin developmental regulatory networks enables the robust control of cell fate decisions. The Wnt-βcatenin signaling pathway patterns the primary embryonic axis, while the BMP signaling pathway patterns the secondary embryonic axis, in the sea urchin embryo and across Bilateria. Interestingly, in both cases in the sea urchin, the signaling pathway that defines the axis directly controls the expression of a set of downstream regulatory genes. I propose that this direct activation of a set of regulatory genes enables a uniform regulatory response and a clear-cut cell fate decision in the endoderm and in the dorsal ectoderm. The specification of the mesodermal pigment cell lineage is activated by Delta signaling that initiates a triple positive feedback loop that locks down the pigment specification state. I propose that the use of compound positive feedback circuitry provides the endodermal cells enough time to turn off mesodermal genes and ensures a correct mesoderm vs. endoderm fate decision. Thus, I argue that understanding the control properties of repeatedly used regulatory architectures illuminates their role in embryogenesis and provides possible explanations for their resistance to evolutionary change. PMID:26913048

  5. Robust alignment of prostate histology slices with quantified accuracy

    NASA Astrophysics Data System (ADS)

    Hughes, Cecilia; Rouviere, Olivier; Mege Lechevallier, Florence; Souchon, Rémi; Prost, Rémy

    2012-02-01

    Prostate cancer is the most common malignancy among men yet no current imaging technique is capable of detecting the tumours with precision. To evaluate each technique, the histology data must be precisely mapped to the imaged data. As it cannot be assumed that the histology slices are cut along the same plane as the imaged data is acquired, the registration is a 3D problem. This requires the prior accurate alignment of the histology slices. We propose a protocol to create in a rapid and standardised manner internal fiducial markers in fresh prostate specimens and an algorithm by which these markers can then be automatically detected and classified enabling the automatic rigid alignment of each slice. The protocol and algorithm were tested on 10 prostate specimens, with 19.2 histology slices on average per specimen. On average 90.9% of the fiducial markers created were visible in the slices, of which 96.1% were automatically correctly detected and classified. The average accuracy of the alignment was 0.19 +/- 0.15 mm at the fiducial markers. The algorithm took 5.46 min on average per specimen. The proposed protocol and algorithm were also tested using simulated images and a beef liver sample. The simulated images showed that the algorithm has no associated residual error and justified the choice of a rigid registration. In the beef liver images, the average accuracy of the alignment was 0.11 +/- 0.09 mm at the fiducial markers and 0.63 +/- 0.47 mm at a validation marker approximately 20 mm from the fiducial markers.
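    The rigid alignment step the authors automate can be sketched with the standard least-squares fit of corresponding marker sets (the Kabsch/Procrustes solution). The marker coordinates below are invented, and this is not the paper's marker detection or classification algorithm:

```python
import numpy as np

def rigid_align_2d(src, dst):
    """Least-squares rigid (rotation + translation) fit of 2-D point sets (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

rng = np.random.default_rng(8)
markers = rng.uniform(0, 40, size=(5, 2))        # hypothetical fiducial markers [mm]
angle = np.deg2rad(12.0)
R_true = np.array([[np.cos(angle), -np.sin(angle)],
                   [np.sin(angle),  np.cos(angle)]])
slice_markers = markers @ R_true.T + np.array([3.0, -1.5])   # rotated + shifted slice

R, t = rigid_align_2d(markers, slice_markers)
aligned = markers @ R.T + t
print(np.abs(aligned - slice_markers).max())     # exact recovery in the noiseless case
```

    With noisy marker localizations the same fit minimizes the residual at the fiducials, which is what the reported sub-millimetre accuracies quantify.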

  6. Robust quantification of orientation selectivity and direction selectivity

    PubMed Central

    Mazurek, Mark; Kager, Marisa; Van Hooser, Stephen D.

    2014-01-01

    Neurons in the visual cortex of all examined mammals exhibit orientation or direction tuning. New imaging techniques are allowing the circuit mechanisms underlying orientation and direction selectivity to be studied with clarity that was not possible a decade ago. However, these new techniques bring new challenges: robust quantitative measurements are needed to evaluate the findings from these studies, which can involve thousands of cells of varying response strength. Here we show that traditional measures of selectivity such as the orientation index (OI) and direction index (DI) are poorly suited for quantitative evaluation of orientation and direction tuning. We explore several alternative methods for quantifying tuning and for addressing a variety of questions that arise in studies on orientation- and direction-tuned cells and cell populations. We provide recommendations for which methods are best suited to which applications and we offer tips for avoiding potential pitfalls in applying these methods. Our goal is to supply a solid quantitative foundation for studies involving orientation and direction tuning. PMID:25147504
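    One robust measure of the kind the authors advocate is 1 minus the circular variance, which pools responses at all tested orientations instead of comparing only the preferred and orthogonal ones. A minimal sketch with synthetic tuning curves (the estimators examined in the paper may differ in detail):

```python
import numpy as np

def orientation_selectivity(angles_deg, responses):
    """1 - circular variance: length of the response-weighted mean vector in
    doubled-angle (orientation) space; 0 = untuned, 1 = perfectly tuned."""
    theta = 2.0 * np.deg2rad(angles_deg)   # orientation has a 180-degree period
    r = np.asarray(responses, dtype=float)
    return np.abs(np.sum(r * np.exp(1j * theta)) / np.sum(r))

angles = np.arange(0, 180, 22.5)
tuned = np.exp(-((angles - 90.0) ** 2) / (2 * 20.0 ** 2))   # sharp 90-degree preference
flat = np.ones_like(angles)                                 # unselective cell

print(orientation_selectivity(angles, tuned))   # high
print(orientation_selectivity(angles, flat))    # ~0
```

    Unlike an OI computed from two response values, this vector average degrades gracefully for weakly responding cells, which matters when imaging thousands of neurons of varying response strength.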

  7. Integrative fitting of absorption line profiles with high accuracy, robustness, and speed

    NASA Astrophysics Data System (ADS)

    Skrotzki, Julian; Habig, Jan Christoph; Ebert, Volker

    2014-08-01

    The principle of the integrative evaluation of absorption line profiles relies on the numeric integration of absorption line signals to retrieve absorber concentrations, e.g., of trace gases. Thus, it is a fast and robust technique. However, previous implementations of the integrative evaluation principle showed shortcomings in terms of accuracy and the lack of a fit quality indicator. This has motivated the development of an advanced integrative (AI) fitting algorithm. The AI fitting algorithm retains the advantages of previous integrative implementations—robustness and speed—and is able to achieve high accuracy by introduction of a novel iterative fitting process. A comparison of the AI fitting algorithm with the widely used Levenberg-Marquardt (LM) fitting algorithm indicates that the AI algorithm has advantages in terms of robustness due to its independence from appropriately chosen start values for the initialization of the fitting process. In addition, the AI fitting algorithm shows speed advantages typically resulting in a factor of three to four shorter computational times on a standard personal computer. The LM algorithm on the other hand retains advantages in terms of a much higher flexibility, as the AI fitting algorithm is restricted to the evaluation of single absorption lines with precomputed line width. Comparing both fitting algorithms for the specific application of in situ laser hygrometry at 1,370 nm using direct tunable diode laser absorption spectroscopy (TDLAS) suggests that the accuracy of the AI algorithm is equivalent to that of the LM algorithm. For example, a signal-to-noise ratio of 80 and better typically yields a deviation of <1 % between both fitting algorithms. The properties of the AI fitting algorithm make it an interesting alternative if robustness and speed are crucial in an application and if the restriction to a single absorption line is possible. These conditions are fulfilled for the 1,370 nm TDLAS hygrometry at the
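    The integrative principle itself, retrieving the absorber amount from the numerically integrated line area without start values, can be sketched as follows. The profile, width, and noise level are invented, and the AI algorithm's iterative refinement is omitted:

```python
import numpy as np

# Hypothetical absorption line: Gaussian profile with precomputed width,
# plus measurement noise.
nu = np.linspace(-2.0, 2.0, 401)                 # wavenumber offset [cm^-1]
width = 0.3                                      # known line width [cm^-1]
true_area = 0.05                                 # proportional to concentration
profile = true_area / (width * np.sqrt(2 * np.pi)) * np.exp(-nu**2 / (2 * width**2))
noisy = profile + np.random.default_rng(2).normal(0.0, 1e-4, nu.size)

# Integrative retrieval: the numerically integrated line area is proportional
# to the absorber concentration, and no fit start values are required.
d_nu = nu[1] - nu[0]
area = np.sum(0.5 * (noisy[1:] + noisy[:-1])) * d_nu   # trapezoidal rule
print(f"retrieved area: {area:.4f} (true: {true_area})")
```

    The robustness advantage over Levenberg-Marquardt fitting comes from exactly this independence from initialization; the price, as the abstract notes, is the restriction to single lines with precomputed width.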

  8. Assessing genomic selection prediction accuracy in a dynamic barley breeding

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection is a method to improve quantitative traits in crops and livestock by estimating breeding values of selection candidates using phenotype and genome-wide marker data sets. Prediction accuracy has been evaluated through simulation and cross-validation, however validation based on prog...

  9. Multiple ping sonar accuracy improvement using robust motion estimation and ping fusion.

    PubMed

    Yu, Lian; Neretti, Nicola; Intrator, Nathan

    2006-04-01

    Noise degrades the accuracy of sonar systems. We demonstrate a practical method for increasing the effective signal-to-noise ratio (SNR) by fusing time delay information from a burst of multiple sonar pings. This approach can be useful when there is no relative motion between the sonar and the target during the burst of sonar pinging. Otherwise, the relative motion degrades the fusion and therefore, has to be addressed before fusion can be used. In this paper, we present a robust motion estimation algorithm which uses information from multiple receivers to estimate the relative motion between pings in the burst. We then compensate for motion, and show that the fusion of information from the burst of motion compensated pings improves both the resilience to noise and sonar accuracy, consequently increasing the operating range of the sonar system.
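    The benefit of fusing a burst of pings can be sketched with a toy one-dimensional time-delay estimator. Here the pings are assumed motion-free, so fusion reduces to averaging; the paper's motion estimation and compensation steps are not reproduced, and all signal parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
true_delay = 300                                   # echo delay in samples

# A short windowed tone burst serves as the transmitted pulse.
n = np.arange(64)
pulse = np.sin(2 * np.pi * 0.2 * n) * np.hanning(64)

def ping(noise_std=0.5):
    """One received ping: the pulse at the true delay, buried in noise."""
    rx = rng.normal(0.0, noise_std, 1024)
    rx[true_delay:true_delay + 64] += pulse
    return rx

def estimate_delay(rx):
    """Matched-filter (cross-correlation) time-delay estimate."""
    return int(np.argmax(np.correlate(rx, pulse, mode="valid")))

single = estimate_delay(ping())
fused = estimate_delay(np.mean([ping() for _ in range(16)], axis=0))
print("single ping:", single, "| fused burst:", fused, "| true:", true_delay)
```

    Averaging the 16 motion-free pings raises the effective SNR of the fused waveform, so the correlation peak stands well clear of the noise floor; any residual motion between pings would smear this peak, which is why motion compensation must precede the fusion.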

  10. High-accuracy and robust localization of large control markers for geometric camera calibration.

    PubMed

    Douxchamps, Damien; Chihara, Kunihiro

    2009-02-01

    Accurate measurement of the position of features in an image is subject to a fundamental compromise: The features must be both small, to limit the effect of nonlinear distortions, and large, to limit the effect of noise and discretization. This constrains both the accuracy and the robustness of image measurements, which play an important role in geometric camera calibration as well as in all subsequent measurements based on that calibration. In this paper, we present a new geometric camera calibration technique that exploits the complete camera model during the localization of control markers, thereby abolishing the marker size compromise. Large markers allow a dense pattern to be used instead of a simple disc, resulting in a significant increase in accuracy and robustness. When highly planar markers are used, geometric camera calibration based on synthetic images leads to true errors of 0.002 pixels, even in the presence of artifacts such as noise, illumination gradients, compression, blurring, and limited dynamic range. The camera parameters are also accurately recovered, even for complex camera models.

  11. Accuracy of GIPSY PPP from version 6.2: a robust method to remove outliers

    NASA Astrophysics Data System (ADS)

    Hayal, Adem G.; Ugur Sanli, D.

    2014-05-01

    In this paper, we assess the accuracy of GIPSY PPP from the latest version, version 6.2. As the research community prepares for real-time PPP, it is worth revising the accuracy of static GPS from the latest version of this well-established research software, the first of its kind. Although the results do not differ significantly from those of the previous version, version 6.1.1, we still observe a slight improvement in the vertical component due to the enhanced second-order ionospheric modeling introduced with the latest version. In this study, however, we turn our attention to outlier detection. Outliers usually occur among the solutions from shorter observation sessions and degrade the quality of the accuracy modeling. In our previous analysis from version 6.1.1, we argued that eliminating outliers with the traditional method was cumbersome, since repeated trials were needed, and subjectivity that could affect the statistical significance of the solutions might have existed among the results (Hayal and Sanli, 2013). Here we overcome this problem using a robust outlier elimination method. The median is perhaps the simplest of the robust outlier detection methods in terms of applicability, and it might be considered the most efficient, having the highest breakdown point. In our analysis, we used a slightly modified version of the median method, as introduced in Tut et al. (2013), and were thus able to remove the suspected outliers in a single run. References: Hayal, A.G., Sanli, D.U., Accuracy of GIPSY PPP from version 6, GNSS Precise Point Positioning Workshop: Reaching Full Potential, Vol. 1, pp. 41-42 (2013). Tut, İ., Sanli, D.U., Erdogan, B., Hekimoglu, S., Efficiency of BERNESE single baseline rapid static positioning solutions with SEARCH strategy, Survey Review, Vol. 45, Issue 331
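    A median-based outlier screen of the kind described can be sketched with the median absolute deviation (MAD), whose 50% breakdown point is what allows a single pass. This is a generic illustration with invented height solutions, not the specific variant of Tut et al. (2013):

```python
import numpy as np

def mad_outliers(x, k=3.5):
    """Flag outliers using the median and the median absolute deviation (MAD).

    Because the median has a 50% breakdown point, the threshold is not
    dragged toward the outliers, so one pass suffices (no repeated trials).
    """
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    scale = 1.4826 * np.median(np.abs(x - med))  # consistent with sigma for normal data
    return np.abs(x - med) > k * scale

# Hypothetical height solutions [m] from short observation sessions.
heights = np.array([112.03, 112.01, 112.05, 112.02, 111.99, 112.04, 114.80, 109.10])
flags = mad_outliers(heights)
print(flags)   # only the two gross solutions are flagged
```

    A mean-and-standard-deviation screen on the same data would have to be iterated, because the two gross errors inflate the standard deviation enough to mask each other.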

  12. A selective review of robust variable selection with applications in bioinformatics

    PubMed Central

    Wu, Cen

    2015-01-01

    Vast amounts of data have been, and are being, generated in bioinformatics studies. In the analysis of such data, the standard modeling approaches can be challenged by heavy-tailed errors and outliers in response variables, contamination in predictors (which may be caused by, for instance, technical problems in microarray gene expression studies), model mis-specification, and other factors. Robust methods are needed to tackle these challenges. When there are a large number of predictors, variable selection can be as important as estimation. As a generic variable selection and regularization tool, penalization has been extensively adopted. In this article, we provide a selective review of robust penalized variable selection approaches especially designed for high-dimensional data from bioinformatics and biomedical studies. We discuss the robust loss functions, penalty functions and computational algorithms. The theoretical properties and implementation are also briefly examined. Application examples of the robust penalization approaches in representative bioinformatics and biomedical studies are also illustrated. PMID:25479793
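    A representative member of this family, the Huber loss combined with an L1 penalty and fitted by proximal gradient descent, can be sketched as follows. The data, penalty level, and solver settings are illustrative only:

```python
import numpy as np

def huber_grad(r, delta=1.0):
    """Gradient of the Huber loss wrt the residuals: linear (bounded) in the tails."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def huber_lasso(X, y, lam=0.1, n_iter=1000, delta=1.0):
    """Proximal gradient (ISTA) for Huber loss + L1 penalty."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2            # 1 / Lipschitz constant
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ huber_grad(X @ beta - y, delta) / n
        beta = beta - step * grad
        beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam, 0.0)  # soft-threshold
    return beta

rng = np.random.default_rng(5)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.standard_t(df=1, size=n)    # heavy-tailed (Cauchy) errors

beta_hat = huber_lasso(X, y)
print(np.round(beta_hat, 2))   # the three true predictors dominate
```

    The bounded gradient of the Huber loss is what keeps the Cauchy-tailed errors from destroying the fit; a plain least-squares lasso on the same data is at the mercy of a single extreme residual.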

  13. Balancing accuracy, robustness, and efficiency in simulations of coupled magma/mantle dynamics

    NASA Astrophysics Data System (ADS)

    Katz, R. F.

    2011-12-01

    Magmatism plays a central role in many Earth-science problems, and is particularly important for the chemical evolution of the mantle. The standard theory for coupled magma/mantle dynamics is fundamentally multi-physical, comprising mass and force balance for two phases, plus conservation of energy and composition in a two-component (minimum) thermochemical system. The tight coupling of these various aspects of the physics makes obtaining numerical solutions a significant challenge. Previous authors have advanced by making drastic simplifications, but these have limited applicability. Here I discuss progress, enabled by advanced numerical software libraries, in obtaining numerical solutions to the full system of governing equations. The goals in developing the code are as usual: accuracy of solutions, robustness of the simulation to non-linearities, and efficiency of code execution. I use the cutting-edge example of magma genesis and migration in a heterogeneous mantle to elucidate these issues. I describe the approximations employed and their consequences, as a means to frame the question of where and how to make improvements. I conclude that the capabilities needed to advance multi-physics simulation are, in part, distinct from those of problems with weaker coupling, or fewer coupled equations. Chief among these distinct requirements is the need to dynamically adjust the solution algorithm to maintain robustness in the face of coupled nonlinearities that would otherwise inhibit convergence. This may mean introducing Picard iteration rather than full coupling, switching between semi-implicit and explicit time-stepping, or adaptively increasing the strength of preconditioners. All of these can be accomplished by the user with, for example, PETSc. Formalising this adaptivity should be a goal for future development of software packages that seek to enable multi-physics simulation.
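    The kind of algorithmic fallback described, e.g. dropping from full coupling to Picard iteration with under-relaxation, can be sketched on a toy nonlinear system. This stands in for, and is far simpler than, the coupled magma/mantle equations:

```python
import numpy as np

def picard_solve(A_of_u, b, u0, relax=0.7, tol=1e-10, max_iter=200):
    """Picard iteration with under-relaxation for the nonlinear system A(u) u = b.

    Each sweep freezes the coefficients at the current iterate and solves a
    linear system -- slower than a Newton step near the solution, but far
    more tolerant of strong nonlinearity and poor initial guesses.
    """
    u = u0.copy()
    for _ in range(max_iter):
        u_lin = np.linalg.solve(A_of_u(u), b)        # linearized solve
        u_next = relax * u_lin + (1.0 - relax) * u   # under-relaxation
        if np.linalg.norm(u_next - u) < tol:
            return u_next
        u = u_next
    raise RuntimeError("Picard iteration did not converge")

# Toy nonlinear problem: (I + diag(u^2)) u = b.
b = np.array([1.0, 2.0, 0.5])
A = lambda u: np.eye(3) + np.diag(u ** 2)
u = picard_solve(A, b, u0=np.zeros(3))
print(u, "residual:", np.linalg.norm(A(u) @ u - b))
```

    The relaxation factor damps the oscillation of the underlying fixed-point map; adaptively adjusting such parameters (or switching schemes entirely) when convergence stalls is the kind of dynamic algorithm control the abstract argues multi-physics frameworks should formalise.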

  14. On accuracy, robustness, and security of bag-of-word search systems

    NASA Astrophysics Data System (ADS)

    Voloshynovskiy, Svyatoslav; Diephuis, Maurits; Kostadinov, Dimche; Farhadzadeh, Farzad; Holotyak, Taras

    2014-02-01

    In this paper, we present a statistical framework for the analysis of the performance of Bag-of-Words (BOW) systems. The paper aims at establishing a better understanding of the impact of different elements of BOW systems such as the robustness of descriptors, accuracy of assignment, descriptor compression and pooling and finally decision making. We also study the impact of geometrical information on the BOW system performance and compare the results with different pooling strategies. The proposed framework can also be of interest for a security and privacy analysis of BOW systems. The experimental results on real images and descriptors confirm our theoretical findings. Notation: We use capital letters X to denote scalar random variables and bold capital letters X to denote vector random variables, with the corresponding small letters x and x denoting their realisations. We write X ~ pX(x), or simply X ~ p(x), to indicate that a random variable X is distributed according to pX(x). N(μ, σ²X) stands for the Gaussian distribution with mean μ and variance σ²X. B(L, Pb) denotes the binomial distribution with sequence length L and probability of success Pb. ||·|| denotes the Euclidean vector norm, Q(·) stands for the Q-function, D(·||·) denotes the divergence, and E{·} denotes the expectation.

  15. Accuracy and Robustness Improvements of Echocardiographic Particle Image Velocimetry for Routine Clinical Cardiac Evaluation

    NASA Astrophysics Data System (ADS)

    Meyers, Brett; Vlachos, Pavlos; Charonko, John; Giarra, Matthew; Goergen, Craig

    2015-11-01

    Echo Particle Image Velocimetry (echoPIV) is a recent development in flow visualization that provides improved spatial resolution with high temporal resolution in cardiac flow measurement. Despite increased interest, only a limited number of published echoPIV studies are clinical, demonstrating that the method is not yet broadly accepted within the medical community. This is because the use of contrast agents is typically reserved for subjects whose initial evaluation produced very low quality recordings. High background noise and low contrast levels therefore characterize most scans, which hinders echoPIV from producing accurate measurements. To achieve clinical acceptance it is necessary to develop processing strategies that improve accuracy and robustness. We hypothesize that a short-time moving window ensemble (MWE) correlation can improve echoPIV flow measurements on clinical scans of low image quality. To explore the potential of the short-time MWE correlation, we evaluated artificial ultrasound images and, subsequently, a clinical cohort of patients with diastolic dysfunction. Qualitative and quantitative comparisons between echoPIV measurements and Color M-mode scans were carried out to assess the improvements delivered by the proposed methodology.
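    The MWE correlation idea, averaging correlation planes over a brief time window so that a common displacement reinforces while noise cancels, can be sketched in one dimension. All signal parameters are invented and this is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(7)
true_shift = 4        # inter-frame particle displacement in samples

def corr_plane(f, g):
    """Circular cross-correlation of two interrogation windows via the FFT."""
    F = np.fft.fft(f - f.mean())
    G = np.fft.fft(g - g.mean())
    return np.real(np.fft.ifft(np.conj(F) * G))

def frame_pair(noise_std=1.5):
    """A speckle pattern and its shifted copy, both heavily corrupted by noise."""
    base = rng.normal(size=64)
    f = base + rng.normal(0.0, noise_std, 64)
    g = np.roll(base, true_shift) + rng.normal(0.0, noise_std, 64)
    return f, g

# Instantaneous correlation vs. a short-time ensemble of 16 correlation planes:
# the displacement peak adds coherently while spurious noise peaks average out.
single = corr_plane(*frame_pair())
ensemble = np.mean([corr_plane(*frame_pair()) for _ in range(16)], axis=0)
print("single-pair estimate:", int(np.argmax(single)))
print("ensemble estimate:   ", int(np.argmax(ensemble)))
```

    The assumption, as in the clinical setting, is that the flow changes slowly over the short window, so the displacement peak is shared across the ensemble while the background noise is not.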

  16. The Signatures of Selection for Translational Accuracy in Plant Genes

    PubMed Central

    Porceddu, Andrea; Zenoni, Sara; Camiolo, Salvatore

    2013-01-01

    Little is known about the natural selection of synonymous codons within the coding sequences of plant genes. We analyzed the distribution of synonymous codons within plant coding sequences and found that preferred codons tend to encode the more conserved and functionally important residues of plant proteins. This was consistent among several synonymous codon families and applied to genes with different expression profiles and functions. Most of the randomly chosen alternative sets of codons scored weaker associations than the actual sets of preferred codons, suggesting that codon position within plant genes and codon usage bias have coevolved to maximize translational accuracy. All these findings are consistent with the mistranslation-induced protein misfolding theory, which predicts the natural selection of highly preferred codons more frequently at sites where translation errors could compromise protein folding or functionality. Our results will provide an important insight in future studies of protein folding, molecular evolution, and transgene design for optimal expression. PMID:23695187

  17. Robust nonlinear variable selective control for networked systems

    NASA Astrophysics Data System (ADS)

    Rahmani, Behrooz

    2016-10-01

    This paper is concerned with the networked control of a class of uncertain nonlinear systems. In this way, Takagi-Sugeno (T-S) fuzzy modelling is used to extend the previously proposed variable selective control (VSC) methodology to nonlinear systems. This extension is based upon the decomposition of the nonlinear system to a set of fuzzy-blended locally linearised subsystems and further application of the VSC methodology to each subsystem. To increase the applicability of the T-S approach for uncertain nonlinear networked control systems, this study considers the asynchronous premise variables in the plant and the controller, and then introduces a robust stability analysis and control synthesis. The resulting optimal switching-fuzzy controller provides a minimum guaranteed cost on an H2 performance index. Simulation studies on three nonlinear benchmark problems demonstrate the effectiveness of the proposed method.

  18. Robust model selection and the statistical classification of languages

    NASA Astrophysics Data System (ADS)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that, for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we focus on the family of variable length Markov chain models, which includes the fixed order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we derive the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample formed by the concatenation of sub-samples of two or more stochastic processes, with most of the sub-samples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty in this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure has been to choose, by listening to the samples, a subset of the original sample which seems to best represent each language. In our application we use the full dataset without any preselection of samples.
We apply our robust methodology estimating
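    The core of the selection strategy, estimating relative entropies between samples to identify the majority law, can be sketched for first-order Markov chains. The smoothing, weighting, and chain parameters below are illustrative choices, not those of the paper:

```python
import numpy as np
from collections import Counter

def transition_probs(seq, n_states=2):
    """Empirical first-order transition matrix with add-one smoothing."""
    P = np.ones((n_states, n_states))            # Laplace smoothing
    for (a, b), c in Counter(zip(seq[:-1], seq[1:])).items():
        P[a, b] += c
    return P / P.sum(axis=1, keepdims=True)

def kl_rate(P, Q, pi):
    """Relative entropy rate between transition matrices P and Q, weights pi."""
    return float(np.sum(pi[:, None] * P * np.log(P / Q)))

rng = np.random.default_rng(3)

def sample_chain(p_stay, n=2000):
    """Binary chain that keeps its current state with probability p_stay."""
    seq, s = [], 0
    for _ in range(n):
        s = s if rng.random() < p_stay else 1 - s
        seq.append(s)
    return seq

# Two samples share the law to be retrieved; the third is contaminated.
P1 = transition_probs(sample_chain(0.9))
P2 = transition_probs(sample_chain(0.9))
P3 = transition_probs(sample_chain(0.4))

pi = np.array([0.5, 0.5])                        # uniform state weighting
print("same law:    ", kl_rate(P1, P2, pi))      # small
print("contaminated:", kl_rate(P1, P3, pi))      # large
```

    Samples drawn from the majority law cluster at small pairwise divergences, so thresholding the estimated relative entropies separates them from the contaminated samples without any manual preselection.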

  19. Robustness

    NASA Technical Reports Server (NTRS)

    Ryan, R.

    1993-01-01

    Robustness is a buzzword common to all newly proposed space system designs as well as many new commercial products. The image the word conjures up is a 'Paul Bunyan' (lumberjack) design: strong and hearty, healthy, with margins in all aspects of the design. In actuality, robustness is much broader in scope than margins, including such factors as simplicity, redundancy, desensitization to parameter variations, control of parameter variations (environment fluctuations), and operational approaches. These must be traded, together with concepts, materials, and fabrication approaches, against the criteria of performance, cost, and reliability. This includes manufacturing, assembly, processing, checkout, and operations. The design engineer or project chief is faced with finding ways and means to build robustness into an operational design, but must first understand the definition and goals of robustness. This paper deals with these issues as well as the rationale for requiring robustness.

  20. Accuracy of selected techniques for estimating ice-affected streamflow

    USGS Publications Warehouse

    Walker, John F.

    1991-01-01

    This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories, subjective and analytical, depending on the degree of judgment required. Discharge measurements were made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques were used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline records: the average discharge for the ice-affected period, and the mean and the standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.
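    The Kruskal-Wallis test used to compare the hydrographers can be sketched on synthetic error records; the H statistic is a one-way analysis of variance on ranks, compared against a chi-squared critical value. The error distributions below are invented:

```python
import numpy as np

def kruskal_wallis_H(*groups):
    """Kruskal-Wallis H statistic: one-way analysis of variance on ranks."""
    data = np.concatenate(groups)
    ranks = np.empty(data.size)
    ranks[np.argsort(data)] = np.arange(1, data.size + 1)   # no ties expected here
    n, H, start = data.size, 0.0, 0
    for g in groups:
        r = ranks[start:start + g.size]
        H += r.sum() ** 2 / g.size
        start += g.size
    return 12.0 / (n * (n + 1)) * H - 3.0 * (n + 1)

rng = np.random.default_rng(6)
# Hypothetical daily discharge errors [%] compiled by three hydrographers.
hydro_a = rng.normal(0.0, 2.0, 60)
hydro_b = rng.normal(0.3, 2.0, 60)
hydro_c = rng.normal(3.0, 2.0, 60)    # one systematically biased record

H = kruskal_wallis_H(hydro_a, hydro_b, hydro_c)
print(f"H = {H:.1f}")   # chi-squared(df=2) critical value at the 1% level: 9.21
```

    Being rank-based, the test makes no normality assumption about the daily errors, which suits streamflow records where error distributions are often skewed.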

  1. Selective Effect of Physical Fatigue on Motor Imagery Accuracy

    PubMed Central

    Di Rienzo, Franck; Collet, Christian; Hoyek, Nady; Guillot, Aymeric

    2012-01-01

    While the use of motor imagery (the mental representation of an action without overt execution) during actual training sessions is usually recommended, experimental studies examining the effect of physical fatigue on subsequent motor imagery performance are sparse and have yielded divergent findings. Here, we investigated whether the physical fatigue occurring during an intense sport training session affected motor imagery ability. Twelve swimmers (nine males, mean age 15.5 years) completed a 45 min physically fatiguing protocol in which they swam at 70% to 100% of their maximal aerobic speed. We tested motor imagery ability immediately before and after the fatiguing protocol. Participants randomly imagined performing a swim turn using internal and external visual imagery. Self-report ratings, imagery times and electrodermal responses, an index of alertness from the autonomic nervous system, were the dependent variables. Self-report ratings indicated that participants did not encounter difficulty when performing motor imagery after fatigue. However, motor imagery times were significantly shortened during the posttest compared to both pretest and actual turn times, indicating reduced timing accuracy. Examining the selective effect of physical fatigue on external visual imagery did not reveal any difference before and after fatigue, whereas significantly shorter imagined times and electrodermal responses (15% and 48% decreases, respectively; p<0.001) were observed during the posttest for internal visual imagery. A significant correlation (r = 0.64; p<0.05) was observed between motor imagery vividness (estimated through an imagery questionnaire) and autonomic responses during motor imagery after fatigue. These data support that, unlike local muscle fatigue, the physical fatigue occurring during intense sport training sessions is likely to affect motor imagery accuracy. These results might be explained by the updating of the internal representation of the motor sequence, due to temporary

  2. Accuracy of Genomic Selection in a Rice Synthetic Population Developed for Recurrent Selection Breeding.

    PubMed

    Grenier, Cécile; Cao, Tuong-Vi; Ospina, Yolima; Quintero, Constanza; Châtel, Marc Henri; Tohme, Joe; Courtois, Brigitte; Ahmadi, Nourollah

    2015-01-01

    Genomic selection (GS) is a promising strategy for enhancing genetic gain. We investigated the accuracy of genomic estimated breeding values (GEBV) in four inter-related synthetic populations that underwent several cycles of recurrent selection in an upland rice-breeding program. A total of 343 S2:4 lines extracted from those populations were phenotyped for flowering time, plant height, grain yield and panicle weight, and genotyped with an average density of one marker per 44.8 kb. The relative effect of the linkage disequilibrium (LD) and minor allele frequency (MAF) thresholds for selecting markers, the relative size of the training population (TP) and of the validation population (VP), the selected trait and the genomic prediction models (frequentist and Bayesian) on the accuracy of GEBVs was investigated in 540 cross validation experiments with 100 replicates. The effect of kinship between the training and validation populations was tested in an additional set of 840 cross validation experiments with a single genomic prediction model. LD was high (average r2 = 0.59 at 25 kb) and decreased slowly, distribution of allele frequencies at individual loci was markedly skewed toward unbalanced frequencies (MAF average value 15.2% and median 9.6%), and differentiation between the four synthetic populations was low (FST ≤0.06). The accuracy of GEBV across all cross validation experiments ranged from 0.12 to 0.54 with an average of 0.30. Significant differences in accuracy were observed among the different levels of each factor investigated. Phenotypic traits had the biggest effect, and the size of the incidence matrix had the smallest. Significant first degree interaction was observed for GEBV accuracy between traits and all the other factors studied, and between prediction models and LD, MAF and composition of the TP. The potential of GS to accelerate genetic gain and breeding options to increase the accuracy of predictions are discussed. PMID:26313446
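The cross-validation scheme behind such accuracy estimates can be sketched in a few lines. The stand-in below uses ridge regression as one possible genomic prediction model and simulated genotypes rather than the authors' data; all sizes, effect scales and the ridge penalty are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated genotype matrix (lines x markers, coded 0/1/2) and an
# additive phenotype; sizes and effect scales are arbitrary.
n_lines, n_markers = 200, 500
X = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)
y = X @ rng.normal(0.0, 0.1, n_markers) + rng.normal(0.0, 1.0, n_lines)

def gebv_accuracy(X, y, train_frac=0.8, ridge=10.0):
    """One cross-validation replicate: fit ridge regression on a random
    training population (TP) and return the correlation between predicted
    GEBV and observed phenotype in the validation population (VP)."""
    idx = rng.permutation(len(y))
    n_train = int(train_frac * len(y))
    tr, va = idx[:n_train], idx[n_train:]
    # Closed-form ridge solution: beta = (X'X + lambda*I)^-1 X'y.
    A = X[tr].T @ X[tr] + ridge * np.eye(X.shape[1])
    beta = np.linalg.solve(A, X[tr].T @ y[tr])
    return np.corrcoef(X[va] @ beta, y[va])[0, 1]

accs = [gebv_accuracy(X, y) for _ in range(20)]
print(f"mean GEBV accuracy over 20 replicates: {np.mean(accs):.2f}")
```

Repeating the replicate over many random TP/VP splits, as in the paper's 100-replicate design, gives the distribution from which an average accuracy is reported.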

  4. Accuracy of Genomic Selection in a Rice Synthetic Population Developed for Recurrent Selection Breeding

    PubMed Central

    Ospina, Yolima; Quintero, Constanza; Châtel, Marc Henri; Tohme, Joe; Courtois, Brigitte

    2015-01-01

    Genomic selection (GS) is a promising strategy for enhancing genetic gain. We investigated the accuracy of genomic estimated breeding values (GEBV) in four inter-related synthetic populations that underwent several cycles of recurrent selection in an upland rice-breeding program. A total of 343 S2:4 lines extracted from those populations were phenotyped for flowering time, plant height, grain yield and panicle weight, and genotyped with an average density of one marker per 44.8 kb. The relative effect of the linkage disequilibrium (LD) and minor allele frequency (MAF) thresholds for selecting markers, the relative size of the training population (TP) and of the validation population (VP), the selected trait and the genomic prediction models (frequentist and Bayesian) on the accuracy of GEBVs was investigated in 540 cross validation experiments with 100 replicates. The effect of kinship between the training and validation populations was tested in an additional set of 840 cross validation experiments with a single genomic prediction model. LD was high (average r2 = 0.59 at 25 kb) and decreased slowly, distribution of allele frequencies at individual loci was markedly skewed toward unbalanced frequencies (MAF average value 15.2% and median 9.6%), and differentiation between the four synthetic populations was low (FST ≤0.06). The accuracy of GEBV across all cross validation experiments ranged from 0.12 to 0.54 with an average of 0.30. Significant differences in accuracy were observed among the different levels of each factor investigated. Phenotypic traits had the biggest effect, and the size of the incidence matrix had the smallest. Significant first degree interaction was observed for GEBV accuracy between traits and all the other factors studied, and between prediction models and LD, MAF and composition of the TP. The potential of GS to accelerate genetic gain and breeding options to increase the accuracy of predictions are discussed. PMID:26313446

  5. Analysis of the Accuracy and Robustness of the Leap Motion Controller

    PubMed Central

    Weichert, Frank; Bachmann, Daniel; Rudak, Bartholomäus; Fisseler, Denis

    2013-01-01

    The Leap Motion Controller is a new device for hand gesture controlled user interfaces with declared sub-millimeter accuracy. However, up to this point its capabilities in real environments have not been analyzed. Therefore, this paper presents a first study of the Leap Motion Controller, with the main focus on the evaluation of its accuracy and repeatability. For an appropriate evaluation, a novel experimental setup was developed making use of an industrial robot with a reference pen allowing a position accuracy of 0.2 mm. With this setup, a deviation between the desired 3D position and the average measured position below 0.2 mm was obtained for static setups, and of 1.2 mm for dynamic setups. The conclusions of this analysis can inform the development of applications for the Leap Motion Controller in the field of Human-Computer Interaction. PMID:23673678

  6. Analysis of the accuracy and robustness of the leap motion controller.

    PubMed

    Weichert, Frank; Bachmann, Daniel; Rudak, Bartholomäus; Fisseler, Denis

    2013-05-14

    The Leap Motion Controller is a new device for hand gesture controlled user interfaces with declared sub-millimeter accuracy. However, up to this point its capabilities in real environments have not been analyzed. Therefore, this paper presents a first study of the Leap Motion Controller, with the main focus on the evaluation of its accuracy and repeatability. For an appropriate evaluation, a novel experimental setup was developed making use of an industrial robot with a reference pen allowing a position accuracy of 0.2 mm. With this setup, a deviation between the desired 3D position and the average measured position below 0.2 mm was obtained for static setups, and of 1.2 mm for dynamic setups. The conclusions of this analysis can inform the development of applications for the Leap Motion Controller in the field of Human-Computer Interaction.

  7. A comprehensive meta-reanalysis of the robustness of the experience-accuracy effect in clinical judgment.

    PubMed

    Spengler, Paul M; Pilipis, Lois A

    2015-07-01

    Experience is one of the most commonly studied variables in clinical judgment research. In a meta-analysis of research from 1970 to 1996 of judgments made by 4,607 participants from 74 studies, Spengler, White, Ægisdóttir, Maugherman, Anderson, et al. (2009) found an experience-accuracy fixed effect of d = .121 (95% CI [.06, .18]), indicating that with more experience, counseling and other psychologists obtain only modest gains in decision-making accuracy. We sought to conduct a more rigorous assessment of the experience-accuracy effect by synthesizing 40 years of research from 1970 to 2010, assessing the same and additional moderators, including subgroup analyses of extremes of experience, and conducting a sensitivity analysis. The judgments formed by 11,584 clinicians from 113 studies resulted in a random effects d of .146 (95% CI [.08, .21]), reflecting the robustness of only a small impact of experience on decision-making accuracy. The sensitivity analysis revealed that the effect is consistent across analysis and methodological considerations. Mixed effects metaregression revealed no statistically significant relation between 40 years of time and the experience-accuracy effect. A cumulative meta-analysis indicated that the experience-accuracy effect stabilized in the literature in the year 1999, after the accumulation of 82 studies, with no appreciable change since. We assessed a broader range of experience comparing no experience to some experience and comparing nonexperts with experts, and for differences as a function of decision making based on psychological tests; however, these and most other moderators were not significant. Implications are discussed for clinical decision-making research, training, and practice.
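A random-effects pooled effect of the kind reported above is commonly computed with the DerSimonian-Laird estimator. The sketch below illustrates that calculation on hypothetical per-study effect sizes and variances, not the data from this meta-analysis.

```python
import numpy as np

def dersimonian_laird(d, v):
    """Random-effects pooled effect size (DerSimonian-Laird) with a 95% CI.
    d: per-study effect sizes, v: their sampling variances."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v                                    # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - d_fixed) ** 2)             # heterogeneity statistic
    k = len(d)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)             # between-study variance
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_star * d) / np.sum(w_star)
    se = 1.0 / np.sqrt(np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study standardized mean differences and variances.
d = [0.10, 0.25, 0.05, 0.30, 0.12]
v = [0.02, 0.05, 0.01, 0.04, 0.03]
pooled, ci = dersimonian_laird(d, v)
print(f"pooled d = {pooled:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")
```

When the heterogeneity statistic Q falls below its degrees of freedom, the between-study variance estimate is zero and the random-effects result coincides with the fixed-effect one.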

  8. Accuracy and robustness of a simple algorithm to measure vessel diameter from B-mode ultrasound images.

    PubMed

    Hunt, Brian E; Flavin, Daniel C; Bauschatz, Emily; Whitney, Heather M

    2016-06-01

    Measurement of changes in arterial vessel diameter can be used to assess the state of cardiovascular health, but the use of such measurements as biomarkers is contingent upon the accuracy and robustness of the measurement. This work presents a simple algorithm for measuring diameter from B-mode images derived from vascular ultrasound. The algorithm is based upon Gaussian curve fitting and a Viterbi search process. We assessed the accuracy of the algorithm by measuring the diameter of a digital reference object (DRO) and ultrasound-derived images of a carotid artery. We also assessed the robustness of the algorithm by manipulating the quality of the image. Across a broad range of signal-to-noise ratios (SNR) and with varying image edge error, the algorithm measured vessel diameter within 0.7% of the creation dimensions of the DRO. A similar level of difference (0.8%) was observed when an ultrasound image was used. When the SNR dropped to 18 dB, measurement error increased to 1.3%. When edge position was varied by as much as 10%, measurement error remained between 0.68% and 0.75%. All these errors fall well within the margin of error established by the medical physics community for quantitative ultrasound measurements. We conclude that this simple algorithm provides consistent and accurate measurement of lumen diameter from B-mode images across a broad range of image quality. PMID:27055985
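The core idea of estimating lumen diameter from a transverse B-mode intensity profile can be sketched as below. This is a simplified stand-in that locates the two wall echoes with a half-maximum centroid rather than the paper's Gaussian-fit/Viterbi pipeline, and all signal parameters (wall spacing, echo width, noise level) are made up.

```python
import numpy as np

def gaussian(x, mu, sigma, amp=1.0):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Synthetic transverse intensity profile: two bright wall echoes 6 mm
# apart separated by a dark lumen, plus mild noise (all values made up).
x = np.arange(0.0, 10.0, 0.01)  # depth in mm
profile = gaussian(x, 2.0, 0.15) + gaussian(x, 8.0, 0.15)
profile += 0.02 * np.random.default_rng(1).normal(size=x.size)

def wall_center(x, y):
    """Estimate an echo centre as the intensity-weighted centroid of the
    samples above half of the local maximum."""
    mask = y > 0.5 * y.max()
    return np.sum(x[mask] * y[mask]) / np.sum(y[mask])

# Split the profile at mid-depth and locate one wall echo in each half.
mid = x.size // 2
diameter = wall_center(x[mid:], profile[mid:]) - wall_center(x[:mid], profile[:mid])
print(f"estimated lumen diameter: {diameter:.2f} mm")
```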

  9. Jointly Feature Learning and Selection for Robust Tracking via a Gating Mechanism

    PubMed Central

    Zhong, Bineng; Zhang, Jun; Wang, Pengfei; Du, Jixiang; Chen, Duansheng

    2016-01-01

    To achieve effective visual tracking, a robust feature representation composed of two separate components (i.e., feature learning and selection) for an object is one of the key issues. Typically, a common assumption used in visual tracking is that the raw video sequences are clean, while real-world data contains significant noise and irrelevant patterns. Consequently, the learned features may not all be relevant and may be noisy. To address this problem, we propose a novel visual tracking method via a point-wise gated convolutional deep network (CPGDN) that jointly performs feature learning and feature selection in a unified framework. The proposed method performs dynamic feature selection on raw features through a gating mechanism. Therefore, the proposed method can adaptively focus on the task-relevant patterns (i.e., a target object), while ignoring the task-irrelevant patterns (i.e., the surrounding background of a target object). Specifically, inspired by transfer learning, we first pre-train an object appearance model offline to learn generic image features and then transfer rich feature hierarchies from the offline pre-trained CPGDN into online tracking. In online tracking, the pre-trained CPGDN model is fine-tuned to adapt to the specific tracked objects. Finally, to alleviate the tracker drifting problem, inspired by the observation that a visual target should be an object, we combine an edge box-based object proposal method to further improve the tracking accuracy. Extensive evaluation on the widely used CVPR2013 tracking benchmark validates the robustness and effectiveness of the proposed method. PMID:27575684
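The point-wise gating idea can be illustrated in a few lines: each raw feature is multiplied by a sigmoid gate, so features the gate judges irrelevant are attenuated. This is a schematic stand-in with random, untrained parameters, not the CPGDN architecture itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pointwise_gated(features, w, b):
    """Point-wise gating: each raw feature is scaled by a gate in (0, 1),
    so task-irrelevant responses can be suppressed while task-relevant
    ones pass through largely unchanged."""
    gates = sigmoid(features @ w + b)
    return features * gates

# Hypothetical raw feature maps flattened to (n_samples, n_features);
# the gate parameters are random here, whereas a real network learns them.
features = rng.normal(size=(4, 8))
w = rng.normal(scale=0.5, size=(8, 8))
b = np.zeros(8)
gated = pointwise_gated(features, w, b)
print(gated.shape)
```

Because every gate lies strictly between 0 and 1, the gated response never exceeds the raw response in magnitude; training shapes which features are passed through.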

  10. Jointly Feature Learning and Selection for Robust Tracking via a Gating Mechanism.

    PubMed

    Zhong, Bineng; Zhang, Jun; Wang, Pengfei; Du, Jixiang; Chen, Duansheng

    2016-01-01

    To achieve effective visual tracking, a robust feature representation composed of two separate components (i.e., feature learning and selection) for an object is one of the key issues. Typically, a common assumption used in visual tracking is that the raw video sequences are clean, while real-world data contains significant noise and irrelevant patterns. Consequently, the learned features may not all be relevant and may be noisy. To address this problem, we propose a novel visual tracking method via a point-wise gated convolutional deep network (CPGDN) that jointly performs feature learning and feature selection in a unified framework. The proposed method performs dynamic feature selection on raw features through a gating mechanism. Therefore, the proposed method can adaptively focus on the task-relevant patterns (i.e., a target object), while ignoring the task-irrelevant patterns (i.e., the surrounding background of a target object). Specifically, inspired by transfer learning, we first pre-train an object appearance model offline to learn generic image features and then transfer rich feature hierarchies from the offline pre-trained CPGDN into online tracking. In online tracking, the pre-trained CPGDN model is fine-tuned to adapt to the specific tracked objects. Finally, to alleviate the tracker drifting problem, inspired by the observation that a visual target should be an object, we combine an edge box-based object proposal method to further improve the tracking accuracy. Extensive evaluation on the widely used CVPR2013 tracking benchmark validates the robustness and effectiveness of the proposed method. PMID:27575684

  11. Robust population inversion by polarization selective pulsed excitation.

    PubMed

    Mantei, D; Förstner, J; Gordon, S; Leier, Y A; Rai, A K; Reuter, D; Wieck, A D; Zrenner, A

    2015-05-22

    The coherent state preparation and control of single quantum systems is an important prerequisite for the implementation of functional quantum devices. Prominent examples for such systems are semiconductor quantum dots, which exhibit a fine-structure-split single-exciton state and a V-type three-level structure, given by a common ground state and two distinguishable and separately excitable transitions. In this work we introduce a novel concept for the preparation of a robust inversion by sequential excitation via distinguishable paths in a V-type system.

  12. Balancing accuracy and efficiency in selecting vibrational configuration interaction basis states using vibrational perturbation theory

    NASA Astrophysics Data System (ADS)

    Sibaev, Marat; Crittenden, Deborah L.

    2016-08-01

    This work describes the benchmarking of a vibrational configuration interaction (VCI) algorithm that combines the favourable computational scaling of VPT2 with the algorithmic robustness of VCI, in which VCI basis states are selected according to the magnitude of their contribution to the VPT2 energy, for the ground state and fundamental excited states. Particularly novel aspects of this work include: expanding the potential to 6th order in normal mode coordinates, using a double-iterative procedure in which configuration selection and VCI wavefunction updates are performed iteratively (micro-iterations) over a range of screening threshold values (macro-iterations), and characterisation of computational resource requirements as a function of molecular size. Computational costs may be further reduced by a priori truncation of the VCI wavefunction according to maximum extent of mode coupling, along with discarding negligible force constants and VCI matrix elements, and formulating the wavefunction in a harmonic oscillator product basis to enable efficient evaluation of VCI matrix elements. Combining these strategies, we define a series of screening procedures that scale as O(N_mode^6)-O(N_mode^9) in run time and O(N_mode^6)-O(N_mode^7) in memory, depending on the desired level of accuracy. Our open-source code is freely available for download from http://www.sourceforge.net/projects/pyvci-vpt2.
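The selection criterion described above, ranking basis states by the magnitude of their perturbative energy contribution and keeping those above a screening threshold, can be sketched as follows. The energies, couplings and thresholds here are hypothetical placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical configuration energies and anharmonic couplings to the
# reference (ground) state, in arbitrary units.
e0 = 0.0
e = rng.uniform(5.0, 50.0, size=1000)
h = rng.normal(scale=0.5, size=1000)

def select_configurations(e0, e, h, threshold):
    """Keep configurations whose second-order perturbative contribution
    |<0|H|i>|^2 / |E0 - Ei| exceeds the screening threshold."""
    contrib = h ** 2 / np.abs(e0 - e)
    return np.flatnonzero(contrib > threshold)

# Tightening the threshold trades accuracy for a smaller VCI basis.
for threshold in (1e-2, 1e-3, 1e-4):
    kept = select_configurations(e0, e, h, threshold)
    print(f"threshold {threshold:.0e}: kept {kept.size} of {e.size} states")
```

Sweeping the threshold from loose to tight mirrors the macro-iterations of the double-iterative procedure, with the VCI wavefunction re-solved at each stage.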

  13. Selecting fillers on emotional appearance improves lineup identification accuracy.

    PubMed

    Flowe, Heather D; Klatt, Thimna; Colloff, Melissa F

    2014-12-01

    Mock witnesses sometimes report using criminal stereotypes to identify a face from a lineup, a tendency known as criminal face bias. Faces are perceived as criminal-looking if they appear angry. We tested whether matching the emotional appearance of the fillers to an angry suspect can reduce criminal face bias. In Study 1, mock witnesses (n = 226) viewed lineups in which the suspect had an angry, happy, or neutral expression, and we varied whether the fillers matched the expression. An additional group of participants (n = 59) rated the faces on criminal and emotional appearance. As predicted, mock witnesses tended to identify suspects who appeared angrier and more criminal-looking than the fillers. This tendency was reduced when the lineup fillers matched the emotional appearance of the suspect. Study 2 extended the results, testing whether the emotional appearance of the suspect and fillers affects recognition memory. Participants (n = 1,983) studied faces and took a lineup test in which the emotional appearance of the target and fillers was varied between subjects. Discrimination accuracy was enhanced when the fillers matched an angry target's emotional appearance. We conclude that lineup member emotional appearance plays a critical role in the psychology of lineup identification. The fillers should match an angry suspect's emotional appearance to improve lineup identification accuracy.

  14. Robust tie points selection for InSAR image coregistration

    NASA Astrophysics Data System (ADS)

    Skanderi, Takieddine; Chabira, Boulerbah; Afifa, Belkacem; Belhadj Aissa, Aichouche

    2013-10-01

    Image coregistration is an important step in SAR interferometry, which is a well known method for DEM generation and surface displacement monitoring. A practical and widely used automatic coregistration algorithm is based on selecting a number of tie points in the master image and looking for the correspondence of each point in the slave image using a correlation technique. The characteristics of these points, their number and their distribution have a great impact on the reliability of the estimated transformation. In this work, we present a method for automatic selection of suitable tie points that are well distributed over the common area without reducing the desired number of tie points. First we select candidate points using the Harris operator. Then from these candidates we select tie points according to their cornerness measure (the highest first). Once a tie point is selected, its correspondence is searched for in the slave image; if the maximum of the similarity measure is less than a given threshold or lies at the border of the search window, the point is discarded and we proceed to the next Harris point. Otherwise, the cornerness of the remaining candidate Harris points is multiplied by a spatially radially increasing function centered at the selected point, to disadvantage points in a neighborhood whose radius is determined from the size of the common area and the desired number of points. This is repeated until the desired number of points is selected. Results for an ERS-1/2 tandem pair are presented and discussed.
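The greedy select-and-penalize loop described above might look like the following sketch. The Gaussian shape of the radial penalty and all parameters are illustrative assumptions, Harris responses are replaced by random scores, and the correlation-matching step against the slave image is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical Harris candidates: positions in a 512x512 master image
# and their cornerness scores (random stand-ins for real responses).
pts = rng.uniform(0.0, 512.0, size=(400, 2))
cornerness = rng.uniform(0.0, 1.0, size=400)

def select_tie_points(pts, scores, n_points, radius):
    """Greedy selection: repeatedly take the strongest candidate, then
    multiply remaining scores by a radially increasing factor (near 0 at
    the pick, approaching 1 far away) so later picks spread out."""
    scores = scores.astype(float).copy()
    selected = []
    for _ in range(n_points):
        i = int(np.argmax(scores))
        selected.append(i)
        d = np.linalg.norm(pts - pts[i], axis=1)
        scores = scores * (1.0 - np.exp(-(d / radius) ** 2))
        scores[selected] = -1.0  # never re-pick an already chosen point
    return selected

chosen = select_tie_points(pts, cornerness, n_points=20, radius=64.0)
print(f"{len(chosen)} tie points selected")
```

In the full algorithm, a candidate whose correlation maximum in the slave image fails the threshold test would be discarded before the penalty step rather than added to the selection.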

  15. Integration of flow studies for robust selection of mechanoresponsive genes.

    PubMed

    Maimari, Nataly; Pedrigi, Ryan M; Russo, Alessandra; Broda, Krysia; Krams, Rob

    2016-03-01

    Blood flow is an essential contributor to plaque growth, composition and initiation. It is sensed by endothelial cells, which react to blood flow by expressing > 1000 genes. The sheer number of genes implies that one needs genomic techniques to unravel their response in disease. Individual genomic studies have been performed but lack sufficient power to identify subtle changes in gene expression. In this study, we investigated whether a systematic meta-analysis of available microarray studies can improve their consistency. We identified 17 studies using microarrays, of which six were performed in vivo and 11 in vitro. The in vivo studies were disregarded due to the lack of the shear profile. Of the in vitro studies, a cross-platform integration of human studies (HUVECs in flow cells) showed high concordance (> 90%). The human data set identified > 1600 genes to be shear responsive, more than any other study, and in this gene set all known mechanosensitive genes and pathways were present. A detailed network analysis indicated a power distribution (e.g. the presence of hubs), without a hierarchical organisation. The average cluster coefficient was high and further analysis indicated an aggregation of 3- and 4-element motifs, indicating a high prevalence of feedback and feed-forward loops, similar to prokaryotic cells. In conclusion, this initial study presented a novel method to integrate human-based mechanosensitive studies to increase their power. The robust network was large, contained all known mechanosensitive pathways and its structure revealed hubs, and a large aggregate of feedback and feed-forward loops. PMID:26842798

  16. Selecting reliable and robust freshwater macroalgae for biomass applications.

    PubMed

    Lawton, Rebecca J; de Nys, Rocky; Paul, Nicholas A

    2013-01-01

    Intensive cultivation of freshwater macroalgae is likely to increase with the development of an algal biofuels industry and algal bioremediation. However, target freshwater macroalgae species suitable for large-scale intensive cultivation have not yet been identified. Therefore, as a first step to identifying target species, we compared the productivity, growth and biochemical composition of three species representative of key freshwater macroalgae genera across a range of cultivation conditions. We then selected a primary target species and assessed its competitive ability against other species over a range of stocking densities. Oedogonium had the highest productivity (8.0 g ash free dry weight m⁻² day⁻¹), lowest ash content (3-8%), lowest water content (fresh weight:dry weight ratio of 3.4), highest carbon content (45%) and highest bioenergy potential (higher heating value 20 MJ/kg) compared to Cladophora and Spirogyra. The higher productivity of Oedogonium relative to Cladophora and Spirogyra was consistent when algae were cultured with and without the addition of CO₂ across three aeration treatments. Therefore, Oedogonium was selected as our primary target species. The competitive ability of Oedogonium was assessed by growing it in bi-cultures and polycultures with Cladophora and Spirogyra over a range of stocking densities. Cultures were initially stocked with equal proportions of each species, but after three weeks of growth the proportion of Oedogonium had increased to at least 96% (±7 S.E.) in Oedogonium-Spirogyra bi-cultures, 86% (±16 S.E.) in Oedogonium-Cladophora bi-cultures and 82% (±18 S.E.) in polycultures. The high productivity, bioenergy potential and competitive dominance of Oedogonium make this species an ideal freshwater macroalgal target for large-scale production and a valuable biomass source for bioenergy applications. These results demonstrate that freshwater macroalgae are thus far an under-utilised feedstock with much potential

  17. Selecting Reliable and Robust Freshwater Macroalgae for Biomass Applications

    PubMed Central

    Lawton, Rebecca J.; de Nys, Rocky; Paul, Nicholas A.

    2013-01-01

    Intensive cultivation of freshwater macroalgae is likely to increase with the development of an algal biofuels industry and algal bioremediation. However, target freshwater macroalgae species suitable for large-scale intensive cultivation have not yet been identified. Therefore, as a first step to identifying target species, we compared the productivity, growth and biochemical composition of three species representative of key freshwater macroalgae genera across a range of cultivation conditions. We then selected a primary target species and assessed its competitive ability against other species over a range of stocking densities. Oedogonium had the highest productivity (8.0 g ash free dry weight m−2 day−1), lowest ash content (3–8%), lowest water content (fresh weight:dry weight ratio of 3.4), highest carbon content (45%) and highest bioenergy potential (higher heating value 20 MJ/kg) compared to Cladophora and Spirogyra. The higher productivity of Oedogonium relative to Cladophora and Spirogyra was consistent when algae were cultured with and without the addition of CO2 across three aeration treatments. Therefore, Oedogonium was selected as our primary target species. The competitive ability of Oedogonium was assessed by growing it in bi-cultures and polycultures with Cladophora and Spirogyra over a range of stocking densities. Cultures were initially stocked with equal proportions of each species, but after three weeks of growth the proportion of Oedogonium had increased to at least 96% (±7 S.E.) in Oedogonium-Spirogyra bi-cultures, 86% (±16 S.E.) in Oedogonium-Cladophora bi-cultures and 82% (±18 S.E.) in polycultures. The high productivity, bioenergy potential and competitive dominance of Oedogonium make this species an ideal freshwater macroalgal target for large-scale production and a valuable biomass source for bioenergy applications. These results demonstrate that freshwater macroalgae are thus far an under-utilised feedstock with much potential

  18. Robust Bayesian Fluorescence Lifetime Estimation, Decay Model Selection and Instrument Response Determination for Low-Intensity FLIM Imaging

    PubMed Central

    Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.

    2016-01-01

    We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM), and particular attention has been paid to modelling the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322

  19. BUILDING ROBUST APPEARANCE MODELS USING ON-LINE FEATURE SELECTION

    SciTech Connect

    PORTER, REID B.; LOVELAND, ROHAN; ROSTEN, ED

    2007-01-29

    In many tracking applications, adapting the target appearance model over time can improve performance. This approach is most popular in high frame rate video applications where latent variables, related to the object's appearance (e.g., orientation and pose), vary slowly from one frame to the next. In these cases the appearance model and the tracking system are tightly integrated, and latent variables are often included as part of the tracking system's dynamic model. In this paper we describe our efforts to track cars in low frame rate data (1 frame/second) acquired from a highly unstable airborne platform. Due to the low frame rate, and poor image quality, the appearance of a particular vehicle varies greatly from one frame to the next. This leads us to a different problem: how can we build the best appearance model from all instances of a vehicle we have seen so far? The best appearance model should maximize the future performance of the tracking system, and maximize the chances of reacquiring the vehicle once it leaves the field of view. We propose an online feature selection approach to this problem and investigate the performance and computational trade-offs with a real-world dataset.

  20. Enhancement of the accuracy of the (P-ω) method through the implementation of a nonlinear robust observer

    NASA Astrophysics Data System (ADS)

    Kfoury, G. A.; Chalhoub, N. G.; Henein, N. A.; Bryzik, W.

    2006-04-01

    The (P-ω) method is a model-based approach developed for determining the instantaneous friction torque in internal combustion engines. This scheme requires measurements of the cylinder gas pressure, the engine load torque, the crankshaft angular displacement and its time derivatives. The effects of the higher order dynamics of the crank-slider mechanism on the measured angular motion of the crankshaft have caused the (P-ω) method to yield erroneous results, especially, at high engine speeds. To alleviate this problem, a nonlinear sliding mode observer has been developed herein to accurately estimate the rigid and flexible motions of the piston-assembly/connecting-rod/crankshaft mechanism of a single cylinder engine. The observer has been designed to yield a robust performance in the presence of disturbances and modeling imprecision. The digital simulation results, generated under transient conditions representing a decrease in the engine speed, have illustrated the rapid convergence of the estimated state variables to the actual ones in the presence of both structured and unstructured uncertainties. Moreover, this study has proven that the use of the estimated rather than the measured angular displacement of the crankshaft and its time derivatives can significantly improve the accuracy of the (P-ω) method in determining the instantaneous engine friction torque.

  1. Does feature selection improve classification accuracy? Impact of sample size and feature selection on classification using anatomical magnetic resonance images.

    PubMed

    Chu, Carlton; Hsu, Ai-Ling; Chou, Kun-Hsien; Bandettini, Peter; Lin, Chingpo

    2012-03-01

    There are growing numbers of studies using machine learning approaches to characterize patterns of anatomical difference discernible from neuroimaging data. The high dimensionality of image data often raises a concern that feature selection is needed to obtain optimal accuracy. Among previous studies, mostly using fixed sample sizes, some show greater predictive accuracies with feature selection, whereas others do not. In this study, we compared four common feature selection methods: (1) pre-selected regions of interest (ROIs) based on prior knowledge; (2) univariate t-test filtering; (3) recursive feature elimination (RFE); and (4) t-test filtering constrained by ROIs. The predictive accuracies achieved from different sample sizes, with and without feature selection, were compared statistically. To demonstrate the effect, we used grey matter segmented from the T1-weighted anatomical scans collected by the Alzheimer's Disease Neuroimaging Initiative (ADNI) as the input features to a linear support vector machine classifier. The objective was to characterize the patterns of difference between Alzheimer's disease (AD) patients and cognitively normal subjects, and also between mild cognitive impairment (MCI) patients and normal subjects. In addition, we compared the classification accuracies between MCI patients who converted to AD and MCI patients who did not convert within the period of 12 months. Predictive accuracies from the two data-driven feature selection methods (t-test filtering and RFE) were no better than those achieved using whole-brain data. We showed that we could achieve the most accurate characterizations by using prior knowledge of where to expect neurodegeneration (hippocampus and parahippocampal gyrus). Therefore, feature selection does improve classification accuracy, but the benefit depends on the method adopted. In general, larger sample sizes yielded higher accuracies, with less advantage obtained by using feature selection.
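
    The univariate t-test filter compared in this study can be sketched as follows; for a self-contained example we pair it with a nearest-centroid classifier rather than the linear SVM used in the paper, and the data are synthetic:

```python
import math
import random

def t_score(xs, ys):
    """Absolute Welch t-statistic between two samples of one feature."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((v - mx) ** 2 for v in xs) / (nx - 1)
    vy = sum((v - my) ** 2 for v in ys) / (ny - 1)
    return abs(mx - my) / math.sqrt(vx / nx + vy / ny + 1e-12)

def classify(Xa, Xb, query, k=5):
    """t-test filtering: keep the k features with the largest group
    difference, then assign the query to the nearest class centroid."""
    d = len(Xa[0])
    scores = [(t_score([r[j] for r in Xa], [r[j] for r in Xb]), j) for j in range(d)]
    keep = [j for _, j in sorted(scores, reverse=True)[:k]]
    ca = [sum(r[j] for r in Xa) / len(Xa) for j in keep]
    cb = [sum(r[j] for r in Xb) / len(Xb) for j in keep]
    da = sum((query[j] - c) ** 2 for j, c in zip(keep, ca))
    db = sum((query[j] - c) ** 2 for j, c in zip(keep, cb))
    return "A" if da < db else "B"

# Synthetic data: only the first 3 of 20 features carry signal for class A.
random.seed(0)
Xa = [[random.gauss(1.5 if j < 3 else 0.0, 1.0) for j in range(20)] for _ in range(30)]
Xb = [[random.gauss(0.0, 1.0) for j in range(20)] for _ in range(30)]
label = classify(Xa, Xb, [1.5] * 3 + [0.0] * 17)
```

The filter is applied once on the training groups; a fair evaluation would rerun it inside each cross-validation fold.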

  2. Accuracy-rate tradeoffs: how do enzymes meet demands of selectivity and catalytic efficiency?

    PubMed

    Tawfik, Dan S

    2014-08-01

    I discuss some physico-chemical and evolutionary aspects of enzyme accuracy (selectivity, specificity) and speed (turnover rate, processivity). Accuracy can be a beneficial side-product of active sites being refined to proficiently convert a given substrate into one product. However, exclusion of undesirable, non-cognate substrates is also an explicitly evolved trait that may come with a cost. I define two schematic mechanisms. Ground-state discrimination applies to enzymes where selectivity is achieved primarily at the level of substrate binding. Exemplified by DNA methyltransferases and the ribosome, ground-state discrimination imposes strong accuracy-rate tradeoffs. Transition-state discrimination, by contrast, applies to relatively small substrates where substrate binding and chemistry are efficiently coupled, and evokes weaker tradeoffs. Overall, the mechanistic, structural and evolutionary basis of enzymatic accuracy-rate tradeoffs merits deeper understanding.

  3. Some scale-free networks could be robust under selective node attacks

    NASA Astrophysics Data System (ADS)

    Zheng, Bojin; Huang, Dan; Li, Deyi; Chen, Guisheng; Lan, Wenfei

    2011-04-01

    It is a mainstream idea that scale-free networks are fragile under selective attacks. The Internet is a typical real-world scale-free network, yet it never collapses under the selective attacks of computer viruses and hackers. This observation departs from the deduction above, because that deduction assumes the same cost to delete an arbitrary node. Hence this paper discusses the behavior of scale-free networks under selective node attacks with different costs. Through experiments on five complex networks, we show that a scale-free network can be robust under selective node attacks; furthermore, the more compact the network and the larger the average degree, the more robust the network; at the same average degree, the more compact the network, the more robust it is. This result enriches the theory of network invulnerability, can be used to build robust social, technological and biological networks, and also has the potential to help find drug targets.
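
    The notion of a selective (highest-degree-first) node attack and its effect on connectivity can be sketched as follows; the two toy topologies are ours, chosen only to contrast a hub-dependent network with a homogeneous one:

```python
def giant_component_fraction(adj):
    """Fraction of surviving nodes in the largest connected component."""
    seen, best = set(), 0
    for s in adj:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best / len(adj) if adj else 0.0

def selective_attack(adj, fraction):
    """Repeatedly delete the highest-degree node, then measure the damage."""
    adj = {u: set(vs) for u, vs in adj.items()}
    for _ in range(int(fraction * len(adj))):
        u = max(adj, key=lambda n: len(adj[n]))
        for v in adj[u]:
            adj[v].discard(u)
        del adj[u]
    return giant_component_fraction(adj)

# A star (one hub) shatters when its hub is removed; a ring survives.
star = {0: set(range(1, 21)), **{i: {0} for i in range(1, 21)}}
ring = {i: {(i - 1) % 21, (i + 1) % 21} for i in range(21)}
star_left = selective_attack(star, 0.05)   # hub removed -> isolated leaves
ring_left = selective_attack(ring, 0.05)   # still one connected path
```

Modeling per-node deletion cost, the paper's actual point, would replace the plain degree criterion with a degree-per-cost criterion.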

  4. On the selection of optimal feature region set for robust digital image watermarking.

    PubMed

    Tsai, Jen-Sheng; Huang, Win-Bin; Kuo, Yau-Hwang

    2011-03-01

    A novel feature region selection method for robust digital image watermarking is proposed in this paper. The method aims to select a nonoverlapping feature region set that has the greatest robustness against various attacks and preserves image quality as much as possible after being watermarked. It first performs a simulated attacking procedure using some predefined attacks to evaluate the robustness of every candidate feature region. According to the evaluation results, it then adopts a track-with-pruning procedure to search for a minimal primary feature set that resists the largest number of the predefined attacks. To enhance resistance to undefined attacks under the constraint of preserving image quality, the primary feature set is then extended by adding auxiliary feature regions. This extension is formulated as a multidimensional knapsack problem and solved by a genetic-algorithm-based approach. The experimental results for StirMark attacks on several benchmark images support our expectation that the primary feature set can resist all the predefined attacks and that its extension enhances robustness against undefined attacks. Compared with some well-known feature-based methods, the proposed method exhibits better performance in robust digital watermarking.
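
    The record casts auxiliary-region selection as a multidimensional knapsack problem solved by a genetic algorithm. A generic GA for that problem class (not the authors' encoding or operators; the instance below is made up) might look like:

```python
import random

def ga_knapsack(values, costs, caps, pop_size=40, gens=60, seed=1):
    """0/1 multidimensional knapsack via a simple genetic algorithm:
    maximize total value subject to per-dimension capacity limits."""
    rng = random.Random(seed)
    n, dims = len(values), len(caps)

    def fitness(bits):
        use = [sum(bits[i] * costs[i][d] for i in range(n)) for d in range(dims)]
        if any(u > c for u, c in zip(use, caps)):
            return -1  # infeasible individuals are penalized
        return sum(bits[i] * values[i] for i in range(n))

    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        nxt = population[:6]                       # elitism
        while len(nxt) < pop_size:
            a, b = rng.sample(population[:20], 2)  # truncation selection
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < 0.2:
                child[rng.randrange(n)] ^= 1       # bit-flip mutation
            nxt.append(child)
        population = nxt
    best = max(population, key=fitness)
    return best, fitness(best)

values = [10, 7, 5, 9, 6, 8, 3, 4]                     # per-region benefit
costs = [(3, 5), (2, 4), (1, 1), (4, 2), (3, 3),
         (5, 4), (1, 2), (2, 1)]                       # two resource dimensions
caps = (10, 10)
best, best_value = ga_knapsack(values, costs, caps)
```

In the watermarking setting, "value" would be a region's estimated robustness and the capacity dimensions would encode the image-quality budget.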

  5. Effects of implant angulation, material selection, and impression technique on impression accuracy: a preliminary laboratory study.

    PubMed

    Rutkunas, Vygandas; Sveikata, Kestutis; Savickas, Raimondas

    2012-01-01

    The aim of this preliminary laboratory study was to evaluate the effects of 5- and 25-degree implant angulations in simulated clinical casts on an impression's accuracy when using different impression materials and tray selections. A convenience sample of each implant angulation group was selected for both open and closed trays in combination with one polyether and two polyvinyl siloxane impression materials. The influence of material and technique appeared to be significant for both 5- and 25-degree angulations (P < .05), and increased angulation tended to decrease impression accuracy. The open-tray technique was more accurate with highly nonaxially oriented implants for the small sample size investigated.

  6. Robust cyclohexanone selective chemiresistors based on single-walled carbon nanotubes.

    PubMed

    Frazier, Kelvin M; Swager, Timothy M

    2013-08-01

    Functionalized single-walled carbon nanotube (SWCNT)-based chemiresistors are reported as highly robust and sensitive gas sensors for the selective detection of cyclohexanone, a target analyte for explosive detection. The trifunctional selector has three important properties: it noncovalently functionalizes SWCNTs through cofacial π-π interactions, it binds cyclohexanone via hydrogen bonding (mechanistic studies were performed), and it improves the overall robustness of SWCNT-based chemiresistors (e.g., to humidity and heat). Our sensors produced reversible and reproducible responses in less than 30 s to 10 ppm of cyclohexanone and displayed an average theoretical limit of detection (LOD) of 5 ppm.

  7. Simulation-based planning for peacekeeping operations: selection of robust plans

    NASA Astrophysics Data System (ADS)

    Cekova, Cvetelina; Chandrasekaran, B.; Josephson, John; Pantaleev, Aleksandar

    2006-05-01

    This research is part of a proposed shift in emphasis in decision support from optimality to robustness. Computer simulation is emerging as a useful tool in planning courses of action (COAs). Simulations require domain models, but there is an inevitable gap between models and reality: some aspects of reality are not represented at all, and what is represented may contain errors. As models are aggregated from multiple sources, the decision maker is further insulated from even an awareness of model weaknesses. To realize the full power of computer simulations to support decision making, decision support systems should support the planner in exploring the robustness of COAs in the face of potential weaknesses in simulation models. This paper demonstrates a method of exploring the robustness of a COA with respect to specific model assumptions whose accuracy the decision maker might doubt. The domain is that of peacekeeping in a country where three different demographic groups coexist in tension. An external peacekeeping force strives to achieve stability, an improved economy, and a higher degree of democracy in the country. A proposed COA for such a force is simulated multiple times while varying the assumptions. A visual data analysis tool is used to explore COA robustness. The aim is to help the decision maker choose a COA that is likely to be successful even in the face of potential errors in the assumptions in the models.
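
    The core idea, preferring a COA that degrades gracefully when a model assumption is wrong over one that is merely best under nominal assumptions, can be shown in miniature (the payoff functions are entirely hypothetical):

```python
def worst_case(payoff, errors):
    """Robustness score: the worst outcome over the assumed error range."""
    return min(payoff(e) for e in errors)

# Payoff as a function of the error in one model assumption.
plans = {
    "aggressive": lambda e: 10 - 8 * abs(e),  # best nominally, very sensitive
    "cautious":   lambda e: 7 - 1 * abs(e),   # weaker nominally, nearly flat
}
errors = [i / 10 for i in range(-10, 11)]     # sweep the assumption error

nominal_choice = max(plans, key=lambda p: plans[p](0))
robust_choice = max(plans, key=lambda p: worst_case(plans[p], errors))
```

The paper's simulations play the role of these payoff functions, with the visual analysis tool standing in for the `max`/`min` comparison.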

  8. A robust optimisation approach to the problem of supplier selection and allocation in outsourcing

    NASA Astrophysics Data System (ADS)

    Fu, Yelin; Keung Lai, Kin; Liang, Liang

    2016-03-01

    We formulate the supplier selection and allocation problem in outsourcing under an uncertain environment as a stochastic programming problem. Both the decision-maker's attitude towards risk and the penalty parameters for demand deviation are considered in the objective function. A service level agreement, upper bound for each selected supplier's allocation and the number of selected suppliers are considered as constraints. A novel robust optimisation approach is employed to solve this problem under different economic situations. Illustrative examples are presented with managerial implications highlighted to support decision-making.

  9. How Reliable is Bayesian Model Averaging Under Noisy Data? Statistical Assessment and Implications for Robust Model Selection

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Wöhling, Thomas; Nowak, Wolfgang

    2014-05-01

    Bayesian model averaging ranks the predictive capabilities of alternative conceptual models based on Bayes' theorem. The individual models are weighted with their posterior probability of being the best one in the considered set of models. Finally, their predictions are combined into a robust weighted average and the predictive uncertainty can be quantified. This rigorous procedure does not, however, yet account for possible instabilities due to measurement noise in the calibration data set. This is a major drawback, since posterior model weights may lack robustness in the face of noisy data, which may compromise the reliability of model ranking. We present a new statistical concept to account for measurement noise as a source of uncertainty for the weights in Bayesian model averaging. Our suggested upgrade reflects the limited information content of data for the purpose of model selection. It allows us to assess the significance of the determined posterior model weights, the confidence in model selection, and the accuracy of the quantified predictive uncertainty. Our approach rests on a brute-force Monte Carlo framework. We determine the robustness of model weights against measurement noise by repeatedly perturbing the observed data with random realizations of measurement error. Then, we analyze the induced variability in posterior model weights and introduce this "weighting variance" as an additional term in the overall prediction uncertainty analysis scheme. We further determine the theoretical upper limit on the performance of the model set that is imposed by measurement noise. As an extension to the merely relative model ranking, this analysis provides a measure of absolute model performance. To finally decide whether better data or longer time series are needed to ensure a robust basis for model selection, we resample the measurement time series and assess the convergence of model weights for increasing time series length.
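
    The brute-force perturbation scheme described above can be sketched directly (iid Gaussian errors, two hypothetical models, and a simple likelihood-based weight as a stand-in for the full Bayesian computation):

```python
import math
import random

def bma_weights(predictions, data, sigma):
    """Posterior-style model weights from iid Gaussian error likelihoods,
    assuming equal prior model probabilities."""
    likes = []
    for pred in predictions:
        sse = sum((p - d) ** 2 for p, d in zip(pred, data))
        likes.append(math.exp(-0.5 * sse / sigma ** 2))
    total = sum(likes)
    return [l / total for l in likes]

def weight_variability(predictions, data, sigma, reps=500, seed=0):
    """Repeatedly perturb the data with measurement-noise realizations and
    report the mean and standard deviation of model 1's weight."""
    rng = random.Random(seed)
    w1 = []
    for _ in range(reps):
        noisy = [d + rng.gauss(0.0, sigma) for d in data]
        w1.append(bma_weights(predictions, noisy, sigma)[0])
    mean = sum(w1) / reps
    sd = math.sqrt(sum((w - mean) ** 2 for w in w1) / reps)
    return mean, sd

predictions = [[0, 1, 2, 3, 4], [0.5, 1.5, 2.5, 3.5, 4.5]]  # model 1 is "true"
data = [0, 1, 2, 3, 4]
mean_w1, sd_w1 = weight_variability(predictions, data, sigma=0.5)
```

The reported standard deviation is the "weighting variance" idea in miniature: even when model 1 generated the data, its weight fluctuates appreciably across noise realizations.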

  10. Survey of Branch Support Methods Demonstrates Accuracy, Power, and Robustness of Fast Likelihood-based Approximation Schemes

    PubMed Central

    Anisimova, Maria; Gil, Manuel; Dufayard, Jean-François; Dessimoz, Christophe; Gascuel, Olivier

    2011-01-01

    Phylogenetic inference and evaluating support for inferred relationships are at the core of many studies testing evolutionary hypotheses. Despite the popularity of nonparametric bootstrap frequencies and Bayesian posterior probabilities, the interpretation of these measures of tree branch support remains a source of discussion. Furthermore, both methods are computationally expensive and become prohibitive for large data sets. Recent fast approximate likelihood-based measures of branch support (approximate likelihood ratio test [aLRT] and Shimodaira–Hasegawa [SH]-aLRT) provide a compelling alternative to these slower conventional methods, offering not only speed advantages but also excellent levels of accuracy and power. Here we propose an additional method: a Bayesian-like transformation of aLRT (aBayes). Considering both probabilistic and frequentist frameworks, we compare the performance of the three fast likelihood-based methods with the standard bootstrap (SBS), the Bayesian approach, and the recently introduced rapid bootstrap. Our simulations and real data analyses show that with moderate model violations, all tests are sufficiently accurate, but aLRT and aBayes offer the highest statistical power and are very fast. With severe model violations aLRT, aBayes and Bayesian posteriors can produce elevated false-positive rates. With data sets for which such violation can be detected, we recommend using SH-aLRT, the nonparametric version of aLRT based on a procedure similar to the Shimodaira–Hasegawa tree selection. In general, the SBS seems to be excessively conservative and is much slower than our approximate likelihood-based methods. PMID:21540409

  11. A Robust Supervised Variable Selection for Noisy High-Dimensional Data

    PubMed Central

    Kalina, Jan; Schlenker, Anna

    2015-01-01

    The Minimum Redundancy Maximum Relevance (MRMR) approach to supervised variable selection represents a successful methodology for dimensionality reduction, which is suitable for high-dimensional data observed in two or more different groups. Various available versions of the MRMR approach have been designed to search for variables with the largest relevance for a classification task while controlling for redundancy of the selected set of variables. However, the usual relevance and redundancy criteria have the disadvantages of being too sensitive to the presence of outlying measurements and/or being inefficient. We propose a novel approach called Minimum Regularized Redundancy Maximum Robust Relevance (MRRMRR), suitable for noisy high-dimensional data observed in two groups. It combines principles of regularization and robust statistics. In particular, redundancy is measured by a new regularized version of the coefficient of multiple correlation, and relevance is measured by a highly robust correlation coefficient based on least weighted squares regression with data-adaptive weights. We compare various dimensionality reduction methods on three real data sets. To investigate the influence of noise or outliers on the data, we also perform the computations for data artificially contaminated by severe noise of various forms. The experimental results confirm the robustness of the method with respect to outliers. PMID:26137474

  12. Accuracy and responses of genomic selection on key traits in apple breeding

    PubMed Central

    Muranty, Hélène; Troggio, Michela; Sadok, Inès Ben; Rifaï, Mehdi Al; Auwerkerken, Annemarie; Banchi, Elisa; Velasco, Riccardo; Stevanato, Piergiorgio; van de Weg, W Eric; Di Guardo, Mario; Kumar, Satish; Laurens, François; Bink, Marco C A M

    2015-01-01

    The application of genomic selection in fruit tree crops is expected to enhance breeding efficiency by increasing prediction accuracy, increasing selection intensity and decreasing generation interval. The objectives of this study were to assess the accuracy of prediction and selection response in commercial apple breeding programmes for key traits. The training population comprised 977 individuals derived from 20 pedigreed full-sib families. Historic phenotypic data were available on 10 traits related to productivity and fruit external appearance and genotypic data for 7829 SNPs obtained with an Illumina 20K SNP array. From these data, a genome-wide prediction model was built and subsequently used to calculate genomic breeding values of five application full-sib families. The application families had genotypes at 364 SNPs from a dedicated 512 SNP array, and these genotypic data were extended to the high-density level by imputation. These five families were phenotyped for 1 year and their phenotypes were compared to the predicted breeding values. Accuracy of genomic prediction across the 10 traits reached a maximum value of 0.5 and had a median value of 0.19. The accuracies were strongly affected by the phenotypic distribution and heritability of traits. In the largest family, significant selection response was observed for traits with high heritability and symmetric phenotypic distribution. Traits that showed non-significant response often had reduced and skewed phenotypic variation or low heritability. Among the five application families the accuracies were uncorrelated to the degree of relatedness to the training population. The results underline the potential of genomic prediction to accelerate breeding progress in outbred fruit tree crops that still need to overcome long generation intervals and extensive phenotyping costs. PMID:26744627

  13. Estimation of accuracies and expected genetic change from selection for selection indexes that use multiple-trait predictions of breeding values.

    PubMed

    Barwick, S A; Tier, B; Swan, A A; Henzell, A L

    2013-10-01

    Procedures are described for estimating selection index accuracies for individual animals and expected genetic change from selection for the general case where indexes of EBVs predict an aggregate breeding objective of traits that may or may not have been measured. Index accuracies for the breeding objective are shown to take an important general form, being able to be expressed as the product of the accuracy of the index function of true breeding values and the accuracy with which that function predicts the breeding objective. When the accuracies of the individual EBVs of the index are known, prediction error variances (PEVs) and covariances (PECs) for the EBVs within animal are able to be well approximated, and index accuracies and expected genetic change from selection estimated with high accuracy. The procedures are suited to routine use in estimating index accuracies in genetic evaluation, and for providing important information, without additional modelling, on the directions in which a population will move under selection.
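
    The product form stated above can be written compactly (notation ours, a plausible rendering of the statement): with H the aggregate breeding objective, I the index of EBVs, and I* the same index function evaluated at true breeding values,

```latex
% accuracy of the index for the objective factorizes as
r_{IH} \;=\; r_{II^{*}} \, r_{I^{*}H}
```

That is, the first factor measures how well the EBV index tracks its true-breeding-value counterpart, and the second how well that function predicts the objective.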

  14. The influence of feature selection methods on accuracy, stability and interpretability of molecular signatures.

    PubMed

    Haury, Anne-Claire; Gestraud, Pierre; Vert, Jean-Philippe

    2011-01-01

    Biomarker discovery from high-dimensional data is a crucial problem with important applications in biology and medicine. It is also extremely challenging from a statistical viewpoint, but surprisingly few studies have investigated the relative strengths and weaknesses of the plethora of existing feature selection methods. In this study we compare 32 feature selection methods on 4 public gene expression datasets for breast cancer prognosis, in terms of predictive performance, stability and functional interpretability of the signatures they produce. We observe that the feature selection method has a significant influence on the accuracy, stability and interpretability of signatures. Surprisingly, complex wrapper and embedded methods generally do not outperform simple univariate feature selection methods, and ensemble feature selection has generally no positive effect. Overall, a simple Student's t-test seems to provide the best results.

  15. Assessing the accuracy of selectivity as a basis for solvent screening in extractive distillation processes

    SciTech Connect

    Momoh, S. O.

    1991-01-01

    An important parameter for consideration in the screening of solvents for an extractive distillation process is selectivity at infinite dilution: the higher the selectivity, the better the solvent. This paper assesses the accuracy of using selectivity as a basis for solvent screening in extractive distillation processes. Three types of binary mixtures that are usually separated by an extractive distillation process are chosen for investigation. Having determined the optimum solvent feed rate to be two times the feed rate of the binary mixture, total annual costs of the extractive distillation process are computed for each of the chosen mixtures and for various solvents. The solvents are ranked on the basis of total annual cost (obtained from design and costing equations), and this ranking order is compared with that given by selectivity at infinite dilution as determined by the UNIFAC method. This matching of selectivity with total annual cost does not produce a very good correlation.
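
    The comparison made in this record, how well a selectivity-based ranking of solvents matches a cost-based ranking, is naturally quantified by a rank correlation. A Spearman coefficient on hypothetical rankings (the numbers are illustrative, not the paper's data):

```python
def spearman(rank_a, rank_b):
    """Spearman rank correlation from two tie-free rankings."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

selectivity_rank = [1, 2, 3, 4, 5, 6]   # solvents ordered by selectivity
cost_rank = [3, 6, 1, 5, 2, 4]          # same solvents ordered by annual cost
rho = spearman(selectivity_rank, cost_rank)  # near zero: rankings disagree
```

A coefficient near +1 would mean selectivity is a reliable screening proxy; a value near zero, as in this hypothetical case, mirrors the paper's conclusion.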

  16. Sexual reproduction selects for robustness and negative epistasis in artificial gene networks.

    PubMed

    Azevedo, Ricardo B R; Lohaus, Rolf; Srinivasan, Suraj; Dang, Kristen K; Burch, Christina L

    2006-03-01

    The mutational deterministic hypothesis for the origin and maintenance of sexual reproduction posits that sex enhances the ability of natural selection to purge deleterious mutations after recombination brings them together into single genomes. This explanation requires negative epistasis, a type of genetic interaction where mutations are more harmful in combination than expected from their separate effects. The conceptual appeal of the mutational deterministic hypothesis has been offset by our inability to identify the mechanistic and evolutionary bases of negative epistasis. Here we show that negative epistasis can evolve as a consequence of sexual reproduction itself. Using an artificial gene network model, we find that recombination between gene networks imposes selection for genetic robustness, and that negative epistasis evolves as a by-product of this selection. Our results suggest that sexual reproduction selects for conditions that favour its own maintenance, a case of evolution forging its own path.

  17. Robust Feature Selection from Microarray Data Based on Cooperative Game Theory and Qualitative Mutual Information.

    PubMed

    Mortazavi, Atiyeh; Moattar, Mohammad Hossein

    2016-01-01

    High dimensionality of microarray data sets may lead to low efficiency and overfitting. In this paper, a multiphase cooperative game theoretic feature selection approach is proposed for microarray data classification. In the first phase, due to high dimension of microarray data sets, the features are reduced using one of the two filter-based feature selection methods, namely, mutual information and Fisher ratio. In the second phase, Shapley index is used to evaluate the power of each feature. The main innovation of the proposed approach is to employ Qualitative Mutual Information (QMI) for this purpose. The idea of Qualitative Mutual Information causes the selected features to have more stability and this stability helps to deal with the problem of data imbalance and scarcity. In the third phase, a forward selection scheme is applied which uses a scoring function to weight each feature. The performance of the proposed method is compared with other popular feature selection algorithms such as Fisher ratio, minimum redundancy maximum relevance, and previous works on cooperative game based feature selection. The average classification accuracy on eleven microarray data sets shows that the proposed method improves both average accuracy and average stability compared to other approaches. PMID:27127506

  19. Hybrid formulation of the model-based non-rigid registration problem to improve accuracy and robustness.

    PubMed

    Clatz, Olivier; Delingette, Hervé; Talos, Ion-Florin; Golby, Alexandra J; Kikinis, Ron; Jolesz, Ferenc A; Ayache, Nicholas; Warfield, Simon K

    2005-01-01

    We present a new algorithm to register 3D pre-operative Magnetic Resonance (MR) images with intra-operative MR images of the brain. This algorithm relies on a robust estimation of the deformation from a sparse set of measured displacements. We propose a new framework to compute the displacement field iteratively, starting from an approximation formulation (minimizing the sum of a regularization term and a data error term) and converging toward an interpolation formulation (least-squares minimization of the data error term). The robustness of the algorithm is achieved through the introduction of an outlier-rejection step in this gradual registration process. We ensure the validity of the deformation by using a patient-specific biomechanical model of the brain, discretized with the finite element method. The algorithm has been tested on six cases of brain tumor resection, presenting a brain shift of up to 13 mm.

  20. Traditional and robust vector selection methods for use with similarity based models

    SciTech Connect

    Hines, J. W.; Garvey, D. R.

    2006-07-01

    Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' that constructs a local model 'on the fly' by comparing a query vector to historical training vectors. For large training sets, the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, various forms of training vector sampling may be employed with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional methods considered are min-max, vector ordering, combined min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant.
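
    Of the methods listed, min-max selection is the simplest to state: retain every training vector that holds the minimum or maximum observed value of at least one variable, so the selected prototypes span the data's range. A minimal sketch (our own rendering, not the authors' code):

```python
def min_max_select(X):
    """Indices of training vectors containing the min or max of any variable."""
    keep = set()
    for j in range(len(X[0])):
        column = [row[j] for row in X]
        keep.add(column.index(min(column)))
        keep.add(column.index(max(column)))
    return sorted(keep)

# Five 2-variable training vectors: only the range-defining ones survive.
X = [[0.0, 5.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [10.0, 0.0]]
prototypes = min_max_select(X)
```

In AAKR these prototypes would then serve as the memory against which query vectors are compared.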

  1. Improvement of olfactometric measurement accuracy and repeatability by optimization of panel selection procedures.

    PubMed

    Capelli, L; Sironi, S; Del Rosso, R; Céntola, P; Bonati, S

    2010-01-01

    The EN 13725:2003 standard, which governs the determination of odour concentration by dynamic olfactometry, fixes the limits for panel selection in terms of individual threshold towards a reference gas (n-butanol in nitrogen) and of the standard deviation of the responses. Nonetheless, laboratories have some degrees of freedom in developing their own procedures for panel selection and evaluation. Most Italian olfactometric laboratories use a similar procedure for panel selection, based on the repeated analysis of samples of n-butanol at a concentration of 60 ppm. The first part of this study demonstrates that this procedure may give rise to a sort of "smartening" of the assessors: they become able to guess the right answers needed to maintain their qualification as panel members, independently of their real olfactory perception. For this reason, the panel selection procedure was revised with the aim of making it less repetitive, thereby preventing panel members from guessing the answers that satisfy the selection criteria. The selection of new panel members and the screening of active ones according to the revised procedure proved it to be more selective than the "standard" one. Finally, the results of tests with n-butanol conducted after the introduction of the revised procedure for panel selection and regular verification showed an effective improvement of the laboratory's measurement performance in terms of accuracy and precision. PMID:20220249

  3. Robust selectivity for faces in the human amygdala in the absence of expressions.

    PubMed

    Mende-Siedlecki, Peter; Verosky, Sara C; Turk-Browne, Nicholas B; Todorov, Alexander

    2013-12-01

    There is a well-established posterior network of cortical regions that plays a central role in face processing and that has been investigated extensively. In contrast, although responsive to faces, the amygdala is not considered a core face-selective region, and its face selectivity has never been a topic of systematic research in human neuroimaging studies. Here, we conducted a large-scale group analysis of fMRI data from 215 participants. We replicated the posterior network observed in prior studies but found equally robust and reliable responses to faces in the amygdala. These responses were detectable in most individual participants, but they were also highly sensitive to the initial statistical threshold and habituated more rapidly than the responses in posterior face-selective regions. A multivariate analysis showed that the pattern of responses to faces across voxels in the amygdala had high reliability over time. Finally, functional connectivity analyses showed stronger coupling between the amygdala and posterior face-selective regions during the perception of faces than during the perception of control visual categories. These findings suggest that the amygdala should be considered a core face-selective region.

  4. Comparative accuracy of the Albedo, transmission and absorption for selected radiative transfer approximations

    NASA Technical Reports Server (NTRS)

    King, M. D.; HARSHVARDHAN

    1986-01-01

    Illustrations of both the relative and absolute accuracy of eight different radiative transfer approximations are given as a function of optical thickness, solar zenith angle and single-scattering albedo. Computational results for the plane albedo, total transmission and fractional absorption were obtained for plane-parallel atmospheres composed of cloud particles. These computations, obtained using the doubling method, are compared with comparable results from selected radiative transfer approximations. Comparisons were made between asymptotic theory for thick layers and the following widely used two-stream approximations: Coakley-Chylek's models 1 and 2, Meador-Weaver, Eddington, delta-Eddington, PIFM and delta-discrete ordinates.

  5. Predicted accuracy of and response to genomic selection for new traits in dairy cattle.

    PubMed

    Calus, M P L; de Haas, Y; Pszczola, M; Veerkamp, R F

    2013-02-01

    Genomic selection relaxes the requirement of traditional selection tools to have phenotypic measurements on close relatives of all selection candidates. This opens up possibilities to select for traits that are difficult or expensive to measure. The objectives of this paper were to predict accuracy of and response to genomic selection for a new trait, considering that only a cow reference population of moderate size was available for the new trait, and that selection simultaneously targeted an index and this new trait. Accuracy of and response to selection were deterministically evaluated for three different breeding goals. Single-trait selection for the new trait, based only on a limited cow reference population of up to 10 000 cows, showed that maximum genetic responses of 0.20 and 0.28 genetic standard deviation (s.d.) per year can be achieved for traits with a heritability of 0.05 and 0.30, respectively. Adding information from the index based on a reference population of 5000 bulls, and assuming a genetic correlation of 0.5, increased genetic response for both heritability levels by up to 0.14 genetic s.d. per year. The scenario with simultaneous selection for the new trait and the index yielded a substantially lower response for the new trait, especially when the genetic correlation with the index was negative. Despite the lower response for the index, whenever the new trait had considerable economic value, including the cow reference population considerably improved the genetic response for the new trait. For scenarios with a zero or negative genetic correlation with the index and equal economic value for the index and the new trait, a reference population of 2000 cows increased genetic response for the new trait by at least 0.10 and 0.20 genetic s.d. per year, for heritability levels of 0.05 and 0.30, respectively. We conclude that for new traits with a very small or positive genetic correlation with the index, and a high positive economic value
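    Deterministic predictions of this kind typically rest on the expected accuracy of genomic prediction as a function of reference-population size and heritability. A minimal sketch using the well-known Daetwyler-style formula r = sqrt(N·h²/(N·h² + Me)); the value Me = 1000 for the effective number of chromosome segments is an illustrative assumption, not a figure from the paper:

```python
import math

def genomic_accuracy(n_records: int, h2: float, m_e: float) -> float:
    """Expected accuracy of genomic prediction,
    r = sqrt(N*h2 / (N*h2 + Me)),
    where Me is the effective number of independent chromosome segments."""
    return math.sqrt(n_records * h2 / (n_records * h2 + m_e))

# Accuracy rises with reference-population size and heritability.
for h2 in (0.05, 0.30):
    accs = [round(genomic_accuracy(n, h2, m_e=1000), 3) for n in (2000, 5000, 10000)]
    print(f"h2={h2}: {accs}")
```

    Under these assumptions a 10 000-record reference population gives r ≈ 0.58 at h² = 0.05 and r ≈ 0.87 at h² = 0.30, mirroring the qualitative pattern the abstract reports.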

  6. Feature selection for linear SVMs under uncertain data: robust optimization based on difference of convex functions algorithms.

    PubMed

    Le Thi, Hoai An; Vo, Xuan Thanh; Pham Dinh, Tao

    2014-11-01

    In this paper, we consider the problem of feature selection for linear SVMs on uncertain data, a setting that is inherently prevalent in almost all datasets. Using principles of Robust Optimization, we propose robust schemes to handle data under ellipsoidal and box models of uncertainty. The difficulty of treating the ℓ0-norm in the feature selection problem is overcome by using appropriate approximations together with Difference of Convex functions (DC) programming and DC Algorithms (DCA). The computational results show that the proposed robust optimization approaches are superior to a traditional approach in immunizing against perturbations of the data.
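    The key trick in DCA-based feature selection is writing the nonconvex ℓ0 surrogate as a difference of two convex functions. A minimal sketch with the standard capped-ℓ1 surrogate min(|w|, a); the function names are mine and this is not the paper's code:

```python
import numpy as np

def capped_l1(w, a=1.0):
    """Capped-l1 penalty min(|w|, a), a common surrogate for the l0-norm."""
    return np.minimum(np.abs(w), a)

def dc_parts(w, a=1.0):
    """DC decomposition capped_l1 = g - h with both parts convex:
    g(w) = |w|  and  h(w) = max(|w| - a, 0)."""
    return np.abs(w), np.maximum(np.abs(w) - a, 0.0)

w = np.array([-2.0, -0.5, 0.0, 0.3, 1.7])
g, h = dc_parts(w)
print(np.allclose(capped_l1(w), g - h))  # the decomposition is exact
```

    Each DCA iteration linearizes the concave part -h at the current iterate, so the subproblem becomes a reweighted ℓ1-regularized SVM that any convex solver can handle.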

  7. The Accuracy, Robustness and Relationships Among Correlational Models for Social Analysis: A Monte Carlo Simulation. An Occasional Paper.

    ERIC Educational Resources Information Center

    Rutherford, Brent M.

    A large number of correlational models for cross-tabular analysis are available for utilization by social scientists for data description. Criteria for selection (such as levels of measurement and proportional reduction in error) do not lead to conclusive model choice. Moreover, such criteria may be irrelevant. More pertinent criteria are…

  8. Individual variation in exploratory behaviour improves speed and accuracy of collective nest selection by Argentine ants

    PubMed Central

    Hui, Ashley; Pinter-Wollman, Noa

    2014-01-01

    Collective behaviours are influenced by the behavioural composition of the group. For example, a collective behaviour may emerge from the average behaviour of the group's constituents, or be driven by a few key individuals that catalyse the behaviour of others in the group. When ant colonies collectively relocate to a new nest site, there is an inherent trade-off between the speed and accuracy of their decision of where to move due to the time it takes to gather information. Thus, variation among workers in exploratory behaviour, which allows gathering information about potential new nest sites, may impact the ability of a colony to move quickly into a suitable new nest. The invasive Argentine ant, Linepithema humile, expands its range locally through the dispersal and establishment of propagules: groups of ants and queens. We examine whether the success of these groups in rapidly finding a suitable nest site is affected by their behavioural composition. We compared nest choice speed and accuracy among groups of all-exploratory, all-nonexploratory and half-exploratory–half-nonexploratory individuals. We show that exploratory individuals improve both the speed and accuracy of collective nest choice, and that exploratory individuals have additive, not synergistic, effects on nest site selection. By integrating an examination of behaviour into the study of invasive species we shed light on the mechanisms that impact the progression of invasion. PMID:25018558

  9. Theory-assisted development of a robust and Z-selective olefin metathesis catalyst.

    PubMed

    Occhipinti, Giovanni; Koudriavtsev, Vitali; Törnroos, Karl W; Jensen, Vidar R

    2014-08-01

    DFT calculations have predicted a new, highly Z-selective ruthenium-based olefin metathesis catalyst that is considerably more robust than the recently reported (SIMes)(Cl)(RS)RuCH(o-OiPrC6H4) (3a, SIMes = 1,3-dimesityl-4,5-dihydroimidazol-2-ylidene, R = 2,4,6-triphenylbenzene) [J. Am. Chem. Soc., 2013, 135, 3331]. Replacing the chloride of 3a by an isocyanate ligand to give 5a was predicted to increase the stability of the complex considerably, at the same time moderately improving the Z-selectivity. Compound 5a is easily prepared in a two-step synthesis starting from the Hoveyda-Grubbs second-generation catalyst 3. In agreement with the calculations, the isocyanate-substituted 5a appears to be somewhat more Z-selective than the chloride analogue 3a. More importantly, 5a can be used in air, with unpurified and non-degassed substrates and solvents, and in the presence of acids. These are traits that are unprecedented among highly Z-selective olefin metathesis catalysts and also very promising with respect to applications of the new catalyst. PMID:24788021

  11. [Analysis on the accuracy of simple selection method of Fengshi (GB 31)].

    PubMed

    Li, Zhixing; Zhang, Haihua; Li, Suhe

    2015-12-01

    To explore the accuracy of the simple selection method of Fengshi (GB 31). Through the study of ancient and modern data, the analysis and integration of acupuncture texts, the comparison of the locations of Fengshi (GB 31) given by doctors of successive dynasties, and the integration of modern anatomy, the modern simple selection method of Fengshi (GB 31) is confirmed to be the same as the traditional one. It is believed that the simple selection method is in accord with the human-oriented thought of TCM. Treatment by acupoints should be based on the emerging nature and the individual differences of patients. It is also proposed that Fengshi (GB 31) should be located by combining the simple method with body surface anatomical landmarks.

  12. High-accuracy and robust face recognition system based on optical parallel correlator using a temporal image sequence

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko

    2005-09-01

    Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals via street cameras connected to the Internet and maintaining immigration control. Many technical challenges remain: for instance, the number of images that can be stored is limited under current systems, and the recognition rate must be improved to account for photos taken at different angles under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system using a 1000 frame/s optical parallel correlator designed and assembled by us. The operational speed for the 1:N identification experiment (4000 face images), where N refers to the number of images in the database, is less than 1.5 seconds, including pre/post processing. In trial 1:N identification experiments using FARCO, we obtained low error rates of 2.6% False Reject Rate and 1.3% False Accept Rate. By exploiting the high-speed data-processing capability of this system, much greater robustness can be achieved across recognition conditions when large-category data are registered for a single person. We propose a face recognition algorithm for the FARCO that employs a temporal image sequence of moving images. Applied to natural postures, this algorithm achieved a recognition rate twice as high as that of our conventional system. The system has high potential for future use in a variety of applications, such as searching for criminal suspects using street and airport video cameras, registering newborns at hospitals, or handling very large image databases.
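    The optical correlator computes, in effect, a cross-correlation between the probe image and each stored face at very high frame rates. The matching principle can be sketched digitally with FFT-based circular correlation on toy data; this is an illustration of the idea, not the FARCO implementation:

```python
import numpy as np

def correlation_peak(img, ref):
    """Peak of the circular cross-correlation of two equally sized images,
    computed via FFT -- the digital analogue of an optical correlator."""
    a = (img - img.mean()) / (img.std() + 1e-12)
    b = (ref - ref.mean()) / (ref.std() + 1e-12)
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    return corr.max() / a.size          # ~1 for a match, near 0 otherwise

rng = np.random.default_rng(0)
database = [rng.random((32, 32)) for _ in range(4)]   # toy 1:N database
probe = database[2] + 0.05 * rng.random((32, 32))     # noisy copy of entry 2
scores = [correlation_peak(probe, ref) for ref in database]
print(scores.index(max(scores)))                      # index of the best match
```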

  13. Optimized multiple-quantum filter for robust selective excitation of metabolite signals

    NASA Astrophysics Data System (ADS)

    Holbach, Mirjam; Lambert, Jörg; Suter, Dieter

    2014-06-01

    The selective excitation of metabolite signals in vivo requires the use of specially adapted pulse techniques, in particular when the signals are weak and the resonances overlap with those of unwanted molecules. Several pulse sequences have been proposed for this spectral editing task. However, their performance is strongly degraded by unavoidable experimental imperfections. Here, we show that optimal control theory can be used to generate pulses and sequences that perform almost ideally over a range of rf field strengths and frequency offsets that can be chosen according to the specifics of the spectrometer or scanner being used. We demonstrate this scheme by applying it to lactate editing. In addition to the robust excitation, we also have designed the pulses to minimize the signal of unwanted molecular species.

  14. Group-Item and Directed Scanning: Examining Preschoolers' Accuracy and Efficiency in Two Augmentative Communication Symbol Selection Methods

    ERIC Educational Resources Information Center

    White, Aubrey Randall; Carney, Edward; Reichle, Joe

    2010-01-01

    Purpose: The current investigation compared directed scanning and group-item scanning among typically developing 4-year-old children. Of specific interest were their accuracy, selection speed, and efficiency of cursor movement in selecting colored line drawn symbols representing object vocabulary. Method: Twelve 4-year-olds made selections in both…

  15. Robust Cell Detection of Histopathological Brain Tumor Images Using Sparse Reconstruction and Adaptive Dictionary Selection.

    PubMed

    Su, Hai; Xing, Fuyong; Yang, Lin

    2016-06-01

    Successful diagnostic and prognostic stratification, treatment outcome prediction, and therapy planning depend on reproducible and accurate pathology analysis. Computer-aided diagnosis (CAD) is a useful tool to help doctors make better decisions in cancer diagnosis and treatment. Accurate cell detection is often an essential prerequisite for subsequent cellular analysis. The major challenge of robust brain tumor nuclei/cell detection is to handle significant variations in cell appearance and to split touching cells. In this paper, we present an automatic cell detection framework using sparse reconstruction and adaptive dictionary learning. The main contributions of our method are: 1) a sparse reconstruction based approach to split touching cells; and 2) an adaptive dictionary learning method used to handle cell appearance variations. The proposed method has been extensively tested on a data set with more than 2000 cells extracted from 32 whole-slide scanned images. The automatic cell detection results are compared with the manually annotated ground truth and other state-of-the-art cell detection algorithms. The proposed method achieves the best cell detection accuracy with an F1 score of 0.96.
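    Sparse reconstruction expresses each observed signal as a combination of a few dictionary atoms. A minimal greedy sketch using Orthogonal Matching Pursuit on synthetic data; this is illustrative only (the paper's dictionary is learned adaptively from cell images, and the names here are mine):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick the atom (column of D)
    most correlated with the residual, refit by least squares, repeat."""
    residual, support, coef = y.copy(), [], None
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef

rng = np.random.default_rng(1)
D = rng.standard_normal((40, 80))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
x_true = np.zeros(80)
x_true[[5, 17, 60]] = [3.0, -2.5, 4.0]         # 3-sparse ground truth
y = D @ x_true
support, coef = omp(D, y, k=3)
print(sorted(support))
```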

  16. A robust rerank approach for feature selection and its application to pooling-based GWA studies.

    PubMed

    Liu, Jia-Rou; Kuo, Po-Hsiu; Hung, Hung

    2013-01-01

    Large-p-small-n datasets are commonly encountered in modern biomedical studies. To detect the difference between two groups, conventional methods would fail to apply due to the instability in estimating variances in the t-test and a high proportion of tied values in AUC (area under the receiver operating characteristic curve) estimates. The significance analysis of microarrays (SAM) may also not be satisfactory, since its performance is sensitive to the tuning parameter, whose selection is not straightforward. In this work, we propose a robust rerank approach to overcome the above-mentioned difficulties. In particular, we obtain a rank-based statistic for each feature based on the concept of "rank-over-variable." Techniques of "random subset" and "rerank" are then iteratively applied to rank features, and the leading features are selected for further studies. The proposed rerank approach is especially applicable for large-p-small-n datasets. Moreover, it is insensitive to the selection of tuning parameters, which is an appealing property for practical implementation. Simulation studies and real data analysis of pooling-based genome-wide association (GWA) studies demonstrate the usefulness of our method. PMID:23653667
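    The "random subset + rerank" idea can be sketched as follows: score every feature with a rank-based two-group statistic on random subsamples, then average the resulting feature ranks. This is a toy illustration of the concept on synthetic data, not the published algorithm, and all names are mine:

```python
import numpy as np

def rank_statistic(x, labels):
    """Rank-sum style statistic for one feature: absolute difference of
    mean ranks between the two groups, computed on ranks, not raw values."""
    ranks = np.argsort(np.argsort(x)) + 1          # 1-based ranks
    return abs(ranks[labels == 1].mean() - ranks[labels == 0].mean())

def rerank_features(X, labels, n_subsets=200, subset_frac=0.5, seed=0):
    """Score features on random sample subsets and average their ranks
    across subsets; a low average rank marks a leading feature."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    avg_rank = np.zeros(p)
    for _ in range(n_subsets):
        idx = rng.choice(n, size=int(n * subset_frac), replace=False)
        stats = np.array([rank_statistic(X[idx, j], labels[idx]) for j in range(p)])
        avg_rank += np.argsort(np.argsort(-stats))  # rank 0 = strongest
    return avg_rank / n_subsets

rng = np.random.default_rng(42)
labels = np.repeat([0, 1], 15)
X = rng.standard_normal((30, 20))
X[labels == 1, 0] += 2.0                            # only feature 0 differs
print(np.argmin(rerank_features(X, labels)))        # leading feature
```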

  17. Evaluation of the geomorphometric results and residual values of a robust plane fitting method applied to different DTMs of various scales and accuracy

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor

    2013-04-01

    Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides a considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms with the fitted planes as planar facets. In our study we aim to analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) the SRTM (Shuttle Radar Topography Mission) DTM database with 5 m accuracy; (4) DTM data from the HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different kinds of statistical tests (for example Chi-square and Kolmogorov-Smirnov tests) applied to the residual values, and evaluation of the dependence of the residual values on the input parameters. These tests have been repeated on the real data, supplemented with the categorization of the segmentation result depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted, based on the residual value distribution also being normal, but in case of the test on the real data the residual value distribution is
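    The core operation, fitting a plane to a 3D point cloud and inspecting the residuals, can be sketched with an SVD-based least-squares fit. This is a simplified stand-in for the robust segmentation method, on simulated data with 0.1 m noise as in case (2):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a 3D point cloud via SVD: returns the
    centroid, the unit normal, and signed residuals (point-plane distances)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                      # direction of least variance
    residuals = (points - centroid) @ normal
    return centroid, normal, residuals

rng = np.random.default_rng(3)
xy = rng.uniform(0, 100, size=(500, 2))
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 5 + rng.normal(0, 0.1, 500)
pts = np.column_stack([xy, z])
_, normal, res = fit_plane(pts)
print(res.std())   # close to the simulated 0.1 m noise level
```

    Normality tests (e.g. Chi-square or Kolmogorov-Smirnov) can then be applied to these residuals, as done in the study.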

  18. Estimation of genetic parameters and breeding values across challenged environments to select for robust pigs.

    PubMed

    Herrero-Medrano, J M; Mathur, P K; ten Napel, J; Rashidi, H; Alexandri, P; Knol, E F; Mulder, H A

    2015-04-01

    Robustness is an important issue in the pig production industry. Since pigs from international breeding organizations have to withstand a variety of environmental challenges, selection of pigs with the inherent ability to sustain their productivity in diverse environments may be an economically feasible approach in the livestock industry. The objective of this study was to estimate genetic parameters and breeding values across different levels of environmental challenge load. The challenge load (CL) was estimated as the reduction in reproductive performance during different weeks of a year using 925,711 farrowing records from farms distributed worldwide. A wide range of levels of challenge, from favorable to unfavorable environments, was observed among farms with high CL values being associated with confirmed situations of unfavorable environment. Genetic parameters and breeding values were estimated in high- and low-challenge environments using a bivariate analysis, as well as across increasing levels of challenge with a random regression model using Legendre polynomials. Although heritability estimates of number of pigs born alive were slightly higher in environments with extreme CL than in those with intermediate levels of CL, the heritabilities of number of piglet losses increased progressively as CL increased. Genetic correlations among environments with different levels of CL suggest that selection in environments with extremes of low or high CL would result in low response to selection. Therefore, selection programs of breeding organizations that are commonly conducted under favorable environments could have low response to selection in commercial farms that have unfavorable environmental conditions. Sows that had experienced high levels of challenge at least once during their productive life were ranked according to their EBV. The selection of pigs using EBV ignoring environmental challenges or on the basis of records from only favorable environments
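    Random regression models of this kind expand the environmental covariate (here, challenge load) in Legendre polynomials. A minimal sketch of building such covariates; the scaling choice and polynomial order are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(challenge_load, order=2):
    """Legendre-polynomial covariates for a random-regression reaction-norm
    model: scale the challenge load to [-1, 1] and evaluate P0..P_order."""
    cl = np.asarray(challenge_load, dtype=float)
    x = 2 * (cl - cl.min()) / (cl.max() - cl.min()) - 1    # scale to [-1, 1]
    return np.column_stack(
        [legendre.legval(x, [0] * k + [1]) for k in range(order + 1)]
    )

Z = legendre_covariates([0.0, 2.5, 5.0], order=2)
print(Z)
# columns: P0 = 1, P1 = x, P2 = (3x^2 - 1)/2, evaluated at x = -1, 0, 1
```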

  19. Effects of machining accuracy on frequency response properties of thick-screen frequency selective surface

    NASA Astrophysics Data System (ADS)

    Fang, Chunyi; Gao, Jinsong; Xin, Chen

    2012-10-01

    Electromagnetic theory shows that a thick-screen frequency selective surface (FSS) has many advantages in its frequency response characteristics. In addition, it can be used to make a stealth radome. Therefore, we investigate in detail how machining accuracy affects the frequency response properties of the FSS in the gigahertz range. Specifically, by applying the least squares method to machining data, the effects of different machining precisions in the samples can be calculated, yielding frequency response curves that were verified by near-field testing in a microwave dark room. The results show that decreasing roughness and flatness variation leads to an increase in bandwidth, and that an increase in spacing error causes the center frequency to drift lower. Finally, an increase in aperture error leads to an increase in bandwidth. The conclusion is that machining accuracy should be controlled, and that a spatial error of less than 0.05 mm is required to avoid unwanted center frequency drift and a decrease in transmittance.

  20. Effect of using different cover image quality to obtain robust selective embedding in steganography

    NASA Astrophysics Data System (ADS)

    Abdullah, Karwan Asaad; Al-Jawad, Naseer; Abdulla, Alan Anwer

    2014-05-01

    One of the common types of steganography conceals an image as a secret message in another image, normally called the cover image; the resulting image is called the stego image. The aim of this paper is to investigate the effect of using cover images of different quality, and to analyse the use of different bit-planes in terms of robustness against well-known active attacks such as gamma correction, statistical filters, and linear spatial filters. The secret messages are embedded in a higher bit-plane, i.e. other than the Least Significant Bit (LSB), in order to resist active attacks. The embedding process is performed in three major steps: first, the embedding algorithm selectively identifies useful areas (blocks) for embedding based on their lighting conditions; second, it nominates the most useful blocks for embedding based on their entropy and average; third, it selects the right bit-plane for embedding. This kind of block selection scatters the secret message(s) randomly around the cover image. Different tests were performed for selecting a proper block size, which is related to the nature of the cover image used. Our proposed method suggests a suitable embedding bit-plane as well as the right blocks for embedding. Experimental results demonstrate that the quality of the cover image affects the stego image's resistance to different active attacks. Although the secret messages are embedded in a higher bit-plane, they cannot be recognised visually within the stego image.
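    Two of the embedding steps can be sketched in a few lines: scoring blocks by entropy (step 2) and writing a message bit into a higher bit-plane (step 3). This toy sketch is mine, not the authors' code, and it skips the lighting-based area selection of step 1:

```python
import numpy as np

def block_entropy(block):
    """Shannon entropy of an 8-bit image block; high-entropy (busy)
    blocks hide changes better and are preferred for embedding."""
    hist = np.bincount(block.ravel(), minlength=256)
    p = hist[hist > 0] / block.size
    return float(-(p * np.log2(p)).sum())

def embed_bit(block, bit, plane=3):
    """Write one message bit into every pixel of the block at the given
    bit-plane (plane 0 = LSB); higher planes better resist filtering
    attacks at the cost of visibility."""
    cleared = block & np.uint8(0xFF ^ (1 << plane))
    return cleared | np.uint8(1 << plane) if bit else cleared

rng = np.random.default_rng(7)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
stego = embed_bit(cover, bit=1, plane=3)
print(block_entropy(cover), np.all((stego & 8) == 8))
```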

  1. Robust check loss-based variable selection of high-dimensional single-index varying-coefficient model

    NASA Astrophysics Data System (ADS)

    Song, Yunquan; Lin, Lu; Jian, Ling

    2016-07-01

    The single-index varying-coefficient model is an important mathematical modeling tool for nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric components and parametric components. Under defined regularity conditions, and with appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, due to the robustness of the check loss function to outliers in finite samples, our proposed variable selection method is more robust than those based on the least squares criterion. Finally, the method is illustrated with numerical simulations.
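    The robustness argument rests on the shape of the check (quantile) loss, which grows linearly rather than quadratically in the residual. A quick numerical illustration using the standard definition; this is not code from the paper:

```python
import numpy as np

def check_loss(residual, tau=0.5):
    """Quantile ('check') loss rho_tau(r) = r * (tau - 1{r < 0}); at
    tau = 0.5 it is half the absolute loss."""
    r = np.asarray(residual, dtype=float)
    return r * (tau - (r < 0))

r = np.array([-2.0, -0.1, 0.0, 0.1, 50.0])   # 50.0 is an outlier
print(check_loss(r))     # grows linearly: the outlier contributes 25
print(r ** 2)            # grows quadratically: the outlier contributes 2500
```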

  2. A robust binary supramolecular organic framework (SOF) with high CO2 adsorption and selectivity.

    PubMed

    Lü, Jian; Perez-Krap, Cristina; Suyetin, Mikhail; Alsmail, Nada H; Yan, Yong; Yang, Sihai; Lewis, William; Bichoutskaia, Elena; Tang, Chiu C; Blake, Alexander J; Cao, Rong; Schröder, Martin

    2014-09-17

    A robust binary hydrogen-bonded supramolecular organic framework (SOF-7) has been synthesized by solvothermal reaction of 1,4-bis-(4-(3,5-dicyano-2,6-dipyridyl)dihydropyridyl)benzene (1) and 5,5'-bis-(azanediyl)-oxalyl-diisophthalic acid (2). Single crystal X-ray diffraction analysis shows that SOF-7 comprises 2 and 1,4-bis-(4-(3,5-dicyano-2,6-dipyridyl)pyridyl)benzene (3); the latter formed in situ from the oxidative dehydrogenation of 1. SOF-7 shows a three-dimensional four-fold interpenetrated structure with complementary O-H···N hydrogen bonds to form channels that are decorated with cyano and amide groups. SOF-7 exhibits excellent thermal stability and solvent and moisture durability as well as permanent porosity. The activated desolvated material SOF-7a shows high CO2 adsorption capacity and selectivity compared with other porous organic materials assembled solely through hydrogen bonding. PMID:25184689

  3. Increased prediction accuracy in wheat breeding trials using a marker × environment interaction genomic selection model.

    PubMed

    Lopez-Cruz, Marco; Crossa, Jose; Bonnett, David; Dreisigacker, Susanne; Poland, Jesse; Jannink, Jean-Luc; Singh, Ravi P; Autrique, Enrique; de los Campos, Gustavo

    2015-04-01

    Genomic selection (GS) models use genome-wide genetic information to predict genetic values of candidates of selection. Originally, these models were developed without considering genotype × environment interaction (G×E). Several authors have proposed extensions of the single-environment GS model that accommodate G×E using either covariance functions or environmental covariates. In this study, we model G×E using a marker × environment interaction (M×E) GS model; the approach is conceptually simple and can be implemented with existing GS software. We discuss how the model can be implemented by using an explicit regression of phenotypes on markers or using covariance structures (a genomic best linear unbiased prediction-type model). We used the M×E model to analyze three CIMMYT wheat data sets (W1, W2, and W3), where more than 1000 lines were genotyped using genotyping-by-sequencing and evaluated at CIMMYT's research station in Ciudad Obregon, Mexico, under simulated environmental conditions that covered different irrigation levels, sowing dates and planting systems. We compared the M×E model with a stratified (i.e., within-environment) analysis and with a standard (across-environment) GS model that assumes that effects are constant across environments (i.e., ignoring G×E). The prediction accuracy of the M×E model was substantially greater than that of an across-environment analysis that ignores G×E. Depending on the prediction problem, the M×E model had either similar or greater levels of prediction accuracy than the stratified analyses. The M×E model decomposes marker effects and genomic values into components that are stable across environments (main effects) and others that are environment-specific (interactions). Therefore, in principle, the interaction model could shed light on which variants have effects that are stable across environments and which ones are responsible for G×E. The data set and the scripts required to reproduce the analysis are
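    The M×E decomposition can be implemented, as the abstract notes, as an explicit regression of phenotypes on markers: one marker block shared across environments (main effects) plus one zeroed-out copy per environment (interactions). A toy sketch of building that design matrix; the names and dimensions are mine:

```python
import numpy as np

def mxe_design(X, env):
    """Marker-by-environment design: a main-effect block shared by all
    environments plus one environment-specific copy of the markers
    (zeroed outside that environment), as in the M x E decomposition."""
    envs = np.unique(env)
    blocks = [X] + [(env == e)[:, None] * X for e in envs]
    return np.hstack(blocks)

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(6, 4)).astype(float)   # toy marker matrix
env = np.array([0, 0, 0, 1, 1, 1])                  # two environments
Z = mxe_design(X, env)
print(Z.shape)   # main effects plus one block per environment
```

    Fitting a ridge regression (or a GBLUP-type model) on Z then yields main marker effects from the first block and environment-specific deviations from the others.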

  4. Increased Prediction Accuracy in Wheat Breeding Trials Using a Marker × Environment Interaction Genomic Selection Model

    PubMed Central

    Lopez-Cruz, Marco; Crossa, Jose; Bonnett, David; Dreisigacker, Susanne; Poland, Jesse; Jannink, Jean-Luc; Singh, Ravi P.; Autrique, Enrique; de los Campos, Gustavo

    2015-01-01

    Genomic selection (GS) models use genome-wide genetic information to predict genetic values of candidates of selection. Originally, these models were developed without considering genotype × environment interaction (G×E). Several authors have proposed extensions of the single-environment GS model that accommodate G×E using either covariance functions or environmental covariates. In this study, we model G×E using a marker × environment interaction (M×E) GS model; the approach is conceptually simple and can be implemented with existing GS software. We discuss how the model can be implemented by using an explicit regression of phenotypes on markers or using covariance structures (a genomic best linear unbiased prediction-type model). We used the M×E model to analyze three CIMMYT wheat data sets (W1, W2, and W3), where more than 1000 lines were genotyped using genotyping-by-sequencing and evaluated at CIMMYT’s research station in Ciudad Obregon, Mexico, under simulated environmental conditions that covered different irrigation levels, sowing dates and planting systems. We compared the M×E model with a stratified (i.e., within-environment) analysis and with a standard (across-environment) GS model that assumes that effects are constant across environments (i.e., ignoring G×E). The prediction accuracy of the M×E model was substantially greater than that of an across-environment analysis that ignores G×E. Depending on the prediction problem, the M×E model had either similar or greater levels of prediction accuracy than the stratified analyses. The M×E model decomposes marker effects and genomic values into components that are stable across environments (main effects) and others that are environment-specific (interactions). Therefore, in principle, the interaction model could shed light on which variants have effects that are stable across environments and which ones are responsible for G×E. The data set and the scripts required to reproduce the analysis

  5. Robust Selection of Cancer Survival Signatures from High-Throughput Genomic Data Using Two-Fold Subsampling

    PubMed Central

    Lee, Sangkyun; Rahnenführer, Jörg; Lang, Michel; De Preter, Katleen; Mestdagh, Pieter; Koster, Jan; Versteeg, Rogier; Stallings, Raymond L.; Varesio, Luigi; Asgharzadeh, Shahab; Schulte, Johannes H.; Fielitz, Kathrin; Schwermer, Melanie; Morik, Katharina; Schramm, Alexander

    2014-01-01

    Identifying relevant signatures for clinical patient outcome is a fundamental task in high-throughput studies. Signatures, composed of features such as mRNAs, miRNAs, SNPs or other molecular variables, are often non-overlapping, even though they have been identified from similar experiments considering samples with the same type of disease. The lack of a consensus is mostly due to the fact that sample sizes are far smaller than the numbers of candidate features to be considered, and therefore signature selection suffers from large variation. We propose a robust signature selection method that enhances the selection stability of penalized regression algorithms for predicting survival risk. Our method is based on an aggregation of multiple, possibly unstable, signatures obtained with the preconditioned lasso algorithm applied to random (internal) subsamples of a given cohort data, where the aggregated signature is shrunken by a simple thresholding strategy. The resulting method, RS-PL, is conceptually simple and easy to apply, relying on parameters automatically tuned by cross validation. Robust signature selection using RS-PL operates within an (external) subsampling framework to estimate the selection probabilities of features in multiple trials of RS-PL. These probabilities are used for identifying reliable features to be included in a signature. Our method was evaluated on microarray data sets from neuroblastoma, lung adenocarcinoma, and breast cancer patients, extracting robust and relevant signatures for predicting survival risk. Signatures obtained by our method achieved high prediction performance and robustness, consistently over the three data sets. Genes with high selection probability in our robust signatures have been reported as cancer-relevant. 
The ordering of predictor coefficients associated with signatures was well-preserved across multiple trials of RS-PL, demonstrating the capability of our method for identifying a transferable consensus signature
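    The external-subsampling idea behind RS-PL can be illustrated in a few lines. As a hedge, a simple marginal-correlation screen stands in for the preconditioned lasso, and the data sizes, number of trials, and the 0.9 selection-probability threshold are all invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 120, 25
    true = [0, 1, 2]  # indices of the truly predictive features (by construction)

    X = rng.normal(size=(n, p))
    y = X[:, true] @ np.array([2.0, -2.0, 1.5]) + rng.normal(size=n)

    def select_features(Xs, ys, k=5):
        # Stand-in selector: keep the k features with the largest absolute
        # marginal correlation (RS-PL uses the preconditioned lasso here).
        corr = np.array([abs(np.corrcoef(Xs[:, j], ys)[0, 1]) for j in range(Xs.shape[1])])
        return np.argsort(corr)[-k:]

    # External subsampling: estimate per-feature selection probabilities
    # over many random half-samples of the cohort.
    trials = 100
    counts = np.zeros(p)
    for _ in range(trials):
        idx = rng.choice(n, size=n // 2, replace=False)
        counts[select_features(X[idx], y[idx])] += 1
    sel_prob = counts / trials

    # Robust signature: features whose selection probability clears a threshold.
    signature = np.where(sel_prob >= 0.9)[0]
    ```

    Features that are only sporadically selected fall below the threshold, so the aggregated signature is more stable than any single subsample's selection.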

  6. Accuracy of genomic selection for age at puberty in a multi-breed population of tropically adapted beef cattle.

    PubMed

    Farah, M M; Swan, A A; Fortes, M R S; Fonseca, R; Moore, S S; Kelly, M J

    2016-02-01

    Genomic selection is becoming a standard tool in livestock breeding programs, particularly for traits that are hard to measure. Accuracy of genomic selection can be improved by increasing the quantity and quality of data and potentially by improving analytical methods. Adding genotypes and phenotypes from additional breeds or crosses often improves the accuracy of genomic predictions but requires specific methodology. A model was developed to incorporate breed composition estimated from genotypes into genomic selection models. This method was applied to age at puberty data in female beef cattle (as estimated from age at first observation of a corpus luteum) from a mix of Brahman and Tropical Composite beef cattle. In this dataset, the new model incorporating breed composition did not increase the accuracy of genomic selection. However, the breeding values exhibited slightly less bias (as assessed by deviation of regression of phenotype on genomic breeding values from the expected value of 1). Adding additional Brahman animals to the Tropical Composite analysis increased the accuracy of genomic predictions and did not affect the accuracy of the Brahman predictions. PMID:26490440
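    The bias measure mentioned above (deviation of the regression of phenotype on genomic breeding values from the expected value of 1) can be sketched directly; the simulated values below are illustrative only:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    gebv = rng.normal(0.0, 1.0, 500)                 # genomic breeding values
    pheno = 1.0 * gebv + rng.normal(0.0, 2.0, 500)   # unbiased by construction

    # Regression slope of phenotype on GEBV; deviation from 1 indicates bias
    # (slope < 1: predictions over-dispersed; slope > 1: under-dispersed).
    slope = np.cov(pheno, gebv)[0, 1] / np.var(gebv, ddof=1)
    bias = abs(slope - 1.0)
    ```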

  7. Robust fetal QRS detection from noninvasive abdominal electrocardiogram based on channel selection and simultaneous multichannel processing.

    PubMed

    Ghaffari, Ali; Mollakazemi, Mohammad Javad; Atyabi, Seyyed Abbas; Niknazar, Mohammad

    2015-12-01

    The purpose of this study is to provide a new method for detecting fetal QRS complexes from the non-invasive fetal electrocardiogram (fECG) signal. Unlike most current fECG processing methods, which are based on separation of the fECG from the maternal ECG (mECG), in this study the fetal heart rate (FHR) can be extracted with high accuracy without separating the fECG from the mECG. Furthermore, in this new approach thoracic channels are not necessary. These two aspects reduce the required computational operations. Consequently, the proposed approach can be efficiently applied to different real-time healthcare and medical devices. In this work, a new method is presented for selecting the best channel, i.e., the one carrying the strongest fECG. Each channel is scored on two criteria: noise distribution and good fetal heartbeat visibility. Another important aspect of this study is the simultaneous and combinatorial use of the available fECG channels according to the priority given by their scores. A combination of geometric features and wavelet-based techniques was adopted to extract the FHR. Based on fetal geometric features, fECG signals were divided into three categories, and different strategies were employed to analyze each category. The method was validated using three datasets: the Noninvasive Fetal ECG Database, DaISy, and the PhysioNet/Computing in Cardiology Challenge 2013. Finally, the obtained results were compared with other studies. The adopted strategies, such as multi-resolution analysis, not separating the fECG and mECG, intelligent channel scoring, and simultaneous use of channels, are the factors behind the promising performance of the method. PMID:26462679

  8. The effects of relatedness and GxE interaction on prediction accuracies in genomic selection: a study in cassava

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Prior to implementation of genomic selection, an evaluation of the potential accuracy of prediction can be obtained by cross validation. In this procedure, a population with both phenotypes and genotypes is split into training and validation sets. The prediction model is fitted using the training se...
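    The cross-validation procedure described in this record can be sketched as a single training/validation split, with prediction accuracy measured as the correlation between predicted and observed phenotypes. This is a numpy sketch on simulated genotypes, with a ridge fit standing in for the prediction model; all sizes and the penalty value are made up:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, p = 200, 50
    X = rng.integers(0, 3, size=(n, p)).astype(float)   # genotypes (0/1/2)
    y = X @ rng.normal(0.0, 0.4, p) + rng.normal(0.0, 1.0, n)

    # One cross-validation fold: fit on the training set, then measure
    # prediction accuracy as corr(predicted, observed) on the validation set.
    perm = rng.permutation(n)
    train, valid = perm[:150], perm[150:]
    lam = 5.0
    beta = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(p),
                           X[train].T @ y[train])
    accuracy = np.corrcoef(X[valid] @ beta, y[valid])[0, 1]
    ```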

  9. Curved Microneedle Array-Based sEMG Electrode for Robust Long-Term Measurements and High Selectivity.

    PubMed

    Kim, Minjae; Kim, Taewan; Kim, Dong Sung; Chung, Wan Kyun

    2015-01-01

    Surface electromyography is widely used in many fields to infer human intention. However, conventional electrodes are not appropriate for long-term measurements and are easily influenced by the environment, so the range of applications of sEMG is limited. In this paper, we propose a flexible band-integrated, curved microneedle array electrode for robust long-term measurements, high selectivity, and easy applicability. Signal quality, in terms of long-term usability and sensitivity to perspiration, was investigated. Its motion-discriminating performance was also evaluated. The results show that the proposed electrode is robust to perspiration and can maintain a high-quality measuring ability for over 8 h. The proposed electrode also has high selectivity for motion compared with a commercial wet electrode and dry electrode. PMID:26153773

  10. Optimised sensor selection for control and fault tolerance of electromagnetic suspension systems: a robust loop shaping approach.

    PubMed

    Michail, Konstantinos; Zolotas, Argyrios C; Goodall, Roger M

    2014-01-01

    This paper presents a systematic design framework for selecting sensors in an optimised manner, simultaneously satisfying a set of given complex system control requirements, i.e. optimum and robust performance as well as fault tolerant control for high integrity systems. It is worth noting that optimum sensor selection in control system design is often a non-trivial task. Among all candidate sensor sets, the algorithm explores and separately optimises system performance with all the feasible sensor sets in order to identify fallback options under single or multiple sensor faults. The proposed approach combines modern robust control design, fault tolerant control, multiobjective optimisation and Monte Carlo techniques. Without loss of generality, its efficacy is tested on an electromagnetic suspension system via appropriate realistic simulations. PMID:24041402

  11. Curved Microneedle Array-Based sEMG Electrode for Robust Long-Term Measurements and High Selectivity

    PubMed Central

    Kim, Minjae; Kim, Taewan; Kim, Dong Sung; Chung, Wan Kyun

    2015-01-01

    Surface electromyography is widely used in many fields to infer human intention. However, conventional electrodes are not appropriate for long-term measurements and are easily influenced by the environment, so the range of applications of sEMG is limited. In this paper, we propose a flexible band-integrated, curved microneedle array electrode for robust long-term measurements, high selectivity, and easy applicability. Signal quality, in terms of long-term usability and sensitivity to perspiration, was investigated. Its motion-discriminating performance was also evaluated. The results show that the proposed electrode is robust to perspiration and can maintain a high-quality measuring ability for over 8 h. The proposed electrode also has high selectivity for motion compared with a commercial wet electrode and dry electrode. PMID:26153773

  12. Curved Microneedle Array-Based sEMG Electrode for Robust Long-Term Measurements and High Selectivity.

    PubMed

    Kim, Minjae; Kim, Taewan; Kim, Dong Sung; Chung, Wan Kyun

    2015-07-06

    Surface electromyography is widely used in many fields to infer human intention. However, conventional electrodes are not appropriate for long-term measurements and are easily influenced by the environment, so the range of applications of sEMG is limited. In this paper, we propose a flexible band-integrated, curved microneedle array electrode for robust long-term measurements, high selectivity, and easy applicability. Signal quality, in terms of long-term usability and sensitivity to perspiration, was investigated. Its motion-discriminating performance was also evaluated. The results show that the proposed electrode is robust to perspiration and can maintain a high-quality measuring ability for over 8 h. The proposed electrode also has high selectivity for motion compared with a commercial wet electrode and dry electrode.

  13. Screening Accuracy of Level 2 Autism Spectrum Disorder Rating Scales: A Review of Selected Instruments

    ERIC Educational Resources Information Center

    Norris, Megan; Lecavalier, Luc

    2010-01-01

    The goal of this review was to examine the state of Level 2, caregiver-completed rating scales for the screening of Autism Spectrum Disorders (ASDs) in individuals above the age of three years. We focused on screening accuracy and paid particular attention to comparison groups. Inclusion criteria required that scales be developed post ICD-10, be…

  14. Genomic selection accuracy for grain quality traits in biparental wheat populations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection (GS) is a promising tool for plant and animal breeding that uses genome wide molecular marker data to capture small and large effect quantitative trait loci and predict the genetic value of selection candidates. Genomic selection has been shown previously to have higher prediction ...

  15. Beam configuration selection for robust intensity-modulated proton therapy in cervical cancer using Pareto front comparison

    NASA Astrophysics Data System (ADS)

    van de Schoot, A. J. A. J.; Visser, J.; van Kesteren, Z.; Janssen, T. M.; Rasch, C. R. N.; Bel, A.

    2016-02-01

    The Pareto front reflects the optimal trade-offs between conflicting objectives and can be used to quantify the effect of different beam configurations on plan robustness and dose-volume histogram parameters. Therefore, our aim was to develop and implement a method to automatically approach the Pareto front in robust intensity-modulated proton therapy (IMPT) planning. Additionally, clinically relevant Pareto fronts based on different beam configurations will be derived and compared to enable beam configuration selection in cervical cancer proton therapy. A method to iteratively approach the Pareto front by automatically generating robustly optimized IMPT plans was developed. To verify plan quality, IMPT plans were evaluated on robustness by simulating range and position errors and recalculating the dose. For five retrospectively selected cervical cancer patients, this method was applied for IMPT plans with three different beam configurations using two, three and four beams. 3D Pareto fronts were optimized on target coverage (CTV D99%) and OAR doses (rectum V30Gy; bladder V40Gy). Per patient, proportions of non-approved IMPT plans were determined and differences between patient-specific Pareto fronts were quantified in terms of CTV D99%, rectum V30Gy and bladder V40Gy to perform beam configuration selection. Per patient and beam configuration, Pareto fronts were successfully sampled based on 200 IMPT plans of which on average 29% were non-approved plans. In all patients, IMPT plans based on the 2-beam set-up were completely dominated by plans with the 3-beam and 4-beam configuration. Compared to the 3-beam set-up, the 4-beam set-up increased the median CTV D99% on average by 0.2 Gy and decreased the median rectum V30Gy and median bladder V40Gy on average by 3.6% and 1.3%, respectively. This study demonstrates a method to automatically derive Pareto fronts in robust IMPT planning. For all patients, the defined four-beam configuration was found optimal in terms of

  16. Beam configuration selection for robust intensity-modulated proton therapy in cervical cancer using Pareto front comparison.

    PubMed

    van de Schoot, A J A J; Visser, J; van Kesteren, Z; Janssen, T M; Rasch, C R N; Bel, A

    2016-02-21

    The Pareto front reflects the optimal trade-offs between conflicting objectives and can be used to quantify the effect of different beam configurations on plan robustness and dose-volume histogram parameters. Therefore, our aim was to develop and implement a method to automatically approach the Pareto front in robust intensity-modulated proton therapy (IMPT) planning. Additionally, clinically relevant Pareto fronts based on different beam configurations will be derived and compared to enable beam configuration selection in cervical cancer proton therapy. A method to iteratively approach the Pareto front by automatically generating robustly optimized IMPT plans was developed. To verify plan quality, IMPT plans were evaluated on robustness by simulating range and position errors and recalculating the dose. For five retrospectively selected cervical cancer patients, this method was applied for IMPT plans with three different beam configurations using two, three and four beams. 3D Pareto fronts were optimized on target coverage (CTV D99%) and OAR doses (rectum V30Gy; bladder V40Gy). Per patient, proportions of non-approved IMPT plans were determined and differences between patient-specific Pareto fronts were quantified in terms of CTV D99%, rectum V30Gy and bladder V40Gy to perform beam configuration selection. Per patient and beam configuration, Pareto fronts were successfully sampled based on 200 IMPT plans of which on average 29% were non-approved plans. In all patients, IMPT plans based on the 2-beam set-up were completely dominated by plans with the 3-beam and 4-beam configuration. Compared to the 3-beam set-up, the 4-beam set-up increased the median CTV D99% on average by 0.2 Gy and decreased the median rectum V30Gy and median bladder V40Gy on average by 3.6% and 1.3%, respectively. This study demonstrates a method to automatically derive Pareto fronts in robust IMPT planning. For all patients, the defined four-beam configuration was found optimal
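    The dominance relation used in both records above (a plan is dominated when another plan is at least as good on every objective and strictly better on at least one) can be made concrete. The objective values below are invented, and target coverage is negated so that all three objectives are minimized:

    ```python
    from typing import List, Tuple

    Plan = Tuple[float, float, float]

    def dominates(a: Plan, b: Plan) -> bool:
        # a dominates b if it is at least as good on every objective and
        # strictly better on at least one (all objectives minimized).
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(plans: List[Plan]) -> List[Plan]:
        # Keep only the non-dominated plans.
        return [p for p in plans if not any(dominates(q, p) for q in plans)]

    # Invented plan scores as (-CTV D99%, rectum V30Gy, bladder V40Gy);
    # coverage is negated so that lower is better for all three objectives.
    plans = [(-58.0, 40.0, 30.0), (-59.0, 35.0, 28.0), (-57.0, 45.0, 35.0)]
    front = pareto_front(plans)
    ```

    Here the second plan dominates the other two, which is the same sense in which the 2-beam plans were "completely dominated" by the 3- and 4-beam plans.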

  17. Accuracy of initial codon selection by aminoacyl-tRNAs on the mRNA-programmed bacterial ribosome

    PubMed Central

    Zhang, Jingji; Ieong, Ka-Weng; Johansson, Magnus; Ehrenberg, Måns

    2015-01-01

    We used a cell-free system with pure Escherichia coli components to study initial codon selection of aminoacyl-tRNAs in ternary complex with elongation factor Tu and GTP on messenger RNA-programmed ribosomes. We took advantage of the universal rate-accuracy trade-off for all enzymatic selections to determine how the efficiency of initial codon readings decreased linearly toward zero as the accuracy of discrimination against near-cognate and wobble codon readings increased toward the maximal asymptote, the d value. We report data on the rate-accuracy variation for 7 cognate, 7 wobble, and 56 near-cognate codon readings comprising about 15% of the genetic code. Their d values varied about 400-fold in the 200–80,000 range depending on type of mismatch, mismatch position in the codon, and tRNA isoacceptor type. We identified error hot spots (d = 200) for U:G misreading in second and U:U or G:A misreading in third codon position by His-tRNAHis and, as also seen in vivo, Glu-tRNAGlu. We suggest that the proofreading mechanism has evolved to attenuate error hot spots in initial selection such as those found here. PMID:26195797
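    The linear rate-accuracy trade-off described above can be written down directly: efficiency falls linearly from its maximum toward zero as the accuracy A approaches the asymptote d. The functional form and all numbers below are an illustrative sketch, not the paper's fitted model:

    ```python
    def efficiency(A: float, d: float, e_max: float = 1.0) -> float:
        # Linear rate-accuracy trade-off: efficiency equals e_max at the
        # no-discrimination point A = 1 and falls to zero as A approaches
        # the asymptote d (illustrative form, not fitted to data).
        if not 1 <= A <= d:
            raise ValueError("accuracy A must lie between 1 and its asymptote d")
        return e_max * (d - A) / (d - 1)

    # An error hot spot (d = 200) runs out of efficiency at far lower
    # accuracy than a typical reading with d = 20000.
    lo = efficiency(150.0, d=200.0)
    hi = efficiency(150.0, d=20000.0)
    ```

    This makes the hot-spot observation concrete: with d = 200, an accuracy of 150 already costs most of the efficiency, while a reading with d = 20000 barely notices it.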

  18. Feature Selection Has a Large Impact on One-Class Classification Accuracy for MicroRNAs in Plants

    PubMed Central

    Yousef, Malik; Saçar Demirci, Müşerref Duygu; Khalifa, Waleed; Allmer, Jens

    2016-01-01

    MicroRNAs (miRNAs) are short RNA sequences involved in posttranscriptional gene regulation. Their experimental analysis is complicated and, therefore, needs to be supplemented with computational miRNA detection. Currently computational miRNA detection is mainly performed using machine learning and in particular two-class classification. For machine learning, the miRNAs need to be parametrized and more than 700 features have been described. Positive training examples for machine learning are readily available, but negative data is hard to come by. Therefore, it seems prerogative to use one-class classification instead of two-class classification. Previously, we were able to almost reach two-class classification accuracy using one-class classifiers. In this work, we employ feature selection procedures in conjunction with one-class classification and show that there is up to 36% difference in accuracy among these feature selection methods. The best feature set allowed the training of a one-class classifier which achieved an average accuracy of ~95.6% thereby outperforming previous two-class-based plant miRNA detection approaches by about 0.5%. We believe that this can be improved upon in the future by rigorous filtering of the positive training examples and by improving current feature clustering algorithms to better target pre-miRNA feature selection. PMID:27190509
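    The impact of feature selection on one-class accuracy can be reproduced qualitatively with a toy experiment. A simple centroid-distance classifier stands in for the one-class classifiers used in the paper, and the data, feature counts, and the 95th-percentile threshold are all made up:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, n_info, n_noise = 200, 5, 95

    # Positives cluster away from the origin in the informative features;
    # the noise features carry no class signal at all.
    pos = np.hstack([rng.normal(2.0, 1.0, (n, n_info)), rng.normal(0.0, 1.0, (n, n_noise))])
    neg = np.hstack([rng.normal(0.0, 1.0, (n, n_info)), rng.normal(0.0, 1.0, (n, n_noise))])
    train, test_pos = pos[:100], pos[100:]

    def one_class_accuracy(cols):
        # Centroid-distance one-class classifier trained on positives only;
        # the decision threshold is the 95th percentile of training distances.
        centre = train[:, cols].mean(axis=0)
        thr = np.percentile(np.linalg.norm(train[:, cols] - centre, axis=1), 95)
        acc_pos = np.mean(np.linalg.norm(test_pos[:, cols] - centre, axis=1) <= thr)
        acc_neg = np.mean(np.linalg.norm(neg[:, cols] - centre, axis=1) > thr)
        return (acc_pos + acc_neg) / 2

    acc_selected = one_class_accuracy(np.arange(n_info))        # informative only
    acc_all = one_class_accuracy(np.arange(n_info + n_noise))   # all features
    ```

    Restricting the classifier to the informative columns yields markedly higher balanced accuracy than using every feature, mirroring the paper's finding that feature selection strongly affects one-class performance.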

  19. Combination of Sleeping Beauty transposition and chemically induced dimerization selection for robust production of engineered cells

    PubMed Central

    Kacherovsky, Nataly; Harkey, Michael A.; Blau, C. Anthony; Giachelli, Cecilia M.; Pun, Suzie H.

    2012-01-01

    The main methods for producing genetically engineered cells use viral vectors for which safety issues and manufacturing costs remain a concern. In addition, selection of desired cells typically relies on the use of cytotoxic drugs with long culture times. Here, we introduce an efficient non-viral approach combining the Sleeping Beauty (SB) Transposon System with selective proliferation of engineered cells by chemically induced dimerization (CID) of growth factor receptors. Minicircles carrying a SB transposon cassette containing a reporter transgene and a gene for the F36VFGFR1 fusion protein were delivered to the hematopoietic cell line Ba/F3. Stably-transduced Ba/F3 cell populations with >98% purity were obtained within 1 week using this positive selection strategy. Copy number analysis by quantitative PCR (qPCR) revealed that CID-selected cells contain on average higher copy numbers of transgenes than flow cytometry-selected cells, demonstrating selective advantage for cells with multiple transposon insertions. A diverse population of cells is present both before and after culture in CID media, although site-specific qPCR of transposon junctions show that population diversity is significantly reduced after selection due to preferential expansion of clones with multiple integration events. This non-viral, positive selection approach is an attractive alternative for producing engineered cells. PMID:22402491

  20. The effect of tray selection on the accuracy of elastomeric impression materials.

    PubMed

    Gordon, G E; Johnson, G H; Drennon, D G

    1990-01-01

    This study evaluated the accuracy of reproduction of stone casts made from impressions using different tray and impression materials. The tray materials used were an acrylic resin, a thermoplastic, and a plastic. The impression materials used were an addition silicone, a polyether, and a polysulfide. Impressions were made of a stainless steel master die that simulated crown preparations for a fixed partial denture and of an acrylic resin model with cross-arch and anteroposterior landmarks in stainless steel that typify clinical intra-arch distances. Impressions of the fixed partial denture simulation were made with all three impression materials and all three tray types. Impressions of the cross-arch and anteroposterior landmarks were made using all three tray types with only the addition silicone impression material. Impressions were poured at 1 hour with a type IV dental stone. Data were analyzed using ANOVA with a sample size of five. Results indicated that custom-made trays of acrylic resin and the thermoplastic material performed similarly regarding die accuracy and produced clinically acceptable casts. The stock plastic tray consistently produced casts with greater dimensional change than the two custom trays. PMID:2404101

  1. Genetic model selection in genome-wide association studies: robust methods and the use of meta-analysis.

    PubMed

    Bagos, Pantelis G

    2013-06-01

    In genetic association studies (GAS) as well as in genome-wide association studies (GWAS), the mode of inheritance (dominant, additive and recessive) is usually not known a priori. Assuming an incorrect mode of inheritance may lead to substantial loss of power, whereas on the other hand, testing all possible models may result in an increased type I error rate. The situation is even more complicated in the meta-analysis of GAS or GWAS, in which individual studies are synthesized to derive an overall estimate. Meta-analysis increases the power to detect weak genotype effects, but heterogeneity and incompatibility between the included studies complicate things further. In this review, we present a comprehensive summary of the statistical methods used for robust analysis and genetic model selection in GAS and GWAS. We then discuss the application of such methods in the context of meta-analysis. We describe the theoretical properties of the various methods and the foundations on which they are based. We also present the available software implementations of the described methods. Finally, since only few of the available robust methods have been applied in the meta-analysis setting, we present some simple extensions that allow robust meta-analysis of GAS and GWAS. Possible extensions and proposals for future work are also discussed.
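    One family of robust methods surveyed here tests each candidate mode of inheritance with a Cochran-Armitage trend test and then takes the maximum over models (the MAX3 statistic). A self-contained sketch, with an invented 2 × 3 genotype table:

    ```python
    import math

    def trend_z(cases, controls, scores):
        # Cochran-Armitage trend test Z for a 2 x 3 genotype table;
        # `scores` encodes the assumed mode of inheritance.
        n = [r + s for r, s in zip(cases, controls)]
        N, R = sum(n), sum(cases)
        s1 = sum(t * r for t, r in zip(scores, cases))
        s2 = sum(t * m for t, m in zip(scores, n))
        s3 = sum(t * t * m for t, m in zip(scores, n))
        var = R * (N - R) * (N * s3 - s2 * s2) / N
        return (N * s1 - R * s2) / math.sqrt(var)

    # Invented genotype counts (aa, Aa, AA) for cases and controls.
    cases, controls = [10, 20, 40], [40, 20, 10]
    z_add = trend_z(cases, controls, (0, 1, 2))     # additive scores
    z_dom = trend_z(cases, controls, (0, 1, 1))     # dominant scores
    z_rec = trend_z(cases, controls, (0, 0, 1))     # recessive scores
    max3 = max(abs(z_add), abs(z_dom), abs(z_rec))  # robust MAX3 statistic
    ```

    Because MAX3 is a maximum over correlated statistics, its null distribution is not standard normal; in practice its significance is assessed by permutation or by the approximations discussed in the review.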

  2. Refining the Use of Linkage Disequilibrium as a Robust Signature of Selective Sweeps.

    PubMed

    Jacobs, Guy S; Sluckin, Tim J; Kivisild, Toomas

    2016-08-01

    During a selective sweep, characteristic patterns of linkage disequilibrium can arise in the genomic region surrounding a selected locus. These have been used to infer past selective sweeps. However, the recombination rate is known to vary substantially along the genome for many species. We here investigate the effectiveness of current (Kelly's [Formula: see text] and [Formula: see text]) and novel statistics at inferring hard selective sweeps based on linkage disequilibrium distortions under different conditions, including a human-realistic demographic model and recombination rate variation. When the recombination rate is constant, Kelly's [Formula: see text] offers high power, but is outperformed by a novel statistic that we test, which we call [Formula: see text] We also find this statistic to be effective at detecting sweeps from standing variation. When recombination rate fluctuations are included, there is a considerable reduction in power for all linkage disequilibrium-based statistics. However, this can largely be reversed by appropriately controlling for expected linkage disequilibrium using a genetic map. To further test these different methods, we perform selection scans on well-characterized HapMap data, finding that all three statistics-[Formula: see text] Kelly's [Formula: see text] and [Formula: see text]-are able to replicate signals at regions previously identified as selection candidates based on population differentiation or the site frequency spectrum. While [Formula: see text] replicates most candidates when recombination map data are not available, the [Formula: see text] and [Formula: see text] statistics are more successful when recombination rate variation is controlled for. Given both this and their higher power in simulations of selective sweeps, these statistics are preferred when information on local recombination rate variation is available. PMID:27516617
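    Kelly's statistic referenced above (the average pairwise r² across segregating sites in a window, usually written Z_nS) is straightforward to compute from 0/1 haplotypes. This sketch covers only that classical statistic, not the paper's novel one:

    ```python
    import numpy as np

    def kelly_znS(haps: np.ndarray) -> float:
        # Kelly's Z_nS: the average of r^2 over all pairs of segregating
        # sites in a window (rows = haplotypes, columns = 0/1 sites).
        freqs = haps.mean(axis=0)
        h = haps[:, (freqs > 0) & (freqs < 1)]    # drop monomorphic sites
        r = np.corrcoef(h, rowvar=False)
        iu = np.triu_indices(h.shape[1], k=1)
        return float(np.mean(r[iu] ** 2))

    # Perfect LD: every site carries the same 2/2 split, so Z_nS = 1.
    haps = np.array([[1, 1, 1],
                     [1, 1, 1],
                     [0, 0, 0],
                     [0, 0, 0]])
    znS_perfect = kelly_znS(haps)
    ```

    The review's point about recombination maps amounts to comparing the observed Z_nS in a window against its expectation given the local recombination rate, rather than against a genome-wide constant.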

  3. Refining the Use of Linkage Disequilibrium as a Robust Signature of Selective Sweeps.

    PubMed

    Jacobs, Guy S; Sluckin, Tim J; Kivisild, Toomas

    2016-08-01

    During a selective sweep, characteristic patterns of linkage disequilibrium can arise in the genomic region surrounding a selected locus. These have been used to infer past selective sweeps. However, the recombination rate is known to vary substantially along the genome for many species. We here investigate the effectiveness of current (Kelly's [Formula: see text] and [Formula: see text]) and novel statistics at inferring hard selective sweeps based on linkage disequilibrium distortions under different conditions, including a human-realistic demographic model and recombination rate variation. When the recombination rate is constant, Kelly's [Formula: see text] offers high power, but is outperformed by a novel statistic that we test, which we call [Formula: see text] We also find this statistic to be effective at detecting sweeps from standing variation. When recombination rate fluctuations are included, there is a considerable reduction in power for all linkage disequilibrium-based statistics. However, this can largely be reversed by appropriately controlling for expected linkage disequilibrium using a genetic map. To further test these different methods, we perform selection scans on well-characterized HapMap data, finding that all three statistics-[Formula: see text] Kelly's [Formula: see text] and [Formula: see text]-are able to replicate signals at regions previously identified as selection candidates based on population differentiation or the site frequency spectrum. While [Formula: see text] replicates most candidates when recombination map data are not available, the [Formula: see text] and [Formula: see text] statistics are more successful when recombination rate variation is controlled for. Given both this and their higher power in simulations of selective sweeps, these statistics are preferred when information on local recombination rate variation is available.

  4. Impact of marker ascertainment bias on genomic selection accuracy and estimates of genetic diversity

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genome-wide molecular markers are readily being applied to evaluate genetic diversity in germplasm collections and for making genomic selections in breeding programs. To accurately predict phenotypes and assay genetic diversity, molecular markers should assay a representative sample of the polymorp...

  5. Imputation of unordered markers and the impact on genomic selection accuracy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection, a breeding method that promises to accelerate rates of genetic gain, requires dense, genome-wide marker data. Sequence-based genotyping methods can generate de novo large numbers of markers. However, without a reference genome, these markers are unordered and typically have a lar...

  6. Imputation of unordered markers and the impact on genomic selection accuracy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection, a breeding method that promises to accelerate rates of genetic gain, requires dense, genome-wide marker data. Genotyping-by-sequencing can generate a large number of de novo markers. However, without a reference genome, these markers are unordered and typically have a large propo...

  7. Robust expression of heterologous genes by selection marker fusion system in improved Chlamydomonas strains.

    PubMed

    Kong, Fantao; Yamasaki, Tomohito; Kurniasih, Sari Dewi; Hou, Liyuan; Li, Xiaobo; Ivanova, Nina; Okada, Shigeru; Ohama, Takeshi

    2015-09-01

    Chlamydomonas is a very attractive candidate plant cell factory. However, its main drawback is the difficulty of finding transformants that robustly express heterologous genes randomly inserted in the nuclear genome. We previously showed that the endogenous squalene synthase (SQS) gene of Chlamydomonas was overexpressed much more efficiently in a mutant strain [UV-mediated mutant (UVM) 4] than in the wild type. In this study, we evaluated the potential of a new mutant strain, met1, which contains a tag in the maintenance-type methyltransferase gene that is expected to play a key role in the maintenance of transcriptional gene silencing. The broader usefulness of the UVM4 strain for expressing heterologous genes was also analyzed. We failed to overexpress CrSSL3 cDNA, a codon-adjusted squalene synthase-like gene originating from Botryococcus braunii, using the common expression cassette in the wild-type CC-1690 and UVM4 strains. However, we succeeded in isolating western blot-positive transformants through the combined use of the UVM4 strain and the ble2A expression system, whose expression cassette bears an ORF of the target gene fused to the antibiotic resistance gene ble via the foot-and-mouth disease virus (FMDV) self-cleaving 2A sequence. It is noteworthy that even with this system, huge deviations in accumulated protein levels were still observed among the UVM4 transformants. PMID:25660568

  8. Bayesian approach increases accuracy when selecting cowpea genotypes with high adaptability and phenotypic stability.

    PubMed

    Barroso, L M A; Teodoro, P E; Nascimento, M; Torres, F E; Dos Santos, A; Corrêa, A M; Sagrilo, E; Corrêa, C C G; Silva, F A; Ceccon, G

    2016-01-01

    This study aimed to verify that a Bayesian approach could be used for the selection of upright cowpea genotypes with high adaptability and phenotypic stability, and the study also evaluated the efficiency of using informative and minimally informative a priori distributions. Six trials were conducted in randomized blocks, and the grain yield of 17 upright cowpea genotypes was assessed. To represent the minimally informative a priori distributions, a probability distribution with high variance was used, and a meta-analysis concept was adopted to represent the informative a priori distributions. Bayes factors were used to conduct comparisons between the a priori distributions. The Bayesian approach was effective for selection of upright cowpea genotypes with high adaptability and phenotypic stability using the Eberhart and Russell method. Bayes factors indicated that the use of informative a priori distributions provided more accurate results than minimally informative a priori distributions. PMID:26985961

  10. Accuracy of travel time distribution (TTD) models as affected by TTD complexity, observation errors, and model and tracer selection

    USGS Publications Warehouse

    Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.

    2014-01-01

Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
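The basic mechanics shared by such analytical TTD models can be illustrated simply: the concentration at a well is the convolution of the tracer input history with the TTD. A minimal sketch using the simplest exponential TTD (an assumption for illustration; the paper's SDM and other calibrated models have different shapes):

```python
import numpy as np

def exponential_ttd(t, mean_age):
    """Exponential travel time distribution g(tau) = exp(-tau/T) / T."""
    return np.exp(-t / mean_age) / mean_age

def well_concentration(c_in, ttd, dt):
    """Concentration at the well: c_out(t) = sum over tau of g(tau) * c_in(t - tau) * dt."""
    n = len(c_in)
    full = np.convolve(c_in, ttd * dt)  # discrete causal convolution
    return full[:n]
```

For a step input held constant long enough, the well concentration climbs toward the input value as the full distribution of travel times is flushed through.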

  11. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    PubMed Central

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-01-01

Long-range ground targets are difficult to detect in a noisy, cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate but also a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics; a method optimized for IR target detection performs poorly in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost.
The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated
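A hedged sketch of the decision-level fusion stage: per-candidate detection features from each sensor are stacked, and AdaBoost learns which features to weight. The feature names and synthetic labels below are illustrative assumptions, not the authors' data (scikit-learn assumed):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
# Hypothetical per-candidate features from each sensor, e.g.
# SAR: local contrast, scatter density; IR: peak intensity, thermal gradient.
sar = rng.normal(size=(n, 2))
ir = rng.normal(size=(n, 2))
# Synthetic ground truth: a target is present when both sensors respond.
y = ((sar[:, 0] + ir[:, 0]) > 0).astype(int)

# Decision-level fusion: stack the per-sensor feature vectors
X = np.hstack([sar, ir])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)  # held-out detection accuracy
```

Boosted stumps implicitly perform feature selection: features with no discriminative power receive negligible weight across the ensemble.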

  14. Selecting training and test images for optimized anomaly detection algorithms in hyperspectral imagery through robust parameter design

    NASA Astrophysics Data System (ADS)

    Mindrup, Frank M.; Friend, Mark A.; Bauer, Kenneth W.

    2011-06-01

There are numerous anomaly detection algorithms proposed for hyperspectral imagery. Robust parameter design (RPD) techniques have been applied to some of these algorithms in an attempt to choose robust settings capable of operating consistently across a large variety of image scenes. Typically, training and test sets of hyperspectral images are chosen randomly. Previous research developed a framework for optimizing anomaly detection in hyperspectral imagery (HSI) by considering specific image characteristics as noise variables within the context of RPD; these characteristics include the Fisher score, the ratio of target pixels, and the number of clusters. This paper describes a method for selecting hyperspectral image training and test subsets that yield consistent RPD results based on these noise features. These subsets are not necessarily orthogonal, but still provide improvements over random training and test subset assignments by maximizing the volume and average distance between image noise characteristics. Several different mathematical models representing the value of a training and test set, based on such measures as the D-optimal score and various distance norms, are tested in a simulation experiment.
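A simple way to realize "maximizing the distance between image noise characteristics" is a greedy maximin design over the noise-feature vectors. This sketch is our illustration, not the paper's exact D-optimal formulation:

```python
import numpy as np

def greedy_maximin_subset(features, k):
    """Pick k rows that greedily maximize the minimum pairwise distance.

    features: (n_images, n_noise_characteristics), e.g. standardized
    columns for Fisher score, target-pixel ratio, and cluster count.
    """
    # start from the point farthest from the centroid
    start = int(np.argmax(np.linalg.norm(features - features.mean(0), axis=1)))
    chosen = [start]
    while len(chosen) < k:
        # distance from each candidate to its nearest already-chosen point
        d = np.min(
            np.linalg.norm(features[:, None, :] - features[chosen][None, :, :], axis=2),
            axis=1,
        )
        d[chosen] = -np.inf  # never re-pick a chosen image
        chosen.append(int(np.argmax(d)))
    return chosen
```

Greedy maximin spreads the training images across the noise-characteristic space, so RPD settings are exercised against diverse scene conditions rather than a random (possibly clumped) sample.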

  15. Improved localization accuracy in double-helix point spread function super-resolution fluorescence microscopy using selective-plane illumination

    NASA Astrophysics Data System (ADS)

    Yu, Jie; Cao, Bo; Li, Heng; Yu, Bin; Chen, Danni; Niu, Hanben

    2014-09-01

Recently, three-dimensional (3D) super-resolution imaging of cellular structures in thick samples has been enabled by wide-field super-resolution fluorescence microscopy based on the double-helix point spread function (DH-PSF). However, when the sample is epi-illuminated, background fluorescence from excited out-of-focus molecules reduces the signal-to-noise ratio (SNR) of the in-focus image. In this paper, we resort to a selective-plane illumination strategy, which has been used for tissue-level imaging and single-molecule tracking, to eliminate out-of-focus background and to improve the SNR and localization accuracy of standard DH-PSF super-resolution imaging in thick samples. We present a novel super-resolution microscope that combines selective-plane illumination and the DH-PSF. The setup utilizes a well-defined laser light sheet whose theoretical thickness is 1.7 μm (FWHM) at a 640 nm excitation wavelength. The image SNR of DH-PSF microscopy under selective-plane illumination and under epi-illumination is compared. As expected, the SNR of DH-PSF microscopy under selective-plane illumination is increased remarkably, so the 3D localization precision of the DH-PSF would be improved significantly. We demonstrate these capabilities by 3D localization of single fluorescent particles. These features will make the method well suited to thick samples in future biomedical applications.

  16. Facilitating the selection and creation of accurate interatomic potentials with robust tools and characterization

    NASA Astrophysics Data System (ADS)

    Trautt, Zachary T.; Tavazza, Francesca; Becker, Chandler A.

    2015-10-01

    The Materials Genome Initiative seeks to significantly decrease the cost and time of development and integration of new materials. Within the domain of atomistic simulations, several roadblocks stand in the way of reaching this goal. While the NIST Interatomic Potentials Repository hosts numerous interatomic potentials (force fields), researchers cannot immediately determine the best choice(s) for their use case. Researchers developing new potentials, specifically those in restricted environments, lack a comprehensive portfolio of efficient tools capable of calculating and archiving the properties of their potentials. This paper elucidates one solution to these problems, which uses Python-based scripts that are suitable for rapid property evaluation and human knowledge transfer. Calculation results are visible on the repository website, which reduces the time required to select an interatomic potential for a specific use case. Furthermore, property evaluation scripts are being integrated with modern platforms to improve discoverability and access of materials property data. To demonstrate these scripts and features, we will discuss the automation of stacking fault energy calculations and their application to additional elements. While the calculation methodology was developed previously, we are using it here as a case study in simulation automation and property calculations. We demonstrate how the use of Python scripts allows for rapid calculation in a more easily managed way where the calculations can be modified, and the results presented in user-friendly and concise ways. Additionally, the methods can be incorporated into other efforts, such as openKIM.

  17. AMES Stereo Pipeline Derived DEM Accuracy Experiment Using LROC-NAC Stereopairs and Weighted Spatial Dependence Simulation for Lunar Site Selection

    NASA Astrophysics Data System (ADS)

    Laura, J. R.; Miller, D.; Paul, M. V.

    2012-03-01

    An accuracy assessment of AMES Stereo Pipeline derived DEMs for lunar site selection using weighted spatial dependence simulation and a call for outside AMES derived DEMs to facilitate a statistical precision analysis.

  18. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  19. The Role of Some Selected Psychological and Personality Traits of the Rater in the Accuracy of Self- and Peer-Assessment

    ERIC Educational Resources Information Center

    AlFallay, Ibrahim

    2004-01-01

    This paper investigates the role of some selected psychological and personality traits of learners of English as a foreign language in the accuracy of self- and peer-assessments. The selected traits were motivation types, self-esteem, anxiety, motivational intensity, and achievement. 78 students of English as a foreign language participated in…

  20. Mitigating arsenic crisis in the developing world: role of robust, reusable and selective hybrid anion exchanger (HAIX).

    PubMed

    German, Michael; Seingheng, Hul; SenGupta, Arup K

    2014-08-01

In trying to address the public health crisis from the lack of potable water, millions of tube wells have been installed across the world. From these tube wells, natural groundwater contamination from arsenic regularly puts at risk the health of over 100 million people in South and Southeast Asia. Although there have been many research projects, awards and publications, appropriate treatment technology has not been matched to ground level realities and water solutions have not scaled to reach millions of people. For thousands of people from Nepal to India to Cambodia, hybrid anion exchange (HAIX) resins have provided arsenic-safe water for up to nine years. Synthesis of HAIX resins has been commercialized and they are now available globally. Robust, reusable and arsenic-selective, HAIX has been in operation in rural communities over numerous cycles of exhaustion-regeneration. All necessary testing and system maintenance is organized by community-level water staff. Removed arsenic is safely stored in a scientifically and environmentally appropriate manner to prevent future hazards to animals or people. Recent installations have shown the profitability of HAIX-based arsenic treatment, with capital payback periods of only two years in ideal locations. With an appropriate implementation model, HAIX-based treatment can rapidly scale and provide arsenic-safe water to at-risk populations. PMID:24321388

  1. Orthogonal Selection and Fixing of Coordination Self-Assembly Pathways for Robust Metallo-organic Ensemble Construction.

    PubMed

    Burke, Michael J; Nichol, Gary S; Lusby, Paul J

    2016-07-27

Supramolecular construction strategies have overwhelmingly relied on the principles of thermodynamic control. While this approach has yielded an incredibly diverse and striking collection of ensembles, there are downsides, most obviously the necessity to trade off reversibility against structural integrity. Herein we describe an alternative "assembly-followed-by-fixing" approach that possesses the high-yielding, atom-efficient advantages of reversible self-assembly reactions, yet gives structures that possess a covalent-like level of kinetic robustness. We have chosen to exemplify these principles in the preparation of a series of M2L3 helicates and M4L6 tetrahedra. While the rigidity of various bis(bidentate) ligands causes the larger species to be energetically preferred, we are able to freeze the self-assembly process under "non-ambient" conditions, to selectively give the disfavored M2L3 helicates. We also demonstrate "kinetic-stimuli" (redox and light)-induced switching between architectures, notably reconstituting the lower energy tetrahedra into highly distorted helicates. PMID:27351912

  3. Atrial-like cardiomyocytes from human pluripotent stem cells are a robust preclinical model for assessing atrial-selective pharmacology

    PubMed Central

    Devalla, Harsha D; Schwach, Verena; Ford, John W; Milnes, James T; El-Haou, Said; Jackson, Claire; Gkatzis, Konstantinos; Elliott, David A; Chuva de Sousa Lopes, Susana M; Mummery, Christine L; Verkerk, Arie O; Passier, Robert

    2015-01-01

Drugs targeting atrial-specific ion channels, Kv1.5 or Kir3.1/3.4, are being developed as new therapeutic strategies for atrial fibrillation. However, current preclinical studies carried out in non-cardiac cell lines or animal models may not accurately represent the physiology of a human cardiomyocyte (CM). In the current study, we tested whether human embryonic stem cell (hESC)-derived atrial CMs could predict atrial selectivity of pharmacological compounds. By modulating retinoic acid signaling during hESC differentiation, we generated atrial-like (hESC-atrial) and ventricular-like (hESC-ventricular) CMs. We found the expression of atrial-specific ion channel genes, KCNA5 (encoding Kv1.5) and KCNJ3 (encoding Kir 3.1), in hESC-atrial CMs and further demonstrated that these ion channel genes are regulated by COUP-TF transcription factors. Moreover, in response to the multiple ion channel blocker vernakalant and the Kv1.5 blocker XEN-D0101, hESC-atrial but not hESC-ventricular CMs showed action potential (AP) prolongation due to a reduction in early repolarization. In hESC-atrial CMs, XEN-R0703, a novel Kir3.1/3.4 blocker, restored the AP shortening caused by carbachol (CCh). Neither CCh nor XEN-R0703 had an effect on hESC-ventricular CMs. In summary, we demonstrate that hESC-atrial CMs are a robust model for pre-clinical testing to assess atrial selectivity of novel antiarrhythmic drugs. PMID:25700171

  4. Toward robust deconvolution of pass-through paleomagnetic measurements: new tool to estimate magnetometer sensor response and laser interferometry of sample positioning accuracy

    NASA Astrophysics Data System (ADS)

    Oda, Hirokuni; Xuan, Chuang; Yamamoto, Yuhji

    2016-07-01

Pass-through superconducting rock magnetometers (SRM) offer rapid and high-precision remanence measurements for continuous samples that are essential for modern paleomagnetism studies. However, continuous SRM measurements are inevitably smoothed and distorted due to the convolution effect of the SRM sensor response. Deconvolution is necessary to restore accurate magnetization from pass-through SRM data, and robust deconvolution requires a reliable estimate of the SRM sensor response as well as an understanding of the uncertainties associated with the SRM measurement system. In this paper, we use the SRM at the Kochi Core Center (KCC), Japan, as an example to introduce a new tool and procedure for accurate and efficient estimation of SRM sensor response. To quantify uncertainties associated with the SRM measurement due to track positioning errors and test their effects on deconvolution, we employed laser interferometry for precise monitoring of track positions both with and without a u-channel sample on the SRM tray. The acquired KCC SRM sensor response shows a significant cross-term of Z-axis magnetization on the X-axis pick-up coil and full widths of ~46-54 mm at half-maximum response for the three pick-up coils, which are significantly narrower than those (~73-80 mm) for the liquid He-free SRM at Oregon State University. Laser interferometry measurements on the KCC SRM tracking system indicate positioning uncertainties of ~0.1-0.2 and ~0.5 mm for tracking with and without a u-channel sample on the tray, respectively. Positioning errors appear to have reproducible components of up to ~0.5 mm, possibly due to patterns or damage on the tray surface or the rope used for the tracking system. Deconvolution of 50,000 simulated measurement data with realistic error introduced based on the position uncertainties indicates that although the SRM tracking system has recognizable positioning uncertainties, they do not significantly debilitate the use of deconvolution to accurately restore high
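As an illustration of why a reliable sensor-response estimate matters, a minimal frequency-domain (Wiener-regularized) deconvolution can be sketched as below. This is a generic scheme under simplifying assumptions (single axis, periodic boundaries), not the authors' deconvolution algorithm:

```python
import numpy as np

def wiener_deconvolve(measured, response, eps=1e-3):
    """Restore magnetization from a smoothed pass-through measurement.

    measured: signal observed by the magnetometer (the convolution of the
    true magnetization with the sensor response); response: sensor response
    sampled on the same grid, normalized to unit sum; eps: regularization
    that damps frequencies where the response has little power (controls
    noise amplification). Assumes periodic boundaries (FFT-based).
    """
    H = np.fft.fft(response)
    Y = np.fft.fft(measured)
    # Wiener-style regularized inverse filter
    X = Y * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft(X))
```

An error in the assumed response (e.g. wrong width of the pick-up coil sensitivity) propagates directly into the restored magnetization, which is why the paper invests in measuring the response and the track-positioning uncertainty so carefully.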

  5. Mutations in a conserved region of RNA polymerase II influence the accuracy of mRNA start site selection.

    PubMed Central

    Hekmatpanah, D S; Young, R A

    1991-01-01

    A sensitive phenotypic assay has been used to identify mutations affecting transcription initiation in the genes encoding the two large subunits of Saccharomyces cerevisiae RNA polymerase II (RPB1 and RPB2). The rpb1 and rpb2 mutations alter the ratio of transcripts initiated at two adjacent start sites of a delta-insertion promoter. Of a large number of rpb1 and rpb2 mutations screened, only a few affect transcription initiation patterns at delta-insertion promoters, and these mutations are in close proximity to each other within both RPB1 and RPB2. The two rpb1 mutations alter amino acid residues within homology block G, a region conserved in the large subunits of all RNA polymerases. The three strong rpb2 mutations alter adjacent amino acids. At a wild-type promoter, the rpb1 mutations affect the accuracy of mRNA start site selection by producing a small but detectable increase in the 5'-end heterogeneity of transcripts. These RNA polymerase II mutations implicate specific portions of the enzyme in aspects of transcription initiation. Images PMID:1922077

  6. Robust decoding of selective auditory attention from MEG in a competing-speaker environment via state-space modeling.

    PubMed

    Akram, Sahar; Presacco, Alessandro; Simon, Jonathan Z; Shamma, Shihab A; Babadi, Behtash

    2016-01-01

    The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy.

  7. Robust prediction of B-factor profile from sequence using two-stage SVR based on random forest feature selection.

    PubMed

    Pan, Xiao-Yong; Shen, Hong-Bin

    2009-01-01

B-factor is highly correlated with protein internal motion and is used to measure the uncertainty in the position of an atom within a crystal structure. Although the rapid progress of structural biology in recent years makes more accurate protein structures available than ever, with the avalanche of new protein sequences emerging in the post-genomic era, the gap between the known protein sequences and the known protein structures grows wider and wider. It is urgent to develop automated methods to predict the B-factor profile directly from amino acid sequences, so that it can be promptly utilized in basic research. In this article, we propose a novel approach, called PredBF, to predict the real value of the B-factor. We first extract both global and local features from the protein sequences as well as their evolutionary information; random forest feature selection is then applied to rank the importance of the features, and the most important features are input to a two-stage support vector regression (SVR) for prediction, where the initial predicted outputs from the first-stage SVR are further input to the second-stage SVR for final refinement. Our results reveal that a systematic analysis of the importance of different features yields deep insight into their different contributions and is necessary for developing effective B-factor prediction tools. The two-layer SVR prediction model designed in this study further enhanced the robustness of predicting the B-factor profile. As a web server, PredBF is freely available at: http://www.csbio.sjtu.edu.cn/bioinf/PredBF for academic use.
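The two-stage pipeline described (random forest ranking, then stacked SVRs) can be sketched with scikit-learn. The synthetic data, feature count, and top-5 cutoff below are our assumptions for illustration, not PredBF's actual configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n, p = 300, 20
X = rng.normal(size=(n, p))
# synthetic "B-factor" target depending on only three of the features
y = X[:, 0] + X[:, 1] - X[:, 2] + 0.1 * rng.normal(size=n)

# Stage 0: rank features by random forest importance, keep the top 5
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:5]

# Stage 1: first SVR trained on the selected features
svr1 = SVR().fit(X[:, top], y)
stage1 = svr1.predict(X[:, top])

# Stage 2: refine by feeding the stage-1 prediction back in as a feature
X2 = np.column_stack([X[:, top], stage1])
svr2 = SVR().fit(X2, y)
refined = svr2.predict(X2)
```

The feature-selection step both reduces the SVR's input dimension and, as the abstract argues, exposes which sequence-derived features actually carry the signal.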

  8. Clock accuracy and precision evolve as a consequence of selection for adult emergence in a narrow window of time in fruit flies Drosophila melanogaster.

    PubMed

    Kannan, Nisha N; Vaze, Koustubh M; Sharma, Vijay Kumar

    2012-10-15

    Although circadian clocks are believed to have evolved under the action of periodic selection pressures (selection on phasing) present in the geophysical environment, there is very little rigorous and systematic empirical evidence to support this. In the present study, we examined the effect of selection for adult emergence in a narrow window of time on the circadian rhythms of fruit flies Drosophila melanogaster. Selection was imposed in every generation by choosing flies that emerged during a 1 h window of time close to the emergence peak of baseline/control flies under 12 h:12 h light:dark cycles. To study the effect of selection on circadian clocks we estimated several quantifiable features that reflect inter- and intra-individual variance in adult emergence and locomotor activity rhythms. The results showed that with increasing generations, incidence of adult emergence and activity of adult flies during the 1 h selection window increased gradually in the selected populations. Flies from the selected populations were more homogenous in their clock period, were more coherent in their phase of entrainment, and displayed enhanced accuracy and precision in their emergence and activity rhythms compared with controls. These results thus suggest that circadian clocks in D. melanogaster evolve enhanced accuracy and precision when subjected to selection for emergence in a narrow window of time.

  9. Robust Selection Algorithm (RSA) for Multi-Omic Biomarker Discovery; Integration with Functional Network Analysis to Identify miRNA Regulated Pathways in Multiple Cancers.

    PubMed

    Sehgal, Vasudha; Seviour, Elena G; Moss, Tyler J; Mills, Gordon B; Azencott, Robert; Ram, Prahlad T

    2015-01-01

MicroRNAs (miRNAs) play a crucial role in the maintenance of cellular homeostasis by regulating the expression of their target genes. As such, the dysregulation of miRNA expression has been frequently linked to cancer. With molecular data linked to patient outcome accumulating rapidly, the identification of robust multi-omic molecular markers is critical for clinical impact. While previous bioinformatic tools have been developed to identify potential biomarkers in cancer, these methods do not allow rapid classification of oncogenes versus tumor suppressors while taking into account robust differential expression, cutoffs, p-values and non-normality of the data. Here, we propose a methodology, the Robust Selection Algorithm (RSA), that addresses these important problems in big-data omics analysis. The robustness of the survival analysis is ensured by identification of optimal cutoff values of omics expression, strengthened by p-values computed through intensive random resampling that takes into account any non-normality in the data, and by integration into multi-omic functional networks. Here we have analyzed pan-cancer miRNA patient data to identify functional pathways involved in cancer progression that are associated with the miRNAs selected by RSA. Our approach demonstrates how existing survival analysis techniques can be integrated with a functional network analysis framework to efficiently identify promising biomarkers and novel therapeutic candidates across diseases.
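A toy version of the optimal-cutoff search with resampling-based p-values might look like the following. Censoring and the multi-omic network integration are omitted; this is our simplification for illustration, not the RSA implementation:

```python
import numpy as np

def best_cutoff(expr, survival, n_perm=500, seed=0):
    """Pick the expression cutoff that best separates survival outcomes,
    then attach a permutation p-value (censoring ignored in this toy).

    Returns (cutoff, p_value)."""
    rng = np.random.default_rng(seed)
    # candidate cutoffs over the middle of the expression range
    cuts = np.quantile(expr, np.linspace(0.2, 0.8, 13))
    diffs = np.array(
        [abs(survival[expr > c].mean() - survival[expr <= c].mean()) for c in cuts]
    )
    c = float(cuts[np.argmax(diffs)])
    obs = diffs.max()
    # permutation null: shuffle outcomes, re-split at the chosen cutoff
    n_hi = int(np.sum(expr > c))
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(survival)
        count += abs(perm[:n_hi].mean() - perm[n_hi:].mean()) >= obs
    p = (count + 1) / (n_perm + 1)
    return c, p
```

Resampling makes the p-value distribution-free, which is the property the abstract emphasizes for handling non-normal omics data.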

  10. Accuracy in optical overlay metrology

    NASA Astrophysics Data System (ADS)

    Bringoltz, Barak; Marciano, Tal; Yaziv, Tal; DeLeeuw, Yaron; Klein, Dana; Feler, Yoel; Adam, Ido; Gurevich, Evgeni; Sella, Noga; Lindenfeld, Ze'ev; Leviant, Tom; Saltoun, Lilach; Ashwal, Eltsafon; Alumot, Dror; Lamhot, Yuval; Gao, Xindong; Manka, James; Chen, Bryan; Wagner, Mark

    2016-03-01

In this paper we discuss the mechanism by which process variations determine the overlay accuracy of optical metrology. We start by focusing on scatterometry, showing that the underlying physics of this mechanism involves interference between cavity modes that travel between the upper and lower gratings in the scatterometry target. A direct result is the behavior of accuracy as a function of wavelength, and the existence of relatively well defined spectral regimes in which overlay accuracy and process robustness degrade (`resonant regimes'). These resonances are separated by wavelength regions in which the overlay accuracy is better and independent of wavelength (we term these `flat regions'). The combination of flat and resonant regions forms a spectral signature which is unique to each overlay alignment and carries certain universal features with respect to different types of process variations. We term this signature the `landscape', and discuss its universality. Next, we show how to characterize overlay performance with a finite set of metrics that are available on the fly, derived from the angular behavior of the signal and the way it flags resonances. These metrics are used to guarantee the selection of accurate recipes and targets for the metrology tool, and for process control with the overlay tool. We end with comments on the similarity of imaging overlay to scatterometry overlay, and on the way that pupil overlay scatterometry and field overlay scatterometry differ from an accuracy perspective.

  11. A quantitative method for evaluating numerical simulation accuracy of time-transient Lamb wave propagation with its applications to selecting appropriate element size and time step.

    PubMed

    Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui

    2016-01-01

The Lamb wave technique has been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, owing to its multi-mode and dispersive nature, Lamb wave propagation is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles, yet few quantitative studies evaluating the accuracy of these simulations have been reported. In this paper, a method based on cross-correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting position accuracy and shape accuracy, are first identified. Two quantitative indices derived from cross-correlation analysis between a simulated signal and a reference waveform, the group velocity error (GVE) and the maximum absolute value of the cross-correlation coefficient (MACCC), are then proposed to assess the position and shape errors of the simulated signal. In this way, simulation accuracy in both position and shape is quantitatively evaluated. To apply the proposed method to selecting an appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method is developed, and proper element sizes for different element types and time steps for different time integration schemes are selected. These results show that the proposed method is feasible and effective, and can serve as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation. PMID:26315506
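The shape index can be sketched as a normalized cross-correlation scan (a minimal pure-Python illustration of the idea, not the paper's implementation; the peak coefficient plays the role of MACCC, while the peak lag, combined with sampling rate and propagation distance, would feed a group-velocity error):

```python
import math

def cross_corr_metrics(sim, ref):
    # Maximum absolute normalized cross-correlation coefficient between a
    # simulated transient signal and a reference waveform, and the lag (in
    # samples) at which it occurs. The peak value measures shape agreement;
    # the lag measures the arrival-time (position) shift.
    denom = math.sqrt(sum(x * x for x in sim) * sum(x * x for x in ref))
    best_val, best_lag = 0.0, 0
    for lag in range(-(len(ref) - 1), len(sim)):
        acc = sum(sim[i + lag] * r
                  for i, r in enumerate(ref)
                  if 0 <= i + lag < len(sim))
        c = abs(acc) / denom if denom else 0.0
        if c > best_val:
            best_val, best_lag = c, lag
    return best_val, best_lag
```

For a simulated signal that is an exact two-sample-delayed copy of the reference, the sketch returns a coefficient of 1.0 at lag 2; values below 1.0 indicate shape (dispersion) error, and a nonzero lag indicates position error.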

  12. Canopy Temperature and Vegetation Indices from High-Throughput Phenotyping Improve Accuracy of Pedigree and Genomic Selection for Grain Yield in Wheat

    PubMed Central

    Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi

    2016-01-01

Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested whether using aerial measurements of canopy temperature, and green and red normalized difference vegetation index, as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on training and test sets, and grain yield on the training set, were modeled as multivariate, and compared to univariate models with grain yield on the training set only. Cross-validation accuracies were estimated within and across environments, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data, and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic, prediction models on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured in high-throughput could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362

  13. Robustness of thermal error compensation model of CNC machine tool

    NASA Astrophysics Data System (ADS)

    Lang, Xianli; Miao, Enming; Gong, Yayun; Niu, Pengcheng; Xu, Zhishang

    2013-01-01

Thermal error is the major factor restricting the accuracy of CNC machining, and modeling accuracy is the key to the thermal error compensation that enables precision machining on CNC machine tools. Traditional thermal error compensation models mostly focus on fitting accuracy without considering model robustness, which makes the research results difficult to put into practice. In this paper, model robustness experiments are conducted at different spindle speeds on a Leaderway V-450 machine tool. Fuzzy clustering combined with grey relevance analysis is used to select temperature-sensitive points for thermal error. A multiple linear regression (MLR) model and a distributed lag (DL) model are established from the multi-batch experimental data, and their robustness is analyzed. The analysis demonstrates the difference between fitting precision and prediction precision in engineering application, and provides a reference method for choosing a thermal error compensation model for CNC machine tools in practical engineering applications.

  14. Accuracy and cut-off point selection in three-class classification problems using a generalization of the Youden index.

    PubMed

    Nakas, Christos T; Alonzo, Todd A; Yiannoutsos, Constantin T

    2010-12-10

    We study properties of the index J(3), defined as the accuracy, or the maximum correct classification, for a given three-class classification problem. Specifically, using J(3) one can assess the discrimination between the three distributions and obtain an optimal pair of cut-off points c(1)
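Although the record is truncated, the cutoff-selection idea it describes can be sketched as a brute-force search for the pair of cut-off points maximizing the summed correct-classification fractions over three ordered classes (an illustrative sketch closely related to, but not identical with, the generalized Youden index estimator; names are illustrative):

```python
import itertools

def correct_fractions_sum(x1, x2, x3, c1, c2):
    # With cutoffs c1 < c2, class 1 is called for values <= c1, class 2 for
    # values in (c1, c2], and class 3 for values > c2; returns the sum of
    # the three correct-classification fractions (3.0 = perfect separation).
    f1 = sum(v <= c1 for v in x1) / len(x1)
    f2 = sum(c1 < v <= c2 for v in x2) / len(x2)
    f3 = sum(v > c2 for v in x3) / len(x3)
    return f1 + f2 + f3

def optimal_cutoffs(x1, x2, x3):
    # Brute-force search over all pairs of observed marker values.
    values = sorted(set(x1) | set(x2) | set(x3))
    pairs = itertools.combinations(values, 2)
    best = max(pairs, key=lambda p: correct_fractions_sum(x1, x2, x3, *p))
    return best, correct_fractions_sum(x1, x2, x3, *best)
```

For perfectly separated classes the search recovers the boundary values and the maximal score of 3.0; overlapping distributions yield lower scores and cutoffs that trade off the three class accuracies.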

  15. ZCURVE 3.0: identify prokaryotic genes with higher accuracy as well as automatically and accurately select essential genes.

    PubMed

    Hua, Zhi-Gang; Lin, Yan; Yuan, Ya-Zhou; Yang, De-Chang; Wei, Wen; Guo, Feng-Biao

    2015-07-01

    In 2003, we developed an ab initio program, ZCURVE 1.0, to find genes in bacterial and archaeal genomes. In this work, we present the updated version (i.e. ZCURVE 3.0). Using 422 prokaryotic genomes, the average accuracy was 93.7% with the updated version, compared with 88.7% with the original version. Such results also demonstrate that ZCURVE 3.0 is comparable with Glimmer 3.02 and may provide complementary predictions to it. In fact, the joint application of the two programs generated better results by correctly finding more annotated genes while also containing fewer false-positive predictions. As the exclusive function, ZCURVE 3.0 contains one post-processing program that can identify essential genes with high accuracy (generally >90%). We hope ZCURVE 3.0 will receive wide use with the web-based running mode. The updated ZCURVE can be freely accessed from http://cefg.uestc.edu.cn/zcurve/ or http://tubic.tju.edu.cn/zcurveb/ without any restrictions. PMID:25977299

  17. An Analysis of the Selected Materials Used in Step Measurements During Pre-Fits of Thermal Protection System Tiles and the Accuracy of Measurements Made Using These Selected Materials

    NASA Technical Reports Server (NTRS)

    Kranz, David William

    2010-01-01

The goal of this research project was to compare and contrast the selected materials used in step measurements during pre-fits of thermal protection system tiles, and to compare and contrast the accuracy of measurements made using these selected materials. The reasoning for conducting this test was to obtain a clearer understanding of which of these materials may yield the highest accuracy in comparison to the completed tile bond. These results in turn will be presented to United Space Alliance and Boeing North America for their own analysis and determination. Aerospace structures operate under extreme thermal environments: hot external aerothermal environments in high-Mach-number flight lead to high structural temperatures, and the height differences from one tile to another are critical during high-Mach reentries. The Space Shuttle Thermal Protection System is a very delicate and highly calculated system; the thermal tiles on the ship are measured to within an accuracy of .001 of an inch, and the accuracy of these tile measurements is critical to a successful reentry of an orbiter. This is why it is necessary to find the most accurate method for measuring the height of each tile in comparison to each of the other tiles. The test results indicated that there were indeed differences among the selected materials used in step measurements during pre-fits of Thermal Protection System tiles, and that Bees' Wax yielded a higher rate of accuracy when compared with the baseline test. In addition, testing for experience level in accuracy yielded no evidence of difference. Lastly, the use of the Trammel tool over the Shim Pack yielded variable differences for those tests.

  18. Persistent human cardiac Na+ currents in stably transfected mammalian cells: Robust expression and distinct open-channel selectivity among Class 1 antiarrhythmics.

    PubMed

    Wang, Ging Kuo; Russell, Gabriella; Wang, Sho-Ya

    2013-01-01

Miniature persistent late Na(+) currents in cardiomyocytes have been linked to arrhythmias and sudden death. The goals of this study are to establish a stable cell line expressing robust persistent cardiac Na(+) currents and to test Class 1 antiarrhythmic drugs for selective action against resting and open states. After transient transfection of an inactivation-deficient human cardiac Na(+) channel clone (hNav1.5-CW with L409C/A410W double mutations), transfected mammalian HEK293 cells were treated with 1 mg/ml G-418. Individual G-418-resistant colonies were isolated using glass cylinders. One colony with high expression of persistent Na(+) currents was subjected to a second colony selection. Cells from this colony remained stable in expressing robust peak Na(+) currents of 996 ± 173 pA/pF at +50 mV (n = 20). Persistent late Na(+) currents in these cells were clearly visible during a 4-second depolarizing pulse, albeit decaying slowly. This slow decay is likely due to slow inactivation of Na(+) channels and could be largely eliminated by 5 μM batrachotoxin. Peak cardiac hNav1.5-CW Na(+) currents were blocked by tetrodotoxin with an IC(50) value of 2.27 ± 0.08 μM (n = 6). At clinically relevant concentrations, Class 1 antiarrhythmics are much more selective in blocking persistent late Na(+) currents than their peak counterparts, with a selectivity ratio ranging from 80.6 (flecainide) to 3 (disopyramide). We conclude that (1) Class 1 antiarrhythmics differ widely in their resting- vs. open-channel selectivity, and (2) stably transfected HEK293 cells expressing large persistent hNav1.5-CW Na(+) currents are suitable for studying as well as screening potent open-channel blockers. PMID:23695971

  19. Use of Selected Goodness-of-Fit Statistics to Assess the Accuracy of a Model of Henry Hagg Lake, Oregon

    NASA Astrophysics Data System (ADS)

    Rounds, S. A.; Sullivan, A. B.

    2004-12-01

    Assessing a model's ability to reproduce field data is a critical step in the modeling process. For any model, some method of determining goodness-of-fit to measured data is needed to aid in calibration and to evaluate model performance. Visualizations and graphical comparisons of model output are an excellent way to begin that assessment. At some point, however, model performance must be quantified. Goodness-of-fit statistics, including the mean error (ME), mean absolute error (MAE), root mean square error, and coefficient of determination, typically are used to measure model accuracy. Statistical tools such as the sign test or Wilcoxon test can be used to test for model bias. The runs test can detect phase errors in simulated time series. Each statistic is useful, but each has its limitations. None provides a complete quantification of model accuracy. In this study, a suite of goodness-of-fit statistics was applied to a model of Henry Hagg Lake in northwest Oregon. Hagg Lake is a man-made reservoir on Scoggins Creek, a tributary to the Tualatin River. Located on the west side of the Portland metropolitan area, the Tualatin Basin is home to more than 450,000 people. Stored water in Hagg Lake helps to meet the agricultural and municipal water needs of that population. Future water demands have caused water managers to plan for a potential expansion of Hagg Lake, doubling its storage to roughly 115,000 acre-feet. A model of the lake was constructed to evaluate the lake's water quality and estimate how that quality might change after raising the dam. The laterally averaged, two-dimensional, U.S. Army Corps of Engineers model CE-QUAL-W2 was used to construct the Hagg Lake model. Calibrated for the years 2000 and 2001 and confirmed with data from 2002 and 2003, modeled parameters included water temperature, ammonia, nitrate, phosphorus, algae, zooplankton, and dissolved oxygen. Several goodness-of-fit statistics were used to quantify model accuracy and bias. Model
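The scalar goodness-of-fit statistics named above can be computed in a few lines (a generic sketch, not the study's code; `R2` here is the common 1 − SSres/SStot form of the coefficient of determination):

```python
import math

def fit_statistics(observed, simulated):
    # ME (bias), MAE, RMSE, and coefficient of determination R^2,
    # comparing model output against field measurements of equal length.
    n = len(observed)
    resid = [s - o for o, s in zip(observed, simulated)]
    me = sum(resid) / n
    mae = sum(abs(r) for r in resid) / n
    rmse = math.sqrt(sum(r * r for r in resid) / n)
    mean_obs = sum(observed) / n
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    ss_res = sum(r * r for r in resid)
    r2 = 1 - ss_res / ss_tot if ss_tot else float("nan")
    return {"ME": me, "MAE": mae, "RMSE": rmse, "R2": r2}
```

A uniformly warm-biased simulation illustrates the abstract's point that no single statistic suffices: a constant +0.5 offset gives ME = MAE = RMSE = 0.5 while R² can stay high, so the bias only shows in ME and in sign tests, not in correlation-style measures.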

  20. Accuracy of genomic prediction for BCWD resistance in rainbow trout using different genotyping platforms and genomic selection models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this study, we aimed to (1) predict genomic estimated breeding value (GEBV) for bacterial cold water disease (BCWD) resistance by genotyping training (n=583) and validation samples (n=53) with two genotyping platforms (24K RAD-SNP and 49K SNP) and using different genomic selection (GS) models (Ba...

  1. Performance, Accuracy, Data Delivery, and Feedback Methods in Order Selection: A Comparison of Voice, Handheld, and Paper Technologies

    ERIC Educational Resources Information Center

    Ludwig, Timothy D.; Goomas, David T.

    2007-01-01

A field study was conducted in auto-parts after-market distribution centers, where selectors used handheld computers to receive instructions and feedback about their product selection process. A wireless voice-interaction technology was then implemented in a multiple-baseline fashion across three departments of a warehouse (N = 14) and was associated…

  2. Improving accuracy of overhanging structures for selective laser melting through reliability characterization of single track formation on thick powder beds

    NASA Astrophysics Data System (ADS)

    Mohanty, Sankhya; Hattel, Jesper H.

    2016-04-01

Repeatability and reproducibility of parts produced by selective laser melting remain a standing issue which, coupled with a lack of standardized quality control, presents a major hindrance to the maturing of selective laser melting as an industrial-scale process. Consequently, numerical process modelling has been adopted to improve the predictability of the outputs of the selective laser melting process. Establishing the reliability of the process, however, is still a challenge, especially for components having overhanging structures. In this paper, a systematic approach towards establishing the reliability of overhanging-structure production by selective laser melting has been adopted. A calibrated, fast, multiscale thermal model is used to simulate single-track formation on a thick powder bed. Single tracks are manufactured on a thick powder bed using the same processing parameters, but at different locations in the powder bed and in different laser scanning directions. The differences in melt track widths and depths capture the effect of changes in incident beam power distribution due to location and processing direction. The experimental results are used in combination with the numerical model and subjected to uncertainty and reliability analysis. The cumulative probability distribution functions obtained for melt track widths and depths are found to be coherent with observed experimental values. The technique is subsequently extended to reliability characterization of single layers produced on a thick powder bed without support structures, by determining cumulative probability distribution functions for average layer thickness, sample density and thermal homogeneity.
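The cumulative probability distributions used for such reliability characterization can be approximated empirically from repeated measurements (a minimal sketch; the data values and names are illustrative, not from the paper):

```python
import bisect

def empirical_cdf(samples):
    # Returns a step function x -> P(X <= x) built from repeated
    # measurements, e.g. melt-track widths measured at different
    # powder-bed locations and scan directions.
    ordered = sorted(samples)
    n = len(ordered)
    return lambda x: bisect.bisect_right(ordered, x) / n

# Example: fraction of observed melt-track widths (illustrative values,
# in micrometres) not exceeding a design tolerance of 120 um.
width_cdf = empirical_cdf([118.0, 121.5, 119.2, 122.8, 120.4])
print(width_cdf(120.0))  # → 0.4
```

Reading the empirical CDF at a tolerance limit directly yields the kind of process-reliability estimate the abstract describes.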

  3. Prospects of Genomic Prediction in the USDA Soybean Germplasm Collection: Historical Data Creates Robust Models for Enhancing Selection of Accessions

    PubMed Central

    Jarquin, Diego; Specht, James; Lorenz, Aaron

    2016-01-01

The identification and mobilization of useful genetic variation from germplasm banks for use in breeding programs is critical for future genetic gain and protection against crop pests. Plummeting costs of next-generation sequencing and genotyping are revolutionizing the way in which researchers and breeders interface with plant germplasm collections. An example of this is the high-density genotyping of the entire USDA Soybean Germplasm Collection. We assessed the usefulness of 50K single nucleotide polymorphism data collected on 18,480 domesticated soybean (Glycine max) accessions and vast historical phenotypic data for developing genomic prediction models for protein, oil, and yield. The resulting genomic prediction models explained an appreciable amount of the variation in accession performance in independent validation trials, with correlations between predicted and observed values reaching up to 0.92 for oil and protein and 0.79 for yield. The optimization of training set design was explored using a series of cross-validation schemes. It was found that the target population and environment need to be well represented in the training set. Second, genomic prediction training sets appear to be robust to the presence of data from diverse geographical locations and genetic clusters. This finding, however, depends on the influence of shattering and lodging, and may be specific to soybean with its maturity groups. The distribution of 7608 nonphenotyped accessions was examined through the application of genomic prediction models. The distribution of predictions for phenotyped accessions was representative of the distribution of predictions for nonphenotyped accessions, with no nonphenotyped accessions predicted to fall far outside the range of predictions for phenotyped accessions. PMID:27247288

  4. Variable selection procedures before partial least squares regression enhance the accuracy of milk fatty acid composition predicted by mid-infrared spectroscopy.

    PubMed

    Gottardo, P; Penasa, M; Lopez-Villalobos, N; De Marchi, M

    2016-10-01

Mid-infrared spectroscopy is a high-throughput technique that allows the prediction of milk quality traits on a large scale. The accuracy of prediction achievable using partial least squares (PLS) regression is usually high for fatty acids (FA) that are more abundant in milk, whereas it decreases for FA present in low concentrations. Two variable selection methods, uninformative variable elimination and a genetic algorithm combined with PLS regression, were used in the present study to investigate their effect on the accuracy of prediction equations for milk FA profile, expressed either as a concentration on total identified FA or as a concentration in milk. For FA expressed on total identified FA, the coefficient of determination of cross-validation from PLS alone was low (0.25) for the prediction of polyunsaturated FA and medium (0.70) for saturated FA. The coefficient of determination increased to 0.54 and 0.95 for polyunsaturated and saturated FA, respectively, when FA were expressed on a milk basis and using PLS alone. Applying either algorithm before PLS regression improved the accuracy of prediction, especially for FA that are usually difficult to predict; the improvement with respect to PLS regression alone ranged from 9 to 80%. In general, FA were better predicted when their concentrations were expressed on a milk basis. These results might favor the use of prediction equations in the dairy industry for genetic purposes and payment systems. PMID:27522434

  5. Assessment of exercise-induced minor muscle lesions: the accuracy of Cyriax's diagnosis by selective tension paradigm.

    PubMed

    Franklin, M E; Conner-Kerr, T; Chamness, M; Chenier, T C; Kelly, R R; Hodge, T

    1996-09-01

The Cyriax selective tension assessment paradigm is commonly used by clinicians for the diagnosis of soft tissue lesions; however, studies have not demonstrated that it is a valid method. The purpose of this study was to examine the construct validity of the active motion, passive motion, resisted movement, and palpation components of the Cyriax selective tension diagnosis paradigm in subjects with an exercise-induced minor hamstring muscle lesion. Nine female subjects with a mean age of 23.6 years (SD = 4.7) and a mass of 57.3 kg (SD = 10.7) performed two sets of 20 maximal eccentric isokinetic knee flexor contractions designed to induce a minor muscle lesion of the hamstrings. Active range of motion, passive range of motion, knee extension end-feel pain relative to resistance sequence, knee flexor isometric strength, pain perception during knee flexor resisted movement testing, and palpation pain of the hamstrings were assessed at 0.5, 2, 12, 24, 48, and 72 hours postexercise and compared with Cyriax's hypothesized selective tension paradigm results. Consistent with Cyriax's paradigm, passive range of motion remained unchanged, and perceived pain of the hamstrings increased with resistance testing at 12, 24, 48, and 72 hours postexercise when compared with baseline. In addition, palpation pain of the hamstrings was significantly elevated at 48 and 72 hours after exercise (p < 0.05). In contrast to Cyriax's paradigm, active range of motion was significantly reduced over time (p < 0.05), with the least amount of motion compared to baseline (85%) occurring at 48 hours postexercise. Further, resisted movement testing found significant knee flexor isometric strength reductions over time (p < 0.05), with the greatest reductions (33%) occurring at 48 hours postexercise. According to Cyriax, when a minor muscle lesion is tested, it should be strong and painful; however, none of the postexercise time frames exhibited results that were strong and painful. This study

  6. Identification of selective inhibitors of RET and comparison with current clinical candidates through development and validation of a robust screening cascade

    PubMed Central

    Watson, Amanda J.; Hopkins, Gemma V.; Hitchin, Samantha; Begum, Habiba; Jones, Stuart; Jordan, Allan; Holt, Sarah; March, H. Nikki; Newton, Rebecca; Small, Helen; Stowell, Alex; Waddell, Ian D.; Waszkowycz, Bohdan; Ogilvie, Donald J.

    2016-01-01

RET (REarranged during Transfection) is a receptor tyrosine kinase which plays pivotal roles in regulating cell survival, differentiation, proliferation, migration and chemotaxis. Activation of RET is a mechanism of oncogenesis in medullary thyroid carcinomas, where both germline and sporadic activating somatic mutations are prevalent. At present, there are no known specific RET inhibitors in clinical development, although many potent inhibitors of RET have been opportunistically identified through selectivity profiling of compounds initially designed to target other tyrosine kinases. Vandetanib and cabozantinib, both multi-kinase inhibitors with RET activity, are approved for use in medullary thyroid carcinoma, but additional pharmacological activities, most notably inhibition of vascular endothelial growth factor receptor 2 (VEGFR2/KDR), lead to dose-limiting toxicity. The recent identification of RET fusions in ~1% of lung adenocarcinoma patients has renewed interest in the identification and development of more selective RET inhibitors lacking the toxicities associated with the current treatments. In an earlier publication [Newton et al, 2016; 1] we reported the discovery of a series of 2-substituted phenol quinazolines as potent and selective RET kinase inhibitors. Here we describe the development of the robust screening cascade which allowed the identification and advancement of this chemical series. Furthermore, we have profiled a panel of RET-active clinical compounds both to validate the cascade and to confirm that none displays a RET-selective target profile. PMID:27429741

  7. The robustness of Hamilton's rule with inbreeding and dominance: kin selection and fixation probabilities under partial sib mating.

    PubMed

    Roze, Denis; Rousset, François

    2004-08-01

    Assessing the validity of Hamilton's rule when there is both inbreeding and dominance remains difficult. In this article, we provide a general method based on the direct fitness formalism to address this question. We then apply it to the question of the evolution of altruism among diploid full sibs and among haplodiploid sisters under inbreeding resulting from partial sib mating. In both cases, we find that the allele coding for altruism always increases in frequency if a condition of the form rb>c holds, where r depends on the rate of sib mating alpha but not on the frequency of the allele, its phenotypic effects, or the dominance of these effects. In both examples, we derive expressions for the probability of fixation of an allele coding for altruism; comparing these expressions with simulation results allows us to test various approximations often made in kin selection models (weak selection, large population size, large fecundity). Increasing alpha increases the probability of fixation of recessive altruistic alleles (h<1/2), while it can increase or decrease the probability of fixation of dominant altruistic alleles (h>1/2). PMID:15278845

  8. A Robust Highly Interpenetrated Metal−Organic Framework Constructed from Pentanuclear Clusters for Selective Sorption of Gas Molecules

    SciTech Connect

    Zhang, Zhangjing; Xiang, Shengchang; Chen, Yu-Sheng; Ma, Shengqian; Lee, Yongwoo; Phely-Bobin, Thomas; Chen, Banglin

    2010-10-22

A three-dimensional microporous metal-organic framework, Zn₅(BTA)₆(TDA)₂·15DMF·8H₂O (1; HBTA = 1,2,3-benzenetriazole; H₂TDA = thiophene-2,5-dicarboxylic acid), comprising pentanuclear [Zn₅] cluster units, was obtained through a one-pot solvothermal reaction of Zn(NO₃)₂, 1,2,3-benzenetriazole, and thiophene-2,5-dicarboxylate. The activated 1 displays type-I N₂ gas sorption behavior with a Langmuir surface area of 607 m² g⁻¹ and exhibits interesting selective gas adsorption for C₂H₂/CH₄ and CO₂/CH₄.

  9. A robust microporous metal-organic framework as a highly selective and sensitive, instantaneous and colorimetric sensor for Eu³⁺ ions.

    PubMed

    Gao, Yanfei; Zhang, Xueqiong; Sun, Wei; Liu, Zhiliang

    2015-01-28

An extremely thermostable magnesium metal-organic framework (Mg-MOF) is reported for use as a highly selective and sensitive, instantaneous and colorimetric sensor for Eu³⁺ ions. There has been extensive interest in the recognition and sensing of ions because of their important roles in biological and environmental systems. However, only a few of these systems have been explored for specific rare earth ion detection. A robust microporous Mg-MOF for the recognition and sensing of Eu³⁺ ions with high selectivity at low concentrations in aqueous solutions has been synthesized. This stable metal-organic framework (MOF) contains nanoscale holes and non-coordinating nitrogen atoms inside the walls of the holes, which makes it a potential host for foreign metal ions. Based on the energy level matching and efficient energy transfer between the host and the guest, the Mg-MOF sensor is both highly selective and sensitive as well as instantaneous; thus, it is a promising approach for the development of luminescent probing materials with unprecedented applications and its use as an Eu³⁺ ion sensor. PMID:25478996

  10. Compact and phase-error-robust multilayered AWG-based wavelength selective switch driven by a single LCOS.

    PubMed

    Sorimoto, Keisuke; Tanizawa, Ken; Uetsuka, Hisato; Kawashima, Hitoshi; Mori, Masahiko; Hasama, Toshifumi; Ishikawa, Hiroshi; Tsuda, Hiroyuki

    2013-07-15

A novel liquid crystal on silicon (LCOS)-based wavelength selective switch (WSS) is proposed, fabricated, and demonstrated. It employs a multilayered arrayed waveguide grating (AWG) as a wavelength multiplexer/demultiplexer. The LCOS deflects spectrally decomposed beams channel by channel and switches them to the desired waveguide layers of the multilayered AWG. In order to obtain the multilayered AWG with high yield, phase errors of the AWG are externally compensated for by an additional phase modulation with the LCOS. This additional phase modulation is applied to the equivalent image of the facet of the AWG, which is projected by a relay lens. In our previously reported WSS configuration, the somewhat large footprint and increased cost were drawbacks, since two LCOSs were required: one LCOS was driven for the inter-port switching operation, and the other for phase-error compensation. In the newly proposed configuration, on the other hand, both the switching and compensation operations are performed using a single LCOS. This reduction of the component count is realized by introducing a folded configuration with a reflector. The volume of the WSS optics is 80 × 100 × 60 mm³, approximately 40% smaller than the previous configuration. The polarization-dependent loss and inter-channel crosstalk are less than 1.5 dB and -21.0 dB, respectively. Error-free transmission of a 40-Gbit/s NRZ-OOK signal through the WSS is successfully demonstrated. PMID:23938561

  12. Influence of Raw Image Preprocessing and Other Selected Processes on Accuracy of Close-Range Photogrammetric Systems According to Vdi 2634

    NASA Astrophysics Data System (ADS)

    Reznicek, J.; Luhmann, T.; Jepping, C.

    2016-06-01

    This paper examines the influence of raw image preprocessing and other selected processes on the accuracy of close-range photogrammetric measurement. The examined processes and features include: raw image preprocessing, sensor unflatness, distance-dependent lens distortion, extending the input observations (image measurements) by incorporating all RGB colour channels, ellipse-centre eccentricity and target detection. The examination of each effect is carried out experimentally by performing the validation procedure proposed in the German VDI guideline 2634/1. The validation procedure is based on performing standard photogrammetric measurements of highly accurate calibrated measuring lines (multi-scale bars) with known lengths (typical uncertainty = 5 μm at 2 sigma). The comparison of the measured lengths with the known values gives the maximum length measurement error LME, which characterizes the accuracy of the validated photogrammetric system. For higher reliability the VDI test field was photographed ten times independently with the same configuration and camera settings. The images were acquired with the metric ALPA 12WA camera. The tests are performed on all ten measurements, which also makes it possible to assess the repeatability of the estimated parameters. The influences are examined by comparing the quality characteristics of the reference and tested settings.
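
    The headline statistic of the VDI/VDE 2634/1 validation, the maximum length measurement error (LME), is simple to compute once the calibrated lengths are known. A minimal sketch (the scale-bar lengths below are hypothetical, not the paper's data):

    ```python
    def length_measurement_error(measured, known):
        """VDI/VDE 2634/1 length measurement error (LME): the deviation of
        largest magnitude between measured and calibrated lengths."""
        return max((m - k for m, k in zip(measured, known)), key=abs)

    # hypothetical scale-bar lengths in mm
    lme = length_measurement_error([200.012, 399.989, 800.031],
                                   [200.000, 400.000, 800.000])
    ```

    The signed value is kept (rather than its absolute value) so that a systematic scale bias remains visible.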

  13. Selective logging in tropical forests decreases the robustness of liana-tree interaction networks to the loss of host tree species.

    PubMed

    Magrach, Ainhoa; Senior, Rebecca A; Rogers, Andrew; Nurdin, Deddy; Benedick, Suzan; Laurance, William F; Santamaria, Luis; Edwards, David P

    2016-03-16

    Selective logging is one of the major drivers of tropical forest degradation, causing important shifts in species composition. Whether such changes modify interactions between species and the networks in which they are embedded remains a fundamental question for assessing the 'health' and ecosystem functionality of logged forests. We focus on interactions between lianas and their tree hosts within primary and selectively logged forests in the biodiversity hotspot of Malaysian Borneo. We found that lianas were more abundant, had higher species richness, and different species compositions in logged than in primary forests. Logged forests showed heavier liana loads disparately affecting slow-growing tree species, which could exacerbate the loss of timber value and carbon storage already associated with logging. Moreover, simulation scenarios of host tree local species loss indicated that logging might decrease the robustness of liana-tree interaction networks if heavily infested trees (i.e. the most connected ones) were more likely to disappear. This effect is partially mitigated in the short term by the colonization of host trees by a greater diversity of liana species within logged forests, yet this might not compensate for the loss of preferred tree hosts in the long term. As a consequence, species interaction networks may show a lagged response to disturbance, which may trigger sudden collapses in species richness and ecosystem function in response to additional disturbances, representing a new type of 'extinction debt'. PMID:26936241
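
    The simulation scenarios described here amount to removing host trees from a bipartite network and counting how many liana species are left with at least one host. A toy sketch of that attack analysis, using an invented four-tree network rather than the Bornean data:

    ```python
    def surviving_lianas(adj, removal_order):
        """Number of liana species still holding at least one host tree
        after each successive tree removal; 'adj' maps each host tree
        to the set of liana species it carries."""
        left = dict(adj)
        counts = []
        for tree in removal_order:
            left.pop(tree)
            counts.append(len(set().union(*left.values())) if left else 0)
        return counts

    # toy liana-tree network: "hub" trees host many liana species
    adj = {"t1": {1, 2, 3, 4}, "t2": {1, 2, 5}, "t3": {5}, "t4": {6}}
    hubs_first = sorted(adj, key=lambda t: len(adj[t]), reverse=True)
    targeted = surviving_lianas(adj, hubs_first)            # most-connected first
    random_like = surviving_lianas(adj, ["t4", "t3", "t2", "t1"])
    ```

    Removing the most-connected hosts first collapses the network fastest, which is exactly the lowered robustness the abstract warns about.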

  15. Identification of GDC-0810 (ARN-810), an Orally Bioavailable Selective Estrogen Receptor Degrader (SERD) that Demonstrates Robust Activity in Tamoxifen-Resistant Breast Cancer Xenografts.

    PubMed

    Lai, Andiliy; Kahraman, Mehmet; Govek, Steven; Nagasawa, Johnny; Bonnefous, Celine; Julien, Jackie; Douglas, Karensa; Sensintaffar, John; Lu, Nhin; Lee, Kyoung-Jin; Aparicio, Anna; Kaufman, Josh; Qian, Jing; Shao, Gang; Prudente, Rene; Moon, Michael J; Joseph, James D; Darimont, Beatrice; Brigham, Daniel; Grillot, Kate; Heyman, Richard; Rix, Peter J; Hager, Jeffrey H; Smith, Nicholas D

    2015-06-25

    Approximately 80% of breast cancers are estrogen receptor alpha (ER-α) positive, and although women typically initially respond well to antihormonal therapies such as tamoxifen and aromatase inhibitors, resistance often emerges. Although a variety of resistance mechanisms may be at play in this setting, there is evidence that in many cases the ER still plays a central role, including mutations in the ER leading to a constitutively active receptor. Fulvestrant is a steroid-based, selective estrogen receptor degrader (SERD) that both antagonizes and degrades ER-α and is active in patients who have progressed on antihormonal agents. However, fulvestrant suffers from poor pharmaceutical properties and must be administered by intramuscular injections that limit the total amount of drug that can be administered and hence lead to the potential for incomplete receptor blockade. We describe the identification and characterization of a series of small-molecule, orally bioavailable SERDs which are potent antagonists and degraders of ER-α and in which the ER-α degrading properties were prospectively optimized. The lead compound 11l (GDC-0810 or ARN-810) demonstrates robust activity in models of tamoxifen-sensitive and tamoxifen-resistant breast cancer, and is currently in clinical trials in women with locally advanced or metastatic estrogen receptor-positive breast cancer.

  16. Robust Regression.

    PubMed

    Huang, Dong; Cabral, Ricardo; De la Torre, Fernando

    2016-02-01

    Discriminative methods (e.g., kernel regression, SVM) have been extensively used to solve problems such as object recognition, image alignment and pose estimation from images. These methods typically map image features (X) to continuous (e.g., pose) or discrete (e.g., object category) values. A major drawback of existing discriminative methods is that samples are directly projected onto a subspace and hence fail to account for outliers common in realistic training sets due to occlusion, specular reflections or noise. It is important to notice that existing discriminative approaches assume the input variables X to be noise free. Thus, discriminative methods experience significant performance degradation when gross outliers are present. Despite its obvious importance, the problem of robust discriminative learning has been relatively unexplored in computer vision. This paper develops the theory of robust regression (RR) and presents an effective convex approach that uses recent advances on rank minimization. The framework applies to a variety of problems in computer vision including robust linear discriminant analysis, regression with missing data, and multi-label classification. Several synthetic and real examples with applications to head pose estimation from images, image and video classification and facial attribute classification with missing data are used to illustrate the benefits of RR. PMID:26761740
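
    The paper's convex rank-minimization formulation is more involved, but the failure mode it targets, gross outliers dominating a least-squares fit, can be illustrated with a classical robust-regression scheme: iteratively reweighted least squares with Huber weights. This is a standard textbook technique, not the authors' RR method, and the data below are synthetic:

    ```python
    import numpy as np

    def huber_irls(X, y, delta=1.345, iters=50):
        """Robust linear regression via iteratively reweighted least
        squares with Huber weights: large residuals are down-weighted
        instead of dominating the fit."""
        n, d = X.shape
        Xb = np.hstack([X, np.ones((n, 1))])          # add intercept column
        w = np.ones(n)
        beta = np.zeros(d + 1)
        for _ in range(iters):
            W = np.diag(w)
            beta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)
            r = y - Xb @ beta
            s = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust scale (MAD)
            a = np.abs(r / s)
            w = np.where(a <= delta, 1.0, delta / a)   # Huber weights
        return beta

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, 100)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 100)
    y[:5] += 50.0                                      # gross outliers
    beta = huber_irls(x.reshape(-1, 1), y)             # recovers slope ~2, intercept ~1
    ```

    An ordinary least-squares fit on the same data would be pulled noticeably toward the five corrupted points; the reweighting suppresses them.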

  17. Blink detection robust to various facial poses.

    PubMed

    Lee, Won Oh; Lee, Eui Chul; Park, Kang Ryoung

    2010-11-30

    Applications based on eye-blink detection have increased, as a result of which it is essential for eye-blink detection to be robust and non-intrusive irrespective of the changes in the user's facial pose. However, most previous studies on camera-based blink detection have the disadvantage that their performances were affected by the facial pose. They also focused on blink detection using only frontal facial images. To overcome these disadvantages, we developed a new method for blink detection, which maintains its accuracy despite changes in the facial pose of the subject. This research is novel in the following four ways. First, the face and eye regions are detected by using both the AdaBoost face detector and a Lucas-Kanade-Tomasi (LKT)-based method, in order to achieve robustness to facial pose. Secondly, the determination of the state of the eye (being open or closed), needed for blink detection, is based on two features: the ratio of height to width of the eye region in a still image, and the cumulative difference of the number of black pixels of the eye region using an adaptive threshold in successive images. These two features are robustly extracted irrespective of the lighting variations by using illumination normalization. Thirdly, the accuracy of determining the eye state - open or closed - is increased by combining the above two features on the basis of the support vector machine (SVM). Finally, the SVM classifier for determining the eye state is adaptively selected according to the facial rotation. Experimental results using various databases showed that the blink detection by the proposed method is robust to various facial poses. PMID:20826183

  18. Interval ridge regression (iRR) as a fast and robust method for quantitative prediction and variable selection applied to edible oil adulteration.

    PubMed

    Jović, Ozren; Smrečki, Neven; Popović, Zora

    2016-04-01

    A novel quantitative prediction and variable selection method called interval ridge regression (iRR) is studied in this work. The method is performed on six data sets of FTIR, two data sets of UV-vis and one data set of DSC. The obtained results show that models built with ridge regression on optimal variables selected with iRR significantly outperform models built with ridge regression on all variables in both calibration (6 out of 9 cases) and validation (2 out of 9 cases). In this study, iRR is also compared with interval partial least squares regression (iPLS). iRR outperformed iPLS in validation (insignificantly in 6 out of 9 cases and significantly in one out of 9 cases for p<0.05). Also, iRR can be a fast alternative to iPLS, especially when the degree of complexity of the analyzed system is unknown, i.e. if the upper limit on the number of latent variables is not easily estimated for iPLS. Adulteration of hempseed (H) oil, a well-known health-beneficial nutrient, is studied in this work by mixing it with cheap and widely used oils such as soybean (So) oil, rapeseed (R) oil and sunflower (Su) oil. Binary mixture sets of hempseed oil with these three oils (HSo, HR and HSu) and a ternary mixture set of H oil, R oil and Su oil (HRSu) were considered. The obtained accuracy indicates that using iRR on FTIR and UV-vis data, each particular oil can be very successfully quantified (in all 8 cases RMSEP<1.2%). This means that FTIR-ATR coupled with iRR can very rapidly and effectively determine the level of adulteration in the adulterated hempseed oil (R(2)>0.99).
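
    The interval idea behind iRR, fitting a regularized model on each contiguous block of variables and keeping the block that validates best, can be sketched with closed-form ridge regression. This is a simplified illustration on synthetic "spectra", not the authors' exact algorithm or their oil data:

    ```python
    import numpy as np

    def ridge_fit(X, y, lam=1.0):
        """Closed-form ridge solution: (X'X + lam*I)^-1 X'y."""
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

    def interval_ridge(X, y, n_intervals=10, lam=1.0):
        """Fit ridge on each contiguous variable block and return
        (rmse, block_index, coefficients) for the block with the lowest
        RMSE on a held-out half of the samples."""
        n, d = X.shape
        edges = np.linspace(0, d, n_intervals + 1).astype(int)
        tr, va = np.arange(0, n, 2), np.arange(1, n, 2)   # simple split
        best = None
        for i in range(n_intervals):
            cols = slice(edges[i], edges[i + 1])
            w = ridge_fit(X[tr, cols], y[tr], lam)
            rmse = float(np.sqrt(np.mean((X[va, cols] @ w - y[va]) ** 2)))
            if best is None or rmse < best[0]:
                best = (rmse, i, w)
        return best

    # synthetic spectra: only variables 20-29 carry signal
    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 100))
    y = 3.0 * X[:, 25] + rng.normal(scale=0.1, size=200)
    rmse, block, w = interval_ridge(X, y)   # selects the informative block
    ```

    A full implementation would cross-validate the penalty and the interval width as well; the point here is only the block-wise search.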

  19. The relative accuracy of standard estimators for macrofaunal abundance and species richness derived from selected intertidal transect designs used to sample exposed sandy beaches

    NASA Astrophysics Data System (ADS)

    Schoeman, D. S.; Wheeler, M.; Wait, M.

    2003-10-01

    In order to ensure that patterns detected in field samples reflect real ecological processes rather than methodological idiosyncrasies, it is important that researchers attempt to understand the consequences of the sampling and analytical designs that they select. This is especially true for sandy beach ecology, which has lagged somewhat behind ecological studies of other intertidal habitats. This paper investigates the performance of routine estimators of macrofaunal abundance and species richness, which are variables that have been widely used to infer predictable patterns of biodiversity across a gradient of beach types. To do this, a total of six shore-normal strip transects were sampled on three exposed, oceanic sandy beaches in the Eastern Cape, South Africa. These transects comprised contiguous quadrats arranged linearly between the spring high and low water marks. Using simple Monte Carlo simulation techniques, data collected from the strip transects were used to assess the accuracy of parameter estimates from different sampling strategies relative to their true values (macrofaunal abundance ranged 595-1369 individuals transect -1; species richness ranged 12-21 species transect -1). Results indicated that estimates from the various transect methods performed in a similar manner both within beaches and among beaches. Estimates for macrofaunal abundance tended to be negatively biased, especially at levels of sampling effort most commonly reported in the literature, and accuracy decreased with decreasing sampling effort. By the same token, estimates for species richness were always negatively biased and were also characterised by low precision. Furthermore, triplicate transects comprising a sampled area in the region of 4 m 2 (as has been previously recommended) are expected to miss more than 30% of the species that occur on the transect. Surprisingly, for both macrofaunal abundance and species richness, estimates based on data from transects sampling quadrats
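
    The negative bias of richness estimates from small numbers of quadrats can be reproduced in miniature with a Monte Carlo subsampling experiment of the kind the authors describe (toy occupancy data below, not the Eastern Cape transects):

    ```python
    import random

    def richness_from_k_quadrats(quadrat_species, k, trials=2000, seed=1):
        """Mean species-richness estimate from k randomly drawn quadrats,
        alongside the true richness of the whole transect."""
        rng = random.Random(seed)
        true_richness = len(set().union(*quadrat_species))
        total = 0
        for _ in range(trials):
            sample = rng.sample(quadrat_species, k)
            total += len(set().union(*sample))
        return total / trials, true_richness

    # hypothetical transect: 30 quadrats, 15 patchily distributed species
    occ = random.Random(0)
    quadrats = [{s for s in range(15) if occ.random() < 0.15} for _ in range(30)]
    mean_est, true_r = richness_from_k_quadrats(quadrats, k=5)
    ```

    Because rare species are unlikely to fall in a small sample of quadrats, the mean estimate sits well below the true transect richness, mirroring the bias reported above.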

  20. Genomic selection and association mapping in rice (Oryza sativa): effect of trait genetic architecture, training population composition, marker number and statistical model on accuracy of rice genomic selection in elite, tropical rice breeding lines.

    PubMed

    Spindel, Jennifer; Begum, Hasina; Akdemir, Deniz; Virk, Parminder; Collard, Bertrand; Redoña, Edilberto; Atlin, Gary; Jannink, Jean-Luc; McCouch, Susan R

    2015-02-01

    Genomic Selection (GS) is a new breeding method in which genome-wide markers are used to predict the breeding value of individuals in a breeding population. GS has been shown to improve breeding efficiency in dairy cattle and several crop plant species, and here we evaluate for the first time its efficacy for breeding inbred lines of rice. We performed a genome-wide association study (GWAS) in conjunction with five-fold GS cross-validation on a population of 363 elite breeding lines from the International Rice Research Institute's (IRRI) irrigated rice breeding program and herein report the GS results. The population was genotyped with 73,147 markers using genotyping-by-sequencing. The training population, statistical method used to build the GS model, number of markers, and trait were varied to determine their effect on prediction accuracy. For all three traits, genomic prediction models outperformed prediction based on pedigree records alone. Prediction accuracies ranged from 0.31 and 0.34 for grain yield and plant height to 0.63 for flowering time. Analyses using subsets of the full marker set suggest that using one marker every 0.2 cM is sufficient for genomic selection in this collection of rice breeding materials. RR-BLUP was the best performing statistical method for grain yield where no large effect QTL were detected by GWAS, while for flowering time, where a single very large effect QTL was detected, the non-GS multiple linear regression method outperformed GS models. For plant height, in which four mid-sized QTL were identified by GWAS, random forest produced the most consistently accurate GS models. Our results suggest that GS, informed by GWAS interpretations of genetic architecture and population structure, could become an effective tool for increasing the efficiency of rice breeding as the costs of genotyping continue to decline. PMID:25689273
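
    RR-BLUP, the best performer for grain yield here, is equivalent to ridge regression of phenotypes on genome-wide markers with a single shrinkage parameter. A self-contained sketch on simulated genotypes (all numbers invented, not the IRRI data):

    ```python
    import numpy as np

    def rr_blup_effects(M, y, lam):
        """Ridge-regression BLUP: every marker effect is shrunk equally,
        via (M'M + lam*I) u = M'(y - mean(y)).
        M: (lines x markers) genotypes coded -1/0/1; y: phenotypes."""
        p = M.shape[1]
        return np.linalg.solve(M.T @ M + lam * np.eye(p), M.T @ (y - y.mean()))

    rng = np.random.default_rng(7)
    M = rng.integers(-1, 2, size=(200, 500)).astype(float)   # -1/0/1 genotypes
    u_true = np.zeros(500)
    u_true[:50] = rng.normal(0.0, 0.3, 50)                   # 50 causal markers
    y = M @ u_true + rng.normal(0.0, 1.0, 200)

    u_hat = rr_blup_effects(M[:150], y[:150], lam=50.0)      # train on 150 lines
    pred = y[:150].mean() + M[150:] @ u_hat                  # predict 50 new lines
    accuracy = float(np.corrcoef(pred, y[150:])[0, 1])       # predictive ability
    ```

    In practice lam is tied to the ratio of residual to marker-effect variance and estimated by REML or cross-validation; a fixed value is used here purely for illustration.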

  1. Genomic Selection and Association Mapping in Rice (Oryza sativa): Effect of Trait Genetic Architecture, Training Population Composition, Marker Number and Statistical Model on Accuracy of Rice Genomic Selection in Elite, Tropical Rice Breeding Lines

    PubMed Central

    Spindel, Jennifer; Begum, Hasina; Akdemir, Deniz; Virk, Parminder; Collard, Bertrand; Redoña, Edilberto; Atlin, Gary; Jannink, Jean-Luc; McCouch, Susan R.

    2015-01-01

    Genomic Selection (GS) is a new breeding method in which genome-wide markers are used to predict the breeding value of individuals in a breeding population. GS has been shown to improve breeding efficiency in dairy cattle and several crop plant species, and here we evaluate for the first time its efficacy for breeding inbred lines of rice. We performed a genome-wide association study (GWAS) in conjunction with five-fold GS cross-validation on a population of 363 elite breeding lines from the International Rice Research Institute's (IRRI) irrigated rice breeding program and herein report the GS results. The population was genotyped with 73,147 markers using genotyping-by-sequencing. The training population, statistical method used to build the GS model, number of markers, and trait were varied to determine their effect on prediction accuracy. For all three traits, genomic prediction models outperformed prediction based on pedigree records alone. Prediction accuracies ranged from 0.31 and 0.34 for grain yield and plant height to 0.63 for flowering time. Analyses using subsets of the full marker set suggest that using one marker every 0.2 cM is sufficient for genomic selection in this collection of rice breeding materials. RR-BLUP was the best performing statistical method for grain yield where no large effect QTL were detected by GWAS, while for flowering time, where a single very large effect QTL was detected, the non-GS multiple linear regression method outperformed GS models. For plant height, in which four mid-sized QTL were identified by GWAS, random forest produced the most consistently accurate GS models. Our results suggest that GS, informed by GWAS interpretations of genetic architecture and population structure, could become an effective tool for increasing the efficiency of rice breeding as the costs of genotyping continue to decline. PMID:25689273

  3. Population genetics of translational robustness.

    PubMed

    Wilke, Claus O; Drummond, D Allan

    2006-05-01

    Recent work has shown that expression level is the main predictor of a gene's evolutionary rate and that more highly expressed genes evolve more slowly. A possible explanation for this observation is selection for proteins that fold properly despite mistranslation, in short selection for translational robustness. Translational robustness leads to the somewhat paradoxical prediction that highly expressed genes are extremely tolerant to missense substitutions but nevertheless evolve very slowly. Here, we study a simple theoretical model of translational robustness that allows us to gain analytic insight into how this paradoxical behavior arises.

  4. The accuracy of selected land use and land cover maps at scales of 1:250,000 and 1:100,000

    USGS Publications Warehouse

    Fitzpatrick-Lins, Katherine

    1980-01-01

    Land use and land cover maps produced by the U.S. Geological Survey are found to meet or exceed the established standard of accuracy. When analyzed using a point sampling technique and binomial probability theory, several maps, illustrative of those produced for different parts of the country, were found to meet or exceed accuracies of 85 percent. Those maps tested were Tampa, Fla., Portland, Me., Charleston, W. Va., and Greeley, Colo., published at a scale of 1:250,000, and Atlanta, Ga., and Seattle and Tacoma, Wash., published at a scale of 1:100,000. For each map, the values were determined by calculating the ratio of the total number of points correctly interpreted to the total number of points sampled. Six of the seven maps tested have accuracies of 85 percent or better at the 95-percent lower confidence limit. When the sample data for predominant categories (those sampled with a significant number of points) were grouped together for all maps, accuracies of those predominant categories met the 85-percent accuracy criterion, with one exception. One category, Residential, had less than 85-percent accuracy at the 95-percent lower confidence limit. Nearly all residential land sampled was mapped correctly, but some areas of other land uses were mapped incorrectly as Residential.
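
    The 85-percent criterion is assessed at the 95-percent lower confidence limit of the observed proportion of correctly interpreted points. Under the normal approximation to the binomial, that limit can be computed as follows (the 92-of-100 count is a made-up example, not one of the tested maps):

    ```python
    import math

    def accuracy_lower_limit(correct, n, z=1.645):
        """One-sided 95% lower confidence limit for a map-accuracy
        proportion, using the normal approximation to the binomial."""
        p = correct / n
        return p - z * math.sqrt(p * (1 - p) / n)

    # e.g. 92 of 100 sampled points interpreted correctly
    lcl = accuracy_lower_limit(92, 100)
    meets_standard = lcl >= 0.85     # the 85-percent accuracy criterion
    ```

    With small samples or proportions near 1, an exact binomial limit would be preferable to the normal approximation used in this sketch.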

  5. The Role of Selected Lexical Factors on Confrontation Naming Accuracy, Speed, and Fluency in Adults Who Do and Do Not Stutter

    ERIC Educational Resources Information Center

    Newman, Rochelle S.; Ratner, Nan Bernstein

    2007-01-01

    Purpose: The purpose of this study was to investigate whether lexical access in adults who stutter (AWS) differs from that in people who do not stutter. Specifically, the authors examined the role of 3 lexical factors on naming speed, accuracy, and fluency: word frequency, neighborhood density, and neighborhood frequency. If stuttering results…

  6. Effect of optical digitizer selection on the application accuracy of a surgical localization system-a quantitative comparison between the OPTOTRAK and flashpoint tracking systems

    NASA Technical Reports Server (NTRS)

    Li, Q.; Zamorano, L.; Jiang, Z.; Gong, J. X.; Pandya, A.; Perez, R.; Diaz, F.

    1999-01-01

    Application accuracy is a crucial factor for stereotactic surgical localization systems, in which space digitization camera systems are one of the most critical components. In this study we compared the effect of the OPTOTRAK 3020 space digitization system and the FlashPoint Model 3000 and 5000 3D digitizer systems on the application accuracy for interactive localization of intracranial lesions. A phantom was mounted with several implantable frameless markers which were randomly distributed on its surface. The target point was digitized and the coordinates were recorded and compared with reference points. The differences from the reference points represented the deviation from the "true point." The root mean square (RMS) was calculated to show the differences, and a paired t-test was used to analyze the results. The results with the phantom showed that, for 1-mm sections of CT scans, the RMS was 0.76 +/- 0.54 mm for the OPTOTRAK system, 1.23 +/- 0.53 mm for the FlashPoint Model 3000 3D digitizer system, and 1.00 +/- 0.42 mm for the FlashPoint Model 5000 system. These preliminary results showed that there is no significant difference between the three tracking systems, and, from the quality point of view, they can all be used for image-guided surgery procedures. Copyright 1999 Wiley-Liss, Inc.
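
    The RMS figures quoted above are root-mean-square Euclidean distances between digitized marker positions and their reference ("true") positions. A minimal sketch with hypothetical coordinates in mm:

    ```python
    import numpy as np

    def rms_error(measured, reference):
        """Root-mean-square Euclidean distance between digitized points
        and their reference positions (same units, e.g. mm)."""
        d = np.linalg.norm(np.asarray(measured) - np.asarray(reference), axis=1)
        return float(np.sqrt(np.mean(d ** 2)))

    ref = [[0, 0, 0], [10, 0, 0], [0, 10, 0]]
    meas = [[0.5, 0, 0], [10, 0.5, 0], [0, 10, 0.5]]
    print(rms_error(meas, ref))  # → 0.5
    ```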

  7. Effect of optical digitizer selection on the application accuracy of a surgical localization system-a quantitative comparison between the OPTOTRAK and flashpoint tracking systems.

    PubMed

    Li, Q; Zamorano, L; Jiang, Z; Gong, J X; Pandya, A; Perez, R; Diaz, F

    1999-01-01

    Application accuracy is a crucial factor for stereotactic surgical localization systems, in which space digitization camera systems are one of the most critical components. In this study we compared the effect of the OPTOTRAK 3020 space digitization system and the FlashPoint Model 3000 and 5000 3D digitizer systems on the application accuracy for interactive localization of intracranial lesions. A phantom was mounted with several implantable frameless markers which were randomly distributed on its surface. The target point was digitized and the coordinates were recorded and compared with reference points. The differences from the reference points represented the deviation from the "true point." The root mean square (RMS) was calculated to show the differences, and a paired t-test was used to analyze the results. The results with the phantom showed that, for 1-mm sections of CT scans, the RMS was 0.76 +/- 0. 54 mm for the OPTOTRAK system, 1.23 +/- 0.53 mm for the FlashPoint Model 3000 3D digitizer system, and 1.00 +/- 0.42 mm for the FlashPoint Model 5000 system. These preliminary results showed that there is no significant difference between the three tracking systems, and, from the quality point of view, they can all be used for image-guided surgery procedures. PMID:10631374

  8. Development and validation of a robust and sensitive assay for the discovery of selective inhibitors for serine/threonine protein phosphatases PP1α (PPP1C) and PP5 (PPP5C).

    PubMed

    Swingle, Mark R; Honkanen, Richard E

    2014-10-01

    Protein phosphatase types 1 α (PP1α/PPP1C) and 5 (PP5/PPP5C) are members of the PPP family of serine/threonine protein phosphatases. PP1 and PP5 share a common catalytic mechanism, and several natural compounds, including okadaic acid, microcystin, and cantharidin, act as strong inhibitors of both enzymes. However, to date there have been no reports of compounds that can selectively inhibit PP1 or PP5, and specific or highly selective inhibitors for either PP1 or PP5 are greatly desired by both the research and pharmaceutical communities. Here we describe the development and optimization of a sensitive and robust (representative PP5C assay data: Z'=0.93; representative PP1Cα assay data: Z'=0.90) fluorescent phosphatase assay that can be used to simultaneously screen chemical libraries and natural product extracts for the presence of catalytic inhibitors of PP1 and PP5. PMID:25383722
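
    The Z' values quoted (0.93 and 0.90) come from the standard assay-quality statistic Z' = 1 - 3(sd_pos + sd_neg)/|mean_pos - mean_neg|. A sketch with invented fluorescence readings, not the assay's actual data:

    ```python
    import statistics as st

    def z_prime(pos, neg):
        """Z'-factor assay-quality metric; values above ~0.5 are
        conventionally taken to indicate an excellent screening assay."""
        sep = abs(st.mean(pos) - st.mean(neg))
        return 1 - 3 * (st.stdev(pos) + st.stdev(neg)) / sep

    # hypothetical readings: uninhibited vs fully inhibited control wells
    zp = z_prime([100, 102, 98, 101, 99], [10, 11, 9, 10, 10])
    ```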

  10. Development and validation of a selective and robust LC-MS/MS method for high-throughput quantifying rizatriptan in small plasma samples: application to a clinical pharmacokinetic study.

    PubMed

    Chen, Yi; Miao, Haijun; Lin, Mei; Fan, Guorong; Hong, Zhanying; Wu, Huiling; Wu, Yutian

    2006-12-01

    An analytical method based on liquid chromatography with positive-ion electrospray ionization coupled to tandem mass spectrometry detection (LC-MS/MS) was developed for the determination of rizatriptan, a potent 5-HT(1B/1D) receptor agonist, in human plasma using granisetron as the internal standard. The analyte and internal standard were isolated from 100 µL plasma samples by liquid-liquid extraction (LLE) and chromatographed on a Lichrospher C18 column (4.6 mm x 50 mm, 5 µm) with a mobile phase consisting of acetonitrile-10 mM aqueous ammonium acetate-acetic acid (50:50:0.5, v/v/v) pumped at 1.0 mL/min. The method had a total chromatographic run time of 2 min. A Varian 1200 L tandem mass spectrometer equipped with an electrospray ionization source was operated in selected reaction monitoring (SRM) mode, with the precursor-to-product ion transitions m/z 270 → 201 (rizatriptan) and m/z 313.4 → 138 (granisetron) used for quantitation. The assay was validated over the concentration range of 0.05-50 ng/mL and was found to have acceptable accuracy, precision, linearity, and selectivity. The mean extraction recovery from spiked plasma samples was above 98%. The intra-day accuracy of the assay was within 12% of nominal and intra-day precision was better than 13% CV. Following a 10 mg dose administered to human subjects, mean plasma concentrations of rizatriptan ranged from 0.2 to 70.6 ng/mL in samples collected up to 24 h after dosing. Inter-day accuracy and precision results for quality control samples run over a 5-day period alongside clinical samples showed mean accuracies within 12% of nominal and precision better than 9.5% CV. PMID:16899417
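The intra-day figures quoted (accuracy within 12% of nominal, precision better than 13% CV) come from standard replicate statistics on quality-control samples; a sketch with hypothetical QC data:

```python
import statistics

def intra_day_stats(measured, nominal):
    """Accuracy as % deviation from the nominal concentration and precision
    as %CV, the usual bioanalytical-validation statistics."""
    mean = statistics.mean(measured)
    accuracy_pct = (mean - nominal) / nominal * 100.0   # % bias from nominal
    cv_pct = statistics.stdev(measured) / mean * 100.0  # coefficient of variation
    return accuracy_pct, cv_pct

# hypothetical low-QC replicates (ng/mL) at a 0.15 ng/mL nominal concentration
qc = [0.152, 0.148, 0.160, 0.143, 0.155]
bias, cv = intra_day_stats(qc, nominal=0.15)
```

Both values for this hypothetical run fall inside the acceptance limits the abstract cites.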

  11. Validation of selected analytical methods using accuracy profiles to assess the impact of a Tobacco Heating System on indoor air quality.

    PubMed

    Mottier, Nicolas; Tharin, Manuel; Cluse, Camille; Crudo, Jean-René; Lueso, María Gómez; Goujon-Ginglinger, Catherine G; Jaquier, Anne; Mitova, Maya I; Rouget, Emmanuel G R; Schaller, Mathieu; Solioz, Jennifer

    2016-09-01

    Studies in environmentally controlled rooms have been used over the years to assess the impact of environmental tobacco smoke on indoor air quality. As new tobacco products are developed, it is important to determine their impact on air quality when used indoors. Before such an assessment can take place it is essential that the analytical methods used to assess indoor air quality are validated and shown to be fit for their intended purpose. Consequently, for this assessment, an environmentally controlled room was built and seven analytical methods, representing eighteen analytes, were validated. The validations were carried out with smoking machines using a matrix-based approach applying the accuracy profile procedure. The performances of the methods were compared for all three matrices under investigation: background air samples, the environmental aerosol of Tobacco Heating System THS 2.2, a heat-not-burn tobacco product developed by Philip Morris International, and the environmental tobacco smoke of a cigarette. The environmental aerosol generated by the THS 2.2 device did not have any appreciable impact on the performances of the methods. The comparison between the background and THS 2.2 environmental aerosol samples generated by smoking machines showed that only five compounds were higher when THS 2.2 was used in the environmentally controlled room. Regarding environmental tobacco smoke from cigarettes, the yields of all analytes were clearly above those obtained with the other two air sample types. PMID:27343591

  13. Constructing better classifier ensemble based on weighted accuracy and diversity measure.

    PubMed

    Zeng, Xiaodong; Wong, Derek F; Chao, Lidia S

    2014-01-01

    A weighted accuracy and diversity (WAD) measure is presented: a novel way to evaluate the quality of a classifier ensemble that assists in the ensemble-selection task. The proposed measure is motivated by a commonly accepted hypothesis: a robust classifier ensemble should not only be accurate but also different from every other member. In fact, accuracy and diversity are mutually constraining factors: an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may have poor accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble on unknown data. The quality assessment of an ensemble computes the harmonic mean of accuracy and diversity as the final score, with two weight parameters used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and to two threshold measures that consider only accuracy or diversity, using two heuristic search algorithms (a genetic algorithm and forward hill-climbing) in ensemble-selection tasks on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to the others in most cases.
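The weighted harmonic-mean combination described above can be sketched as follows (one plausible form; the exact weighting scheme in the paper may differ):

```python
def wad_score(accuracy, diversity, w_acc=0.5, w_div=0.5):
    """Weighted harmonic mean of ensemble accuracy and diversity, both in
    (0, 1]. The weights are assumed to sum to 1; the harmonic mean punishes
    an ensemble that is strong on one axis but weak on the other."""
    return 1.0 / (w_acc / accuracy + w_div / diversity)

balanced = wad_score(0.8, 0.8)    # equally accurate and diverse
lopsided = wad_score(0.95, 0.2)   # very accurate but barely diverse
```

The lopsided ensemble scores well below the balanced one, which is exactly the trade-off behaviour the measure is built to reward.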

  14. An improved robust hand-eye calibration for endoscopy navigation system

    NASA Astrophysics Data System (ADS)

    He, Wei; Kang, Kumsok; Li, Yanfang; Shi, Weili; Miao, Yu; He, Fei; Yan, Fei; Yang, Huamin; Zhang, Huimao; Mori, Kensaku; Jiang, Zhengang

    2016-03-01

    Endoscopy is widely used in clinical practice, and surgical navigation is an extremely important way to enhance its safety. The key to improving the accuracy of a navigation system is to solve the positional relationship between the camera and the tracking marker precisely. The problem can be solved by hand-eye calibration based on dual quaternions. However, because of tracking error and the limited motion of the endoscope, the sample motions may contain incomplete motion samples, which make the algorithm unstable and inaccurate. An advanced selection rule for sample motions is proposed in this paper to improve the stability and accuracy of dual-quaternion-based methods. By setting a motion filter to reject the incomplete motion samples, a precise and robust result is finally achieved. The experimental results show that the accuracy and stability of camera registration are effectively improved by selecting sample motion data automatically.
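One simple instance of such a motion filter (a hypothetical rule in the spirit of the paper, not its exact criterion) rejects motion samples whose rotation is too small to constrain the dual-quaternion solution:

```python
import math

def filter_motion_samples(motions, min_angle_deg=15.0):
    """Keep only motion samples with enough rotation to be informative.
    Each motion is a unit quaternion (w, x, y, z); the rotation angle is
    2*acos(|w|). The 15-degree threshold is a hypothetical tuning parameter."""
    kept = []
    for q in motions:
        angle = math.degrees(2.0 * math.acos(min(1.0, abs(q[0]))))
        if angle >= min_angle_deg:
            kept.append(q)
    return kept

motions = [
    (1.0, 0.0, 0.0, 0.0),  # pure translation, no rotation: rejected
    (math.cos(math.radians(30)), math.sin(math.radians(30)), 0.0, 0.0),  # 60 deg
]
kept = filter_motion_samples(motions)
```

Near-degenerate motions, which dominate when an endoscope can only move slightly, are the ones such a filter screens out before calibration.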

  15. Fast robust correlation.

    PubMed

    Fitch, Alistair J; Kadyrov, Alexander; Christmas, William J; Kittler, Josef

    2005-08-01

    A new, fast, statistically robust, exhaustive, translational image-matching technique is presented: fast robust correlation. Existing methods are either slow or non-robust, or rely on optimization. Fast robust correlation works by expressing a robust matching surface as a series of correlations. Speed is obtained by computing correlations in the frequency domain. Computational cost is analyzed and the method is shown to be fast. Speed is comparable to conventional correlation and, for large images, thousands of times faster than direct robust matching. Three experiments demonstrate the advantage of the technique over standard correlation.
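The building block the paper expands robust matching surfaces into is ordinary correlation evaluated in the frequency domain, which is O(n log n) rather than O(n^2). A minimal 1-D sketch of that building block (the robust-kernel series expansion itself is not shown):

```python
import numpy as np

def fft_correlation(image, template):
    """Circular cross-correlation of two equal-length signals via the FFT.
    The peak index gives the displacement of `image` relative to `template`."""
    F = np.fft.fft(image)
    G = np.fft.fft(template)
    return np.real(np.fft.ifft(F * np.conj(G)))

signal = np.zeros(64)
signal[20:25] = 1.0
shifted = np.roll(signal, 7)              # the same pattern displaced by 7 samples
corr = fft_correlation(shifted, signal)
best_shift = int(np.argmax(corr))         # recovered displacement
```

Replacing the quadratic error inside the match score with a robust kernel, then expanding that kernel as a series of such correlations, is what gives the method its combination of speed and robustness.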

  16. Robust design of dynamic observers

    NASA Technical Reports Server (NTRS)

    Bhattacharyya, S. P.

    1974-01-01

    The two (identity) observer realizations ż = Mz + Ky and ż = Âz + K̂(y − Ĉz), respectively called the open-loop and closed-loop realizations, for the linear system ẋ = Ax, y = Cx are analyzed with respect to the requirement of robustness; i.e., the requirement that the observer continue to regulate the error x − z satisfactorily despite small variations in the observer parameters from the projected design values. The results show that the open-loop realization is never robust, that robustness requires a closed-loop implementation, and that the closed-loop realization is robust with respect to small perturbations in the gains K̂ if and only if the observer can be built to contain an exact replica of the unstable and underdamped dynamics of the system being observed. These results clarify the stringent accuracy requirements on both models and hardware that must be met before an observer can be considered for use in a control system.
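The robustness gap between the two realizations can be illustrated with a scalar discrete-time analogue (a hypothetical toy system, not the paper's continuous-time analysis): the open-loop observer drifts when its dynamics parameter is slightly mis-implemented, while the closed-loop form tolerates the same-size error in its gain.

```python
# Scalar discrete-time analogue of the two observer realizations.
# Plant: x' = A x (unstable), measurement y = C x.
A, C = 1.1, 1.0
K = 0.6                      # chosen so M = A - K*C = 0.5 is stable
delta = 0.01                 # small implementation error in observer parameters

x, z_open, z_closed = 1.0, 0.0, 0.0
M_perturbed = (A - K * C) + delta   # open-loop realization  z' = M z + K y
K_perturbed = K + delta             # closed-loop realization z' = A z + K (y - C z)
for _ in range(60):
    y = C * x
    z_open = M_perturbed * z_open + K * y
    z_closed = A * z_closed + K_perturbed * (y - C * z_closed)
    x = A * x

err_open = abs(x - z_open)      # driven by the unstable state, grows
err_closed = abs(x - z_closed)  # contraction factor A - K_perturbed*C = 0.49
```

The closed-loop observer carries an exact replica of the unstable plant dynamics (the `A * z_closed` term), which is precisely the structural feature the paper identifies as necessary for robustness.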

  17. The wisdom of select crowds.

    PubMed

    Mannes, Albert E; Soll, Jack B; Larrick, Richard P

    2014-08-01

    Social psychologists have long recognized the power of statisticized groups. When individual judgments about some fact (e.g., the unemployment rate for next quarter) are averaged together, the average opinion is typically more accurate than most of the individual estimates, a pattern often referred to as the wisdom of crowds. The accuracy of averaging also often exceeds that of the individual perceived as most knowledgeable in the group. However, neither averaging nor relying on a single judge is a robust strategy; each performs well in some settings and poorly in others. As an alternative, we introduce the select-crowd strategy, which ranks judges based on a cue to ability (e.g., the accuracy of several recent judgments) and averages the opinions of the top judges, such as the top 5. Through both simulation and an analysis of 90 archival data sets, we show that select crowds of 5 knowledgeable judges yield very accurate judgments across a wide range of possible settings: the strategy is both accurate and robust. Following this, we examine how people prefer to use information from a crowd. Previous research suggests that people are distrustful of crowds and of mechanical processes such as averaging. We show in 3 experiments that, as expected, people are drawn to experts and dislike crowd averages, but, critically, they view the select-crowd strategy favorably and are willing to use it. The select-crowd strategy is thus accurate, robust, and appealing as a mechanism for helping individuals tap collective wisdom.
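The select-crowd strategy is simple to state in code. A sketch with hypothetical judges, where the ability cue is each judge's mean error on recent items (lower is better):

```python
def select_crowd(estimates, past_error, k=5):
    """Average the estimates of the k judges with the best track record,
    per the select-crowd strategy: rank on a cue to ability, then average
    the top k opinions."""
    ranked = sorted(estimates, key=lambda judge: past_error[judge])
    top = ranked[:k]
    return sum(estimates[j] for j in top) / len(top)

# hypothetical judges: current estimate + mean error on several recent judgments
estimates = {'a': 4.1, 'b': 3.9, 'c': 4.0, 'd': 9.0, 'e': 4.2, 'f': 0.5, 'g': 4.05}
past_err  = {'a': 0.2, 'b': 0.3, 'c': 0.1, 'd': 2.5, 'e': 0.4, 'f': 3.0, 'g': 0.25}
crowd_of_5 = select_crowd(estimates, past_err, k=5)
```

The two historically unreliable judges ('d' and 'f') are excluded, so the select-crowd average sits closer to the cluster of credible estimates than the whole-crowd mean would.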

  18. MAGE-C2-Specific TCRs Combined with Epigenetic Drug-Enhanced Antigenicity Yield Robust and Tumor-Selective T Cell Responses.

    PubMed

    Kunert, Andre; van Brakel, Mandy; van Steenbergen-Langeveld, Sabine; da Silva, Marvin; Coulie, Pierre G; Lamers, Cor; Sleijfer, Stefan; Debets, Reno

    2016-09-15

    Adoptive T cell therapy has shown significant clinical success for patients with advanced melanoma and other tumors. Further development of T cell therapy requires improved strategies to select effective, yet nonself-reactive, TCRs. In this study, we isolated 10 TCR sequences against four MAGE-C2 (MC2) epitopes from melanoma patients who showed clinical responses following vaccination that were accompanied by significant frequencies of anti-MC2 CD8 T cells in blood and tumor without apparent side effects. We introduced these TCRs into T cells, pretreated tumor cells of different histological origins with the epigenetic drugs azacytidine and valproate, and tested tumor and self-reactivities of these TCRs. Pretreatment of tumor cells upregulated MC2 gene expression and enhanced recognition by T cells. In contrast, a panel of normal cell types did not express MC2 mRNA, and similar pretreatment did not result in recognition by MC2-directed T cells. Interestingly, the expression levels of MC2, but not those of CD80, CD86, or programmed death-ligand 1 or 2, correlated with T cell responsiveness. One of the tested TCRs consistently recognized pretreated MC2(+) cell lines from melanoma, head and neck, bladder, and triple-negative breast cancers but showed no response to MHC-eluted peptides or peptides highly similar to MC2. We conclude that targeting MC2 Ag, combined with epigenetic drug-enhanced antigenicity, allows for significant and tumor-selective T cell responses. PMID:27489285

  19. Assessing the impact of end-member selection on the accuracy of satellite-based spatial variability models for actual evapotranspiration estimation

    NASA Astrophysics Data System (ADS)

    Long, Di; Singh, Vijay P.

    2013-05-01

    This study examines the impact of end-member (i.e., hot and cold extremes) selection on the performance and mechanisms of error propagation in satellite-based spatial variability models for estimating actual evapotranspiration, using the triangle, surface energy balance algorithm for land (SEBAL), and mapping evapotranspiration with high resolution and internalized calibration (METRIC) models. These models were applied to the soil moisture-atmosphere coupling experiment site in central Iowa on two Landsat Thematic Mapper/Enhanced Thematic Mapper Plus acquisition dates in 2002. Evaporative fraction (EF, defined as the ratio of latent heat flux to available energy) estimates from the three models at field and watershed scales were examined using varying end-members. Results show that the end-members fundamentally determine the magnitudes of EF retrievals at both field and watershed scales. The hot and cold extremes exert a similar impact on the discrepancy between the EF estimates and the ground-based measurements, i.e., given a hot (cold) extreme, the EF estimates tend to increase with increasing temperature of cold (hot) extreme, and decrease with decreasing temperature of cold (hot) extreme. The coefficient of determination between the EF estimates and the ground-based measurements depends principally on the capability of remotely sensed surface temperature (Ts) to capture EF (i.e., depending on the correlation between Ts and EF measurements), being slightly influenced by the end-members. Varying the end-members does not substantially affect the standard deviation and skewness of the EF frequency distributions from the same model at the watershed scale. However, different models generate markedly different EF frequency distributions due to differing model physics, especially the limiting edges of EF defined in the remotely sensed vegetation fraction (fc) and Ts space. In general, the end-members cannot be properly determined because (1) they do not
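The end-member scaling these models share can be sketched in its simplest, first-order form (a hypothetical simplification: real SEBAL/METRIC retrievals invert the full surface energy balance rather than scaling EF linearly):

```python
def evaporative_fraction(ts, t_hot, t_cold):
    """First-order end-member scaling of evaporative fraction: the hot
    extreme (no evaporation) maps to EF = 0 and the cold extreme (maximal
    evaporation) to EF = 1, with a pixel's surface temperature Ts (all in
    kelvin) interpolated linearly between them."""
    return (t_hot - ts) / (t_hot - t_cold)

ef_a = evaporative_fraction(ts=305.0, t_hot=320.0, t_cold=295.0)
# raising the cold-extreme temperature raises the retrieved EF for the
# same pixel, mirroring the end-member sensitivity the study reports
ef_b = evaporative_fraction(ts=305.0, t_hot=320.0, t_cold=300.0)
```

Even this toy form shows why end-member choice "fundamentally determines the magnitudes of EF retrievals": the extremes set both the offset and the slope of the mapping from Ts to EF.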

  20. Commensurate CO2 Capture, and Shape Selectivity for HCCH over H2CCH2, in Zigzag Channels of a Robust Cu(I)(CN)(L) Metal-Organic Framework.

    PubMed

    Miller, Reece G; Southon, Peter D; Kepert, Cameron J; Brooker, Sally

    2016-06-20

    A novel copper(I) metal-organic framework (MOF), {[Cu(I)2(py-pzpypz)2(μ-CN)2]·MeCN}n (1·MeCN), with an unusual topology is shown to be robust, retaining crystallinity during desolvation to give 1, which has also been structurally characterized [py-pzpypz is 4-(4-pyridyl)-2,5-dipyrazylpyridine]. Zigzag-shaped channels, which in 1·MeCN were occupied by disordered MeCN molecules, run along the c axis of 1, resulting in a significant solvent-accessible void space (9.6% of the unit cell volume). These tight zigzags, bordered by (Cu(I)CN)n chains, make 1 an ideal candidate for investigations into shape-based selectivity. MOF 1 shows a moderate enthalpy of adsorption for binding CO2 (-32 kJ mol(-1) at moderate loadings), which results in a good selectivity for CO2 over N2 of 4.8:1 under real-world operating conditions of a 15:85 CO2/N2 mixture at 1 bar. Furthermore, 1 was investigated for shape-based selectivity of small hydrocarbons, revealing preferential uptake of linear acetylene gas over ethylene and methane, partially due to kinetic trapping of the guests with larger kinetic diameters. PMID:27258550
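A selectivity figure like the 4.8:1 quoted above follows from the standard uptake-ratio definition. A sketch with hypothetical equilibrium uptakes chosen to reproduce that number:

```python
def adsorption_selectivity(q1, q2, y1, y2):
    """Adsorption selectivity S = (q1/q2) / (y1/y2): the ratio of the
    components' equilibrium uptakes q, normalized by their gas-phase mole
    fractions y. This is the standard definition; the uptake values below
    are hypothetical, not the paper's isotherm data."""
    return (q1 / q2) / (y1 / y2)

# 15:85 CO2/N2 mixture at 1 bar; uptakes picked to give S close to 4.8
s = adsorption_selectivity(q1=0.847, q2=1.0, y1=0.15, y2=0.85)
```

Normalizing by mole fraction is what makes the figure meaningful under the lopsided 15:85 flue-gas-like composition.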

  1. Robust expression and secretion of Xylanase1 in Chlamydomonas reinhardtii by fusion to a selection gene and processing with the FMDV 2A peptide.

    PubMed

    Rasala, Beth A; Lee, Philip A; Shen, Zhouxin; Briggs, Steven P; Mendez, Michael; Mayfield, Stephen P

    2012-01-01

    Microalgae have recently received attention as a potential low-cost host for the production of recombinant proteins and novel metabolites. However, a major obstacle to the development of algae as an industrial platform has been the poor expression of heterologous genes from the nuclear genome. Here we describe a nuclear expression strategy using the foot-and-mouth-disease-virus 2A self-cleavage peptide to transcriptionally fuse heterologous gene expression to antibiotic resistance in Chlamydomonas reinhardtii. We demonstrate that strains transformed with ble-2A-GFP are zeocin-resistant and accumulate high levels of GFP that is properly 'cleaved' at the FMDV 2A peptide, resulting in monomeric, cytosolic GFP that is easily detectable by in-gel fluorescence analysis or fluorescence microscopy. Furthermore, we used our ble2A nuclear expression vector to engineer the heterologous expression of the industrial enzyme xylanase. We demonstrate that linking xyn1 expression to ble2A expression on the same open reading frame led to a dramatic (~100-fold) increase in xylanase activity in cell lysates compared to the unlinked construct. Finally, by inserting an endogenous secretion signal between the ble2A and xyn1 coding regions, we were able to target monomeric xylanase for secretion. The novel microalgae nuclear expression strategy described here enables the selection of transgenic lines that efficiently express the heterologous gene of interest and should prove valuable for basic research as well as algal biotechnology. PMID:22937037

  2. Robustness: confronting lessons from physics and biology.

    PubMed

    Lesne, Annick

    2008-11-01

    The term robustness is encountered in very different scientific fields, from engineering and control theory to dynamical systems to biology. The main question addressed herein is whether the notion of robustness and its correlates (stability, resilience, self-organisation) developed in physics are relevant to biology, or whether specific extensions and novel frameworks are required to account for the robustness properties of living systems. To clarify this issue, the different meanings covered by this unique term are discussed; it is argued that they crucially depend on the kind of perturbations that a robust system should by definition withstand. Possible mechanisms underlying robust behaviours are examined, either encountered in all natural systems (symmetries, conservation laws, dynamic stability) or specific to biological systems (feedbacks and regulatory networks). Special attention is devoted to the (sometimes counterintuitive) interrelations between robustness and noise. A distinction between dynamic selection and natural selection in the establishment of a robust behaviour is underlined. It is finally argued that nested notions of robustness, relevant to different time scales and different levels of organisation, allow one to reconcile the seemingly contradictory requirements for robustness and adaptability in living systems. PMID:18823391

  3. An ant-plant by-product mutualism is robust to selective logging of rain forest and conversion to oil palm plantation.

    PubMed

    Fayle, Tom M; Edwards, David P; Foster, William A; Yusah, Kalsum M; Turner, Edgar C

    2015-06-01

    Anthropogenic disturbance and the spread of non-native species disrupt natural communities, but also create novel interactions between species. By-product mutualisms, in which benefits accrue as side effects of partner behaviour or morphology, are often non-specific and hence may persist in novel ecosystems. We tested this hypothesis for a two-way by-product mutualism between epiphytic ferns and their ant inhabitants in the Bornean rain forest, in which ants gain housing in root-masses while ferns gain protection from herbivores. Specifically, we assessed how the specificity (overlap between fern and ground-dwelling ants) and the benefits of this interaction are altered by selective logging and conversion to an oil palm plantation habitat. We found that despite the high turnover of ant species, ant protection against herbivores persisted in modified habitats. However, in ferns growing in the oil palm plantation, ant occupancy, abundance and species richness declined, potentially due to the harsher microclimate. The specificity of the fern-ant interactions was also lower in the oil palm plantation habitat than in the forest habitats. We found no correlations between colony size and fern size in modified habitats, and hence no evidence for partner fidelity feedbacks, in which ants are incentivised to protect fern hosts. Per species, non-native ant species in the oil palm plantation habitat (18 % of occurrences) were as important as native ones in terms of fern protection and contributed to an increase in ant abundance and species richness with fern size. We conclude that this by-product mutualism persists in logged forest and oil palm plantation habitats, with no detectable shift in partner benefits. Such persistence of generalist interactions in novel ecosystems may be important for driving ecosystem functioning. PMID:25575674

  5. Knowledge discovery by accuracy maximization.

    PubMed

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-04-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold's topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan's presidency and not from its beginning.

  6. Knowledge discovery by accuracy maximization

    PubMed Central

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-01-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821

  7. Genomic selection & association mapping in rice: effect of trait genetic architecture, training population composition, marker number & statistical model on accuracy of rice genomic selection in elite, tropical rice breeding

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic Selection (GS) is a new breeding method in which genome-wide markers are used to predict the breeding value of individuals in a breeding population. GS has been shown to improve breeding efficiency in dairy cattle and several crop plant species, and here we evaluate for the first time its ef...

  8. Relative Accuracy Evaluation

    PubMed Central

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. Because the accuracy of a whole data set may be low while that of a useful part is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a measure nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which reflect the results' relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of the proposed framework and algorithms. PMID:25133752
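Scoring a query result by precision and recall against verified reference data, as the framework does for basic queries, can be sketched as follows (an illustrative simplification; the paper's statistical metric is more elaborate):

```python
def relative_accuracy(query_result, ground_truth):
    """Precision and recall of a query result against a verified reference:
    precision = fraction of returned tuples that are correct,
    recall = fraction of correct tuples that were returned."""
    result, truth = set(query_result), set(ground_truth)
    hits = len(result & truth)
    precision = hits / len(result) if result else 0.0
    recall = hits / len(truth) if truth else 0.0
    return precision, recall

# hypothetical query over a dirty table vs. its verified reference copy
p, r = relative_accuracy(query_result={1, 2, 3, 4}, ground_truth={2, 3, 4, 5, 6})
```

This captures the motivating observation: a query touching only the clean part of a low-accuracy data set can still score highly.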

  9. Performance and Accuracy of LAPACK's Symmetric TridiagonalEigensolvers

    SciTech Connect

    Demmel, Jim W.; Marques, Osni A.; Parlett, Beresford N.; Vomel,Christof

    2007-04-19

    We compare four algorithms from the latest LAPACK 3.1 release for computing eigenpairs of a symmetric tridiagonal matrix. These include QR iteration, bisection and inverse iteration (BI), the Divide-and-Conquer method (DC), and the method of Multiple Relatively Robust Representations (MR). Our evaluation considers speed and accuracy when computing all eigenpairs, and additionally subset computations. Using a variety of carefully selected test problems, our study spans a range of today's computer architectures. Our conclusions can be summarized as follows. (1) DC and MR are generally much faster than QR and BI on large matrices. (2) MR almost always does the fewest floating point operations, but at a lower MFlop rate than all the other algorithms. (3) The exact performance of MR and DC strongly depends on the matrix at hand. (4) DC and QR are the most accurate algorithms, with observed accuracy O(√n·ε). The accuracy of BI and MR is generally O(n·ε). (5) MR is preferable to BI for subset computations.
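Accuracy claims of this kind are residual- and orthogonality-based. A sketch of how such a check is performed, using numpy's dense `eigh` as a stand-in for the LAPACK tridiagonal drivers under comparison:

```python
import numpy as np

# Build a random symmetric tridiagonal matrix from its diagonal d and
# off-diagonal e, solve the eigenproblem, and measure the two standard
# accuracy quantities: the scaled residual ||T V - V diag(w)|| / ||T||
# and the loss of orthogonality ||V^T V - I||.
n = 50
rng = np.random.default_rng(0)
d, e = rng.standard_normal(n), rng.standard_normal(n - 1)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

w, V = np.linalg.eigh(T)                       # columns of V are eigenvectors
residual = np.linalg.norm(T @ V - V * w) / np.linalg.norm(T)
orthogonality = np.linalg.norm(V.T @ V - np.eye(n))
```

Both quantities should come out as small multiples of machine epsilon times a modest function of n, which is exactly the O(√n·ε) versus O(n·ε) distinction the study draws between the drivers.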

  10. Mechanisms for Robust Cognition

    ERIC Educational Resources Information Center

    Walsh, Matthew M.; Gluck, Kevin A.

    2015-01-01

    To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within…

  11. Stereotype Accuracy: Toward Appreciating Group Differences.

    ERIC Educational Resources Information Center

    Lee, Yueh-Ting, Ed.; And Others

    The preponderance of scholarly theory and research on stereotypes assumes that they are bad and inaccurate, but understanding stereotype accuracy and inaccuracy is more interesting and complicated than simpleminded accusations of racism or sexism would seem to imply. The selections in this collection explore issues of the accuracy of stereotypes…

  12. Contemporary flow meters: an assessment of their accuracy and reliability.

    PubMed

    Christmas, T J; Chapple, C R; Rickards, D; Milroy, E J; Turner-Warwick, R T

    1989-05-01

    The accuracy, reliability and cost effectiveness of 5 currently marketed flow meters have been assessed. The mechanics of each meter are briefly described in relation to its accuracy and robustness. The merits and faults of the meters are discussed, and the important features of flow measurements that need to be taken into account when making diagnostic interpretations are emphasised.

  13. Robust facial expression recognition algorithm based on local metric learning

    NASA Astrophysics Data System (ADS)

    Jiang, Bin; Jia, Kebin

    2016-01-01

    In facial expression recognition tasks, different facial expressions are often confused with each other. Motivated by the fact that a learned metric can significantly improve the accuracy of classification, a facial expression recognition algorithm based on local metric learning is proposed. First, k-nearest neighbors of the given testing sample are determined from the total training data. Second, chunklets are selected from the k-nearest neighbors. Finally, the optimal transformation matrix is computed by maximizing the total variance between different chunklets and minimizing the total variance of instances in the same chunklet. The proposed algorithm can find the suitable distance metric for every testing sample and improve the performance on facial expression recognition. Furthermore, the proposed algorithm can be used for vector-based and matrix-based facial expression recognition. Experimental results demonstrate that the proposed algorithm could achieve higher recognition rates and be more robust than baseline algorithms on the JAFFE, CK, and RaFD databases.
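    The chunklet step can be illustrated with a simple stand-in: an RCA-style whitening transform that shrinks within-chunklet variance, one common way to realize the "minimize the total variance of instances in the same chunklet" objective. This is not the authors' exact optimization; the toy data and regularization constant are assumptions for illustration:

```python
import numpy as np

def rca_transform(chunklets, reg=1e-6):
    """Whitening transform from chunklets (groups of same-class samples):
    shrinks within-chunklet variance, in the spirit of Relevant Component Analysis."""
    centered = np.vstack([c - c.mean(axis=0) for c in chunklets])
    n, dim = centered.shape
    C = centered.T @ centered / n + reg * np.eye(dim)  # within-chunklet covariance
    vals, vecs = np.linalg.eigh(C)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T       # C^(-1/2)

rng = np.random.default_rng(1)
# Two toy chunklets sharing an elongated within-class covariance
base = rng.standard_normal((40, 2)) * np.array([3.0, 0.3])
chunk_a = base[:20] + np.array([0.0, 2.0])
chunk_b = base[20:] - np.array([0.0, 2.0])

W = rca_transform([chunk_a, chunk_b])
# After the transform, within-chunklet variance is roughly isotropic
ta = (chunk_a - chunk_a.mean(axis=0)) @ W
print(np.var(ta, axis=0))
```

    Distances measured after such a transform de-emphasize directions in which same-expression samples vary, which is the effect the learned metric exploits.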

  14. The beta-arrestin pathway-selective type 1A angiotensin receptor (AT1A) agonist [Sar1,Ile4,Ile8]angiotensin II regulates a robust G protein-independent signaling network.

    PubMed

    Kendall, Ryan T; Strungs, Erik G; Rachidi, Saleh M; Lee, Mi-Hye; El-Shewy, Hesham M; Luttrell, Deirdre K; Janech, Michael G; Luttrell, Louis M

    2011-06-01

    The angiotensin II peptide analog [Sar(1),Ile(4),Ile(8)]AngII (SII) is a biased AT(1A) receptor agonist that stimulates receptor phosphorylation, β-arrestin recruitment, receptor internalization, and β-arrestin-dependent ERK1/2 activation without activating heterotrimeric G-proteins. To determine the scope of G-protein-independent AT(1A) receptor signaling, we performed a gel-based phosphoproteomic analysis of AngII and SII-induced signaling in HEK cells stably expressing AT(1A) receptors. A total of 34 differentially phosphorylated proteins were detected, of which 16 were unique to SII and eight to AngII stimulation. MALDI-TOF/TOF mass fingerprinting was employed to identify 24 SII-sensitive phosphoprotein spots, of which three (two peptide inhibitors of protein phosphatase 2A (I1PP2A and I2PP2A) and prostaglandin E synthase 3 (PGES3)) were selected for validation and further study. We found that phosphorylation of I2PP2A was associated with rapid and transient inhibition of a β-arrestin 2-associated pool of protein phosphatase 2A, leading to activation of Akt and increased phosphorylation of glycogen synthase kinase 3β in an arrestin signalsome complex. SII-stimulated PGES3 phosphorylation coincided with an increase in β-arrestin 1-associated PGES3 and an arrestin-dependent increase in cyclooxygenase 1-dependent prostaglandin E(2) synthesis. These findings suggest that AT(1A) receptors regulate a robust G protein-independent signaling network that affects protein phosphorylation and autocrine/paracrine prostaglandin production and that these pathways can be selectively modulated by biased ligands that antagonize G protein activation.

  15. Robust Methods in Qsar

    NASA Astrophysics Data System (ADS)

    Walczak, Beata; Daszykowski, Michał; Stanimirova, Ivana

    Considerable progress in the development of robust methods, as an efficient tool for processing data contaminated with outlying objects, has been made over recent years. Outliers in QSAR studies are usually the result of an improper calculation of some molecular descriptors and/or experimental error in determining the property to be modelled. They greatly influence any least-squares model, and therefore the conclusions about the biological activity of a potential component based on such a model are misleading. With the use of robust approaches, one can solve this problem by building a robust model that describes the data majority well. On the other hand, the proper identification of outliers may pinpoint a new direction of drug development. Such an assessment of outliers can only be done with robust methods, and these methods are described in this chapter.
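    The contrast between a least-squares fit and a robust alternative is easy to demonstrate. The sketch below fits a line by ordinary least squares and by a Huber-type M-estimator via iteratively reweighted least squares; the data, tuning constant, and iteration count are illustrative choices, not taken from the chapter:

```python
import numpy as np

def huber_fit(X, y, delta=1.345, iters=50):
    """Line fit by a Huber M-estimator via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # start from OLS
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust scale (MAD)
        w = np.minimum(1.0, delta * s / (np.abs(r) + 1e-12))      # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.2, 50)
y[:5] += 30.0          # gross outliers, e.g. mis-computed descriptors
X = np.column_stack([x, np.ones_like(x)])

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_rob = huber_fit(X, y)
print(beta_ols[0], beta_rob[0])
```

    The least-squares slope is dragged away from the true value of 2 by a handful of contaminated points, while the M-estimator downweights them and recovers the data majority, which is exactly the behaviour the chapter advocates.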

  16. Liquid chromatography-high resolution/ high accuracy (tandem) mass spectrometry-based identification of in vivo generated metabolites of the selective androgen receptor modulator ACP-105 for doping control purposes.

    PubMed

    Thevis, Mario; Thomas, Andreas; Piper, Thomas; Krug, Oliver; Delahaut, Philippe; Schänzer, Wilhelm

    2014-01-01

    Selective androgen receptor modulators (SARMs) represent an emerging class of therapeutics which have been prohibited in sport as anabolic agents according to the regulations of the World Anti-Doping Agency (WADA) since 2008. Within the past three years, numerous adverse analytical findings with SARMs in routine doping control samples have been reported despite the missing clinical approval of these substances. Hence, preventive doping research concerning the metabolism and elimination of new therapeutic entities of the class of SARMs is vital for efficient and timely sports drug testing programs, as banned compounds are most efficiently screened when viable targets (for example, characteristic metabolites) are identified. In the present study, the metabolism of ACP-105, a novel SARM drug candidate, was studied in vivo in rats. Following oral administration, urine samples were collected over a period of seven days and analyzed for metabolic products by liquid chromatography-high resolution/high accuracy (tandem) mass spectrometry. Samples were subjected to enzymatic hydrolysis prior to liquid-liquid extraction, and a total of seven major phase-I metabolites were detected, three of which were attributed to monohydroxylated and four to bishydroxylated ACP-105. The hydroxylation sites were assigned by means of diagnostic product ions and respective dissociation pathways of the analytes following positive or negative ionization and collisional activation, as well as selective chemical derivatization. The identified metabolites were used as target compounds to investigate their traceability in rat elimination urine samples, and monohydroxylated and bishydroxylated species were detectable for up to four and six days post-administration, respectively.

  17. Robust control of accelerators

    SciTech Connect

    Johnson, W.J.D.; Abdallah, C.T.

    1990-01-01

    The problem of controlling the variations in the rf power system can be effectively cast as an application of modern control theory. Two components of this theory are obtaining a model and a feedback structure. The model inaccuracies influence the choice of a particular controller structure. Because of the modeling uncertainty, one has to design either a variable, adaptive controller or a fixed, robust controller to achieve the desired objective. The adaptive control scheme usually results in very complex hardware and, therefore, is not pursued in this research. In contrast, the robust control method leads to simpler hardware. However, robust control requires a more accurate mathematical model of the physical process than is required by adaptive control. Our research at the Los Alamos National Laboratory (LANL) and the University of New Mexico (UNM) has led to the development and implementation of a new robust rf power feedback system. In this paper, we report on our research progress. In section one, the robust control problem for the rf power system and the philosophy adopted for the beginning phase of our research are presented. In section two, the results of our proof-of-principle experiments are presented. In section three, we describe the actual controller configuration that is used in LANL FEL physics experiments. The novelty of our approach is that the control hardware is implemented directly in rf without demodulating, compensating, and then remodulating.

  18. Classification accuracy improvement

    NASA Technical Reports Server (NTRS)

    Kistler, R.; Kriegler, F. J.

    1977-01-01

    Improvements made in the processing system designed for MIDAS (prototype multivariate interactive digital analysis system) effect higher accuracy in pixel classification, resulting in significantly reduced processing time. The improved system realizes a cost reduction factor of 20 or more.

  19. The comparison of robust partial least squares regression with robust principal component regression on a real

    NASA Astrophysics Data System (ADS)

    Polat, Esra; Gunay, Suleyman

    2013-10-01

    One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes the overestimation of the regression parameters and increases the variance of these parameters. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, first a robust Principal Component Analysis (PCA) method for high-dimensional data is applied to the independent variables; then the dependent variables are regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the usage of the RPCR and RSIMPLS methods on an econometric data set, making a comparison of the two methods on an inflation model of Turkey. The considered methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.

  20. Robust extraction of the aorta and pulmonary artery from 3D MDCT image data

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2010-03-01

    Accurate definition of the aorta and pulmonary artery from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents robust methods for defining the aorta and pulmonary artery in the central chest. The methods work on both contrast enhanced and no-contrast 3D MDCT image data. The automatic methods use a common approach employing model fitting and selection and adaptive refinement. During the occasional event that more precise vascular extraction is desired or the method fails, we also have an alternate semi-automatic fail-safe method. The semi-automatic method extracts the vasculature by extending the medial axes into a user-guided direction. A ground-truth study over a series of 40 human 3D MDCT images demonstrates the efficacy, accuracy, robustness, and efficiency of the methods.

  1. A Robust H.264/AVC Video Watermarking Scheme with Drift Compensation

    PubMed Central

    Sun, Tanfeng; Zhou, Yue; Shi, Yun-Qing

    2014-01-01

    A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide the copyright information, in order to hold visual impact and distortion drift to a minimum. Drift compensation is also implemented to minimize the influence of the watermark. In addition, a discrete cosine transform (DCT), with its energy-compaction property, is applied to the motion vector residual group, which ensures robustness against intentional attacks. According to the experimental results, this scheme gains excellent imperceptibility and a low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression. PMID:24672376

  2. A robust H.264/AVC video watermarking scheme with drift compensation.

    PubMed

    Jiang, Xinghao; Sun, Tanfeng; Zhou, Yue; Wang, Wan; Shi, Yun-Qing

    2014-01-01

    A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide the copyright information, in order to hold visual impact and distortion drift to a minimum. Drift compensation is also implemented to minimize the influence of the watermark. In addition, a discrete cosine transform (DCT), with its energy-compaction property, is applied to the motion vector residual group, which ensures robustness against intentional attacks. According to the experimental results, this scheme gains excellent imperceptibility and a low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.

  3. Robust gene signatures from microarray data using genetic algorithms enriched with biological pathway keywords.

    PubMed

    Luque-Baena, R M; Urda, D; Gonzalo Claros, M; Franco, L; Jerez, J M

    2014-06-01

    Genetic algorithms are widely used in the estimation of expression profiles from microarray data. However, these techniques are unable to produce stable and robust solutions suitable for use in clinical and biomedical studies. This paper presents a novel two-stage evolutionary strategy for gene feature selection, combining a genetic algorithm with biological information extracted from the KEGG database. A comparative study is carried out over public data from three different types of cancer (leukemia, lung cancer and prostate cancer). Even though the analyses only use features having KEGG information, the results demonstrate that this two-stage evolutionary strategy increased the consistency, robustness and accuracy of a blind discrimination between relapsed and healthy individuals. Therefore, this approach could facilitate the definition of gene signatures for the clinical prognosis and diagnosis of cancer in the near future. Additionally, it could also be used for biological knowledge discovery about the studied disease.

  4. Biological Robustness: Paradigms, Mechanisms, and Systems Principles

    PubMed Central

    Whitacre, James Michael

    2012-01-01

    Robustness has been studied through the analysis of data sets, simulations, and a variety of experimental techniques that each have their own limitations but together confirm the ubiquity of biological robustness. Recent trends suggest that different types of perturbation (e.g., mutational, environmental) are commonly stabilized by similar mechanisms, and system sensitivities often display a long-tailed distribution with relatively few perturbations representing the majority of sensitivities. Conceptual paradigms from network theory, control theory, complexity science, and natural selection have been used to understand robustness; however, each paradigm has a limited scope of applicability, and there has been little discussion of the conditions that determine this scope or the relationships between paradigms. Systems properties such as modularity, bow-tie architectures, degeneracy, and other topological features are often positively associated with robust traits; however, common underlying mechanisms are rarely mentioned. For instance, many system properties support robustness through functional redundancy or through response diversity with responses regulated by competitive exclusion and cooperative facilitation. Moreover, few studies compare and contrast alternative strategies for achieving robustness such as homeostasis, adaptive plasticity, environment shaping, and environment tracking. These strategies share similarities in their utilization of adaptive and self-organization processes that are not well appreciated yet might be suggestive of reusable building blocks for generating robust behavior. PMID:22593762

  5. Biological robustness: paradigms, mechanisms, and systems principles.

    PubMed

    Whitacre, James Michael

    2012-01-01

    Robustness has been studied through the analysis of data sets, simulations, and a variety of experimental techniques that each have their own limitations but together confirm the ubiquity of biological robustness. Recent trends suggest that different types of perturbation (e.g., mutational, environmental) are commonly stabilized by similar mechanisms, and system sensitivities often display a long-tailed distribution with relatively few perturbations representing the majority of sensitivities. Conceptual paradigms from network theory, control theory, complexity science, and natural selection have been used to understand robustness; however, each paradigm has a limited scope of applicability, and there has been little discussion of the conditions that determine this scope or the relationships between paradigms. Systems properties such as modularity, bow-tie architectures, degeneracy, and other topological features are often positively associated with robust traits; however, common underlying mechanisms are rarely mentioned. For instance, many system properties support robustness through functional redundancy or through response diversity with responses regulated by competitive exclusion and cooperative facilitation. Moreover, few studies compare and contrast alternative strategies for achieving robustness such as homeostasis, adaptive plasticity, environment shaping, and environment tracking. These strategies share similarities in their utilization of adaptive and self-organization processes that are not well appreciated yet might be suggestive of reusable building blocks for generating robust behavior. PMID:22593762

  6. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to also include systematic error contributions, which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections, and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy of ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than that of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by a recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  7. Robust control design for aerospace applications

    NASA Technical Reports Server (NTRS)

    Yedavalli, Rama K.

    1989-01-01

    Time-domain control design for stability robustness of linear systems with structured uncertainty is addressed. Upper bounds on the linear perturbation of an asymptotically stable linear system are obtained, making it possible to maintain stability by using the structural information of the uncertainty. A quantitative measure called the stability robustness index is introduced and used to design controllers for robust stability. The proposed state feedback control design algorithm can be used, for a given set of perturbations, to select the range of control effort for which the system is stability-robust. Conversely it can be used, for a given control effort, to determine the size of the tolerable perturbation. The algorithm is illustrated with examples from aircraft control and large-space-structure control problems.

  8. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.

    1994-01-01

    This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable using a given sensor complement based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of mission report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHSTs) and a fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included. Also included are brief mission descriptions and plots summarizing the attitude accuracy attained using various sensor complements.

  9. Robustness of spatial micronetworks

    NASA Astrophysics Data System (ADS)

    McAndrew, Thomas C.; Danforth, Christopher M.; Bagrow, James P.

    2015-04-01

    Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure.
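    The percolation model described, in which each link fails with probability proportional to its spatial length, can be simulated in a few lines. The sketch below builds a small random geometric graph and measures the surviving largest component; the connection radius and failure-rate constant are arbitrary illustrative values, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
pos = rng.random((n, 2))   # node positions in the unit square

# Random geometric graph: link nodes closer than a connection radius
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if np.linalg.norm(pos[i] - pos[j]) < 0.35]
lengths = np.array([np.linalg.norm(pos[i] - pos[j]) for i, j in edges])

def largest_component(n, kept_edges):
    """Size of the largest connected component, via union-find."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i, j in kept_edges:
        parent[find(i)] = find(j)
    roots = [find(i) for i in range(n)]
    return max(roots.count(r) for r in set(roots))

# Each link fails with probability proportional to its length (capped at 1)
strength = 2.0
failed = rng.random(len(edges)) < np.minimum(1.0, strength * lengths)
kept = [e for e, f in zip(edges, failed) if not f]
frac = largest_component(n, kept) / n
print(frac)
```

    Sweeping `strength` and comparing against uniform (length-independent) failures at the same expected failure count reproduces the qualitative finding: length-dependent failures fragment the network faster, because the long links that stitch distant regions together are exactly the ones most likely to fail.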

  10. Uncertainty driven probabilistic voxel selection for image registration.

    PubMed

    Oreshkin, Boris N; Arbel, Tal

    2013-10-01

    This paper presents a novel probabilistic voxel selection strategy for medical image registration in time-sensitive contexts, where the goal is aggressive voxel sampling (e.g., using less than 1% of the total number) while maintaining registration accuracy and low failure rate. We develop a Bayesian framework whereby, first, a voxel sampling probability field (VSPF) is built based on the uncertainty on the transformation parameters. We then describe a practical, multi-scale registration algorithm, where, at each optimization iteration, different voxel subsets are sampled based on the VSPF. The approach maximizes accuracy without committing to a particular fixed subset of voxels. The probabilistic sampling scheme developed is shown to manage the tradeoff between the robustness of traditional random voxel selection (by permitting more exploration) and the accuracy of fixed voxel selection (by permitting a greater proportion of informative voxels).
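    The core idea of sampling voxels according to a probability field can be sketched with weighted sampling without replacement. The Gaussian "uncertainty" map below is a hypothetical stand-in for the VSPF of the paper, which is derived from transformation-parameter uncertainty:

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)
n_voxels = shape[0] * shape[1]

# Hypothetical voxel sampling probability field (VSPF): an uncertainty-like
# map that is largest near the image center
yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
uncertainty = np.exp(-((yy - 32.0) ** 2 + (xx - 32.0) ** 2) / (2 * 12.0 ** 2))
vspf = uncertainty.ravel() / uncertainty.sum()

# Sample about 1% of voxels, without replacement, weighted by the VSPF
k = n_voxels // 100
idx = rng.choice(n_voxels, size=k, replace=False, p=vspf)

# Sampled voxels concentrate where the field puts its mass
dist = np.sqrt((idx // shape[1] - 32.0) ** 2 + (idx % shape[1] - 32.0) ** 2)
print(len(idx), dist.mean())
```

    Because the draw is probabilistic rather than a fixed top-k selection, repeated iterations visit different voxel subsets, which is how the scheme balances exploration (robustness) against concentrating on informative voxels (accuracy).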

  11. Accuracy potentials for large space antenna structures

    NASA Technical Reports Server (NTRS)

    Hedgepeth, J. M.

    1980-01-01

    The relationships among materials selection, truss design, and manufacturing techniques in the interest of surface accuracies for large space antennas are discussed. Among the antenna configurations considered are: tetrahedral truss, pretensioned truss, and geodesic dome and radial rib structures. Comparisons are made of the accuracy achievable by truss and dome structure types for a wide variety of diameters, focal lengths, and wavelength of radiated signal, taking into account such deforming influences as solar heating-caused thermal transients and thermal gradients.

  12. Dealing with Outliers: Robust, Resistant Regression

    ERIC Educational Resources Information Center

    Glasser, Leslie

    2007-01-01

    Least-squares linear regression is the best of statistics and it is the worst of statistics. The reasons for this paradoxical claim, arising from possible inapplicability of the method and the excessive influence of "outliers", are discussed and substitute regression methods based on median selection, which is both robust and resistant, are…
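    One concrete median-selection method of the kind alluded to is the Theil-Sen estimator: the slope is the median of all pairwise slopes, and the intercept is the median residual. A short sketch with illustrative data (not the article's example):

```python
import numpy as np

def theil_sen(x, y):
    """Resistant line fit: slope is the median of all pairwise slopes,
    intercept the median of y - slope * x."""
    i, j = np.triu_indices(len(x), k=1)
    slope = np.median((y[j] - y[i]) / (x[j] - x[i]))
    return slope, np.median(y - slope * x)

rng = np.random.default_rng(7)
x = np.linspace(0, 10, 40)
y = 3.0 * x - 2.0 + rng.normal(0, 0.1, 40)
y[-4:] = 5.0   # a few wild points, as a transcription error might produce

slope, intercept = theil_sen(x, y)
ols_slope = np.polyfit(x, y, 1)[0]
print(slope, ols_slope)
```

    Because the median ignores the tails of the pairwise-slope distribution, a handful of outliers leaves the fit essentially unchanged, while the least-squares slope is pulled well away from the true value.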

  13. Accuracy metrics for judging time scale algorithms

    NASA Technical Reports Server (NTRS)

    Douglas, R. J.; Boulanger, J.-S.; Jacques, C.

    1994-01-01

    Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10^-15 for periods of 30-100 days.

  14. [Effect of algorithms for calibration set selection on quantitatively determining asiaticoside content in Centella total glucosides by near infrared spectroscopy].

    PubMed

    Zhan, Xue-yan; Zhao, Na; Lin, Zhao-zhou; Wu, Zhi-sheng; Yuan, Rui-juan; Qiao, Yan-jiang

    2014-12-01

    The appropriate algorithm for calibration set selection is one of the key technologies for a good NIR quantitative model. There are different algorithms for calibration set selection, such as the Random Sampling (RS) algorithm, the Conventional Selection (CS) algorithm, the Kennard-Stone (KS) algorithm and the Sample set Partitioning based on joint x-y distances (SPXY) algorithm. However, systematic comparisons among these algorithms are lacking. NIR quantitative models to determine the asiaticoside content in Centella total glucosides were established in the present paper, for which 7 indexes were classified and selected, and the effects of the CS, KS and SPXY algorithms for calibration set selection on the accuracy and robustness of the NIR quantitative models were investigated. The accuracy indexes of the NIR quantitative models with a calibration set selected by the SPXY algorithm were significantly different from those with a calibration set selected by the CS or KS algorithm, while the robustness indexes, such as RMSECV and |RMSEP-RMSEC|, were not significantly different. Therefore, the SPXY algorithm for calibration set selection could improve the predictive accuracy of NIR quantitative models to determine asiaticoside content in Centella total glucosides, and has no significant effect on the robustness of the models, which provides a reference for determining the appropriate algorithm for calibration set selection when NIR quantitative models are established for the solid system of traditional Chinese medicine.
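    Of the algorithms compared, Kennard-Stone is the simplest to state: seed the calibration set with the two most mutually distant samples, then repeatedly add the sample farthest from those already chosen. A sketch on illustrative data (SPXY extends the same idea by augmenting the x-distance with a y-distance term):

```python
import numpy as np

def kennard_stone(X, k):
    """Kennard-Stone calibration-set selection: seed with the two most distant
    samples, then repeatedly add the sample farthest from those already chosen."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    chosen = [int(i), int(j)]
    while len(chosen) < k:
        remaining = [m for m in range(len(X)) if m not in chosen]
        # each candidate's distance to its nearest already-chosen sample
        nearest = d[np.ix_(remaining, chosen)].min(axis=1)
        chosen.append(remaining[int(np.argmax(nearest))])
    return chosen

rng = np.random.default_rng(0)
X = rng.random((30, 3))       # e.g. 30 spectra compressed to 3 scores
cal = kennard_stone(X, 10)    # 10-sample calibration set
print(sorted(cal))
```

    The max-min criterion spreads the calibration samples across the whole x-space, which is why KS (and SPXY) tends to give more representative calibration sets than random sampling.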

  15. [Effect of algorithms for calibration set selection on quantitatively determining asiaticoside content in Centella total glucosides by near infrared spectroscopy].

    PubMed

    Zhan, Xue-yan; Zhao, Na; Lin, Zhao-zhou; Wu, Zhi-sheng; Yuan, Rui-juan; Qiao, Yan-jiang

    2014-12-01

    The appropriate algorithm for calibration set selection is one of the key technologies for a good NIR quantitative model. There are different algorithms for calibration set selection, such as the Random Sampling (RS) algorithm, the Conventional Selection (CS) algorithm, the Kennard-Stone (KS) algorithm and the Sample set Partitioning based on joint x-y distances (SPXY) algorithm. However, systematic comparisons among these algorithms are lacking. NIR quantitative models to determine the asiaticoside content in Centella total glucosides were established in the present paper, for which 7 indexes were classified and selected, and the effects of the CS, KS and SPXY algorithms for calibration set selection on the accuracy and robustness of the NIR quantitative models were investigated. The accuracy indexes of the NIR quantitative models with a calibration set selected by the SPXY algorithm were significantly different from those with a calibration set selected by the CS or KS algorithm, while the robustness indexes, such as RMSECV and |RMSEP-RMSEC|, were not significantly different. Therefore, the SPXY algorithm for calibration set selection could improve the predictive accuracy of NIR quantitative models to determine asiaticoside content in Centella total glucosides, and has no significant effect on the robustness of the models, which provides a reference for determining the appropriate algorithm for calibration set selection when NIR quantitative models are established for the solid system of traditional Chinese medicine. PMID:25881421

  16. Robust acoustic object detection

    NASA Astrophysics Data System (ADS)

    Amit, Yali; Koloydenko, Alexey; Niyogi, Partha

    2005-10-01

    We consider a novel approach to the problem of detecting phonological objects like phonemes, syllables, or words, directly from the speech signal. We begin by defining local features in the time-frequency plane with built in robustness to intensity variations and time warping. Global templates of phonological objects correspond to the coincidence in time and frequency of patterns of the local features. These global templates are constructed by using the statistics of the local features in a principled way. The templates have clear phonetic interpretability, are easily adaptable, have built in invariances, and display considerable robustness in the face of additive noise and clutter from competing speakers. We provide a detailed evaluation of the performance of some diphone detectors and a word detector based on this approach. We also perform some phonetic classification experiments based on the edge-based features suggested here.

  17. Doubly robust survival trees.

    PubMed

    Steingrimsson, Jon Arni; Diao, Liqun; Molinaro, Annette M; Strawderman, Robert L

    2016-09-10

    Estimating a patient's mortality risk is important in making treatment decisions. Survival trees are a useful tool and employ recursive partitioning to separate patients into different risk groups. Existing 'loss based' recursive partitioning procedures that would be used in the absence of censoring have previously been extended to the setting of right censored outcomes using inverse probability censoring weighted estimators of loss functions. In this paper, we propose new 'doubly robust' extensions of these loss estimators motivated by semiparametric efficiency theory for missing data that better utilize available data. Simulations and a data analysis demonstrate strong performance of the doubly robust survival trees compared with previously used methods. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27037609

  18. Robust reinforcement learning.

    PubMed

    Morimoto, Jun; Doya, Kenji

    2005-02-01

    This letter proposes a new reinforcement learning (RL) paradigm that explicitly takes into account input disturbance as well as modeling errors. The use of environmental models in RL is quite popular for both offline learning using simulations and for online action planning. However, the difference between the model and the real environment can lead to unpredictable, and often unwanted, results. Based on the theory of H∞ control, we consider a differential game in which a "disturbing" agent tries to make the worst possible disturbance while a "control" agent tries to make the best control input. The problem is formulated as finding a min-max solution of a value function that takes into account the amount of the reward and the norm of the disturbance. We derive online learning algorithms for estimating the value function and for calculating the worst disturbance and the best control in reference to the value function. We tested the paradigm, which we call robust reinforcement learning (RRL), on the control task of an inverted pendulum. In the linear domain, the policy and the value function learned by online algorithms coincided with those derived analytically by the linear H∞ control theory. For a fully nonlinear swing-up task, RRL achieved robust performance with changes in the pendulum weight and friction, while a standard reinforcement learning algorithm could not deal with these changes. We also applied RRL to the cart-pole swing-up task, and a robust swing-up policy was acquired.
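
    The min-max formulation can be illustrated with a toy dynamic-programming analogue of the paradigm: on a short chain of states, the controller maximises the value while a disturbance simultaneously minimises it. The chain, moves, and rewards below are invented for illustration, and this shows only the worst-case principle, not the paper's online actor-disturber-critic algorithms:

    ```python
    import numpy as np

    N_STATES, GAMMA = 4, 0.9
    ACTIONS = DISTURBANCES = (-1, +1)

    def step(s, a, d):
        """Deterministic toy dynamics: the disturbance adds to the control move."""
        s_next = min(max(s + a + d, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        return s_next, reward

    def robust_value_iteration(n_iter=100):
        V = np.zeros(N_STATES)
        for _ in range(n_iter):
            for s in range(N_STATES):
                # controller maximises while the disturbance minimises
                V[s] = max(
                    min(step(s, a, d)[1] + GAMMA * V[step(s, a, d)[0]]
                        for d in DISTURBANCES)
                    for a in ACTIONS
                )
        return V

    V = robust_value_iteration()
    print(V)  # only the goal state keeps value: the worst case is pessimistic
    ```

    Here the disturbance can fully cancel any control move, so the worst-case value of every non-goal state collapses to zero, a small demonstration of how conservative min-max solutions can be when the disturbance is unpenalised.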

  19. Competition improves robustness against loss of information.

    PubMed

    Kermani Kolankeh, Arash; Teichmann, Michael; Hamker, Fred H

    2015-01-01

    A substantial number of works have aimed at modeling the receptive field properties of the primary visual cortex (V1). Their evaluation criterion is usually the similarity of the model response properties to the responses recorded from biological organisms. However, as several algorithms have demonstrated some degree of similarity to biological data based on the existing criteria, we focus on robustness against loss of information, in the form of occlusions, as an additional constraint for better understanding the algorithmic level of early vision in the brain. We investigate the influence of competition mechanisms on this robustness. To that end, we compared four methods employing different competition mechanisms, namely, independent component analysis, non-negative matrix factorization with sparseness constraint, predictive coding/biased competition, and a Hebbian neural network with lateral inhibitory connections. Each of these methods is known to be capable of developing receptive fields comparable to those of V1 simple cells. Since directly measuring the robustness of methods with simple-cell-like receptive fields against occlusion is difficult, we measure robustness via classification accuracy on the MNIST handwritten digit dataset. We trained all methods on the MNIST training set and tested them on an MNIST test set with different levels of occlusion. We observe that methods which employ competitive mechanisms are more robust against loss of information. The kind of competition mechanism also plays an important role: global feedback inhibition, as employed in predictive coding/biased competition, has an advantage over local lateral inhibition learned by an anti-Hebb rule.
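
    The occlusion protocol is easy to mimic: fit a classifier on clean images, then track accuracy as a growing patch of each image is zeroed out. The sketch below substitutes a synthetic two-class dataset and a nearest-centroid classifier for the paper's methods, purely to show the measurement:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    SIDE = 8  # images are SIDE x SIDE, stored flattened

    def occlude(images, frac):
        """Zero out a square patch covering roughly `frac` of each image."""
        out = images.reshape(-1, SIDE, SIDE).copy()
        k = max(1, int(SIDE * np.sqrt(frac)))
        out[:, :k, :k] = 0.0              # fixed top-left patch, for simplicity
        return out.reshape(len(images), -1)

    # Two synthetic "classes": noisy copies of two fixed prototype images.
    protos = rng.random((2, SIDE * SIDE))
    labels = rng.integers(0, 2, 200)
    images = protos[labels] + 0.1 * rng.standard_normal((200, SIDE * SIDE))

    # Nearest-centroid classifier fitted on the clean images.
    centroids = np.stack([images[labels == c].mean(axis=0) for c in (0, 1)])

    def accuracy(X):
        pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
        return (pred == labels).mean()

    for frac in (0.0, 0.25, 0.5, 1.0):
        print(f"occlusion {frac:.0%}: accuracy {accuracy(occlude(images, frac)):.2f}")
    ```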

  20. A robust multi-frame image blind deconvolution algorithm via total variation

    NASA Astrophysics Data System (ADS)

    Zhou, Haiyang; Xia, Guo; Liu, Qianshun; Yu, Feihong

    2015-10-01

    Image blind deconvolution is a practical inverse problem in modern imaging sciences, including consumer photography, astronomical imaging, medical imaging, and microscopy. Among recent blind deconvolution algorithms, total variation based methods are advantageous for large blur kernels. However, their computational cost is heavy and the error in the estimated kernel is not handled properly. Moreover, using the whole image to estimate the blur kernel is inaccurate, because regions with insufficient edge information harm the accuracy of the estimation. Here, we propose a robust multi-frame image blind deconvolution algorithm to handle this complicated imaging model and apply it in the engineering community. In the proposed method, a patch and kernel selection scheme selects effective patches for kernel estimation instead of using the whole image; a total variation based algorithm then estimates the blur kernel; after kernel estimation, a new refinement scheme refines the pre-estimated multi-frame kernels; finally, a robust non-blind deconvolution method recovers the final latent sharp image with the refined blur kernel. Objective experiments on both synthesized and real images evaluate the efficiency and robustness of our algorithm and illustrate that this approach not only converges rapidly but also effectively recovers a high quality latent image from multiple blurry images.

  1. Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding

    PubMed Central

    2013-01-01

    fastest and produced the least biased, the most precise, robust and stable estimates of predictive accuracy. These properties argue for routinely using Methods 5 and 7 to assess predictive accuracy in genomic selection studies. PMID:24314298

  2. Robust Systems Test Framework

    2003-01-01

    The Robust Systems Test Framework (RSTF) provides a means of specifying and running test programs on various computation platforms. RSTF provides a level of specification above standard scripting languages. During a set of runs, standard timing information is collected. The RSTF specification can also gather job-specific information, and can include ways to classify test outcomes. All results and scripts can be stored into and retrieved from an SQL database for later data analysis. RSTF also provides operations for managing the script and result files, and for compiling applications and gathering compilation information such as optimization flags.

  3. Robust quantum spatial search

    NASA Astrophysics Data System (ADS)

    Tulsi, Avatar

    2016-07-01

    Quantum spatial search has been widely studied, with most of the study focusing on quantum walk algorithms. We show that quantum walk algorithms are extremely sensitive to systematic errors. We present a recursive algorithm which offers significant robustness to certain systematic errors. To search N items, our recursive algorithm can tolerate errors of size O(1/√(ln N)), which is exponentially better than quantum walk algorithms, for which the tolerable error size is only O(ln N/√N). Also, our algorithm does not need any ancilla qubit. Thus our algorithm is much easier to implement experimentally compared to quantum walk algorithms.

  4. Robust Kriged Kalman Filtering

    SciTech Connect

    Baingana, Brian; Dall'Anese, Emiliano; Mateos, Gonzalo; Giannakis, Georgios B.

    2015-11-11

    Although the kriged Kalman filter (KKF) has well-documented merits for prediction of spatial-temporal processes, its performance degrades in the presence of outliers due to anomalous events or measurement equipment failures. This paper proposes a robust KKF model that explicitly accounts for the presence of measurement outliers. Exploiting outlier sparsity, a novel l1-regularized estimator is put forth that jointly predicts the spatial-temporal process at unmonitored locations while identifying measurement outliers. Numerical tests are conducted on a synthetic Internet protocol (IP) network and real transformer load data. Test results corroborate the effectiveness of the novel estimator in joint spatial prediction and outlier identification.
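
    The outlier-sparsity mechanism can be isolated in a toy version of the estimator: model the data as a common level plus a sparse outlier vector, penalise the outliers with an l1 norm, and alternate exact minimisations (the outlier update is a soft-threshold). The kriging and network parts of the paper are omitted, and all values are illustrative:

    ```python
    import numpy as np

    def soft_threshold(r, t):
        return np.sign(r) * np.maximum(np.abs(r) - t, 0.0)

    def robust_level(y, lam=2.0, n_iter=20):
        """Minimise sum (y_i - mu - o_i)^2 + lam * sum |o_i| over (mu, o)."""
        o = np.zeros_like(y)
        for _ in range(n_iter):
            mu = (y - o).mean()                  # exact minimiser in mu
            o = soft_threshold(y - mu, lam / 2)  # exact minimiser in o
        return mu, o

    rng = np.random.default_rng(1)
    y = 0.1 * rng.standard_normal(100)
    y[10] += 5.0
    y[40] -= 5.0                                 # two injected outliers
    mu, o = robust_level(y)
    print("outliers flagged at:", np.nonzero(o)[0])
    ```

    The l1 penalty keeps `o` exactly zero wherever the residual is small, so the nonzero entries of `o` double as an outlier flag, the same role the sparse outlier vector plays in the paper's joint estimator.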

  5. Robust Systems Test Framework

    SciTech Connect

    Ballance, Robert A.

    2003-01-01

    The Robust Systems Test Framework (RSTF) provides a means of specifying and running test programs on various computation platforms. RSTF provides a level of specification above standard scripting languages. During a set of runs, standard timing information is collected. The RSTF specification can also gather job-specific information, and can include ways to classify test outcomes. All results and scripts can be stored into and retrieved from an SQL database for later data analysis. RSTF also provides operations for managing the script and result files, and for compiling applications and gathering compilation information such as optimization flags.

  6. Robust telescope scheduling

    NASA Technical Reports Server (NTRS)

    Swanson, Keith; Bresina, John; Drummond, Mark

    1994-01-01

    This paper presents a technique for building robust telescope schedules that tend not to break. The technique is called Just-In-Case (JIC) scheduling and it implements the common sense idea of being prepared for likely errors, just in case they should occur. The JIC algorithm analyzes a given schedule, determines where it is likely to break, reinvokes a scheduler to generate a contingent schedule for each highly probable break case, and produces a 'multiply contingent' schedule. The technique was developed for an automatic telescope scheduling problem, and the paper presents empirical results showing that Just-In-Case scheduling performs extremely well for this problem.

  7. Robust Photon Locking

    SciTech Connect

    Bayer, T.; Wollenhaupt, M.; Sarpe-Tudoran, C.; Baumert, T.

    2009-01-16

    We experimentally demonstrate a strong-field coherent control mechanism that combines the advantages of photon locking (PL) and rapid adiabatic passage (RAP). Unlike earlier implementations of PL and RAP by pulse sequences or chirped pulses, we use shaped pulses generated by phase modulation of the spectrum of a femtosecond laser pulse with a generalized phase discontinuity. The novel control scenario is characterized by a high degree of robustness achieved via adiabatic preparation of a state of maximum coherence. Subsequent phase control allows for efficient switching among different target states. We investigate both properties by photoelectron spectroscopy on potassium atoms interacting with the intense shaped light field.

  8. Optimal design of robot accuracy compensators

    SciTech Connect

    Zhuang, H.; Roth, Z.S. . Robotics Center and Electrical Engineering Dept.); Hamano, Fumio . Dept. of Electrical Engineering)

    1993-12-01

    The problem of optimal design of robot accuracy compensators is addressed. Robot accuracy compensation requires that actual kinematic parameters of a robot be previously identified. Additive corrections of joint commands, including those at singular configurations, can be computed without solving the inverse kinematics problem for the actual robot. This is done by either the damped least-squares (DLS) algorithm or the linear quadratic regulator (LQR) algorithm, which is a recursive version of the DLS algorithm. The weight matrix in the performance index can be selected to achieve specific objectives, such as emphasizing end-effector's positioning accuracy over orientation accuracy or vice versa, or taking into account proximity to robot joint travel limits and singularity zones. The paper also compares the LQR and the DLS algorithms in terms of computational complexity, storage requirement, and programming convenience. Simulation results are provided to show the effectiveness of the algorithms.
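
    The damped least-squares correction at the core of both algorithms has a compact closed form, Δq = Jᵀ(J Jᵀ + λ²I)⁻¹Δx. A minimal numpy sketch on a planar two-link arm near its outstretched singularity (link lengths, joint angles and damping factor are illustrative choices, not values from the paper):

    ```python
    import numpy as np

    def dls_correction(J, dx, lam=0.1):
        """Damped least-squares joint correction dq = J^T (J J^T + lam^2 I)^-1 dx."""
        m = J.shape[0]
        return J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(m), dx)

    # Planar 2-link arm Jacobian; q2 ~ 0 puts it near the outstretched singularity.
    l1 = l2 = 1.0
    q1, q2 = 0.3, 1e-3
    J = np.array([
        [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
        [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)],
    ])
    dx = np.array([0.01, 0.0])            # small desired end-effector correction
    dq_dls = dls_correction(J, dx)        # stays bounded near the singularity
    dq_exact = np.linalg.solve(J, dx)     # undamped inverse blows up
    print(np.linalg.norm(dq_dls), np.linalg.norm(dq_exact))
    ```

    The damping bounds the per-direction gain at 1/(2λ), which is exactly why corrections remain computable at singular configurations, at the cost of a small residual tracking error.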

  9. Robust control for uncertain structures

    NASA Technical Reports Server (NTRS)

    Douglas, Joel; Athans, Michael

    1991-01-01

    Viewgraphs on robust control for uncertain structures are presented. Topics covered include: robust linear quadratic regulator (RLQR) formulas; mismatched LQR design; RLQR design; interpretations of RLQR design; disturbance rejection; and performance comparisons: RLQR vs. mismatched LQR.

  10. High Accuracy Fuel Flowmeter, Phase 1

    NASA Technical Reports Server (NTRS)

    Mayer, C.; Rose, L.; Chan, A.; Chin, B.; Gregory, W.

    1983-01-01

    Technology related to aircraft fuel mass-flowmeters was reviewed to determine which flowmeter types could provide 0.25%-of-point accuracy over a 50-to-one range in flowrates. Three types were selected and further analyzed to determine what problem areas prevented them from meeting the high accuracy requirement, and what the further development needs were for each. A dual-turbine volumetric flowmeter with densi-viscometer and microprocessor compensation was selected for its relative simplicity and fast response time. An angular momentum type with a motor-driven, spring-restrained turbine and viscosity shroud was selected for its direct mass-flow output. This concept also employed a turbine for fast response and a microcomputer for accurate viscosity compensation. The third concept employed a vortex precession volumetric flowmeter and was selected for its unobtrusive design. Like the turbine flowmeter, it uses a densi-viscometer and microprocessor for density correction and accurate viscosity compensation.

  11. Robust omniphobic surfaces

    PubMed Central

    Tuteja, Anish; Choi, Wonjae; Mabry, Joseph M.; McKinley, Gareth H.; Cohen, Robert E.

    2008-01-01

    Superhydrophobic surfaces display water contact angles greater than 150° in conjunction with low contact angle hysteresis. Microscopic pockets of air trapped beneath the water droplets placed on these surfaces lead to a composite solid-liquid-air interface in thermodynamic equilibrium. Previous experimental and theoretical studies suggest that it may not be possible to form similar fully-equilibrated, composite interfaces with drops of liquids, such as alkanes or alcohols, that possess significantly lower surface tension than water (γlv = 72.1 mN/m). In this work we develop surfaces possessing re-entrant texture that can support strongly metastable composite solid-liquid-air interfaces, even with very low surface tension liquids such as pentane (γlv = 15.7 mN/m). Furthermore, we propose four design parameters that predict the measured contact angles for a liquid droplet on a textured surface, as well as the robustness of the composite interface, based on the properties of the solid surface and the contacting liquid. These design parameters allow us to produce two different families of re-entrant surfaces— randomly-deposited electrospun fiber mats and precisely fabricated microhoodoo surfaces—that can each support a robust composite interface with essentially any liquid. These omniphobic surfaces display contact angles greater than 150° and low contact angle hysteresis with both polar and nonpolar liquids possessing a wide range of surface tensions. PMID:19001270

  12. Robust geostatistical analysis of spatial data

    NASA Astrophysics Data System (ADS)

    Papritz, Andreas; Künsch, Hans Rudolf; Schwierz, Cornelia; Stahel, Werner A.

    2013-04-01

    Most geostatistical software tools rely on non-robust algorithms. This is unfortunate, because outlying observations are the rule rather than the exception, in particular in environmental data sets. Outliers affect the modelling of the large-scale spatial trend, the estimation of the spatial dependence of the residual variation, and the predictions by kriging. Identifying outliers manually is cumbersome and requires expertise, because one needs parameter estimates to decide which observation is a potential outlier. Moreover, inference after the rejection of some observations is problematic. A better approach is to use robust algorithms that automatically prevent outlying observations from having undue influence. Former studies on robust geostatistics focused on robust estimation of the sample variogram and ordinary kriging without external drift. Furthermore, Richardson and Welsh (1995) proposed a robustified version of (restricted) maximum likelihood ([RE]ML) estimation for the variance components of a linear mixed model, which was later used by Marchant and Lark (2007) for robust REML estimation of the variogram. We propose here a novel method for robust REML estimation of the variogram of a Gaussian random field that is possibly contaminated by independent errors from a long-tailed distribution. It is based on robustification of estimating equations for Gaussian REML estimation (Welsh and Richardson, 1997). Besides robust estimates of the parameters of the external drift and of the variogram, the method also provides standard errors for the estimated parameters, robustified kriging predictions at both sampled and non-sampled locations, and kriging variances. Apart from presenting our modelling framework, we shall present selected simulation results by which we explored the properties of the new method. This will be complemented by an analysis of a data set on heavy metal contamination of the soil in the vicinity of a metal smelter. Marchant, B.P. and Lark, R

  13. Neutral evolution of robustness in Drosophila microRNA precursors.

    PubMed

    Price, Nicholas; Cartwright, Reed A; Sabath, Niv; Graur, Dan; Azevedo, Ricardo B R

    2011-07-01

    Mutational robustness describes the extent to which a phenotype remains unchanged in the face of mutations. Theory predicts that the strength of direct selection for mutational robustness is at most the magnitude of the rate of deleterious mutation. As far as nucleic acid sequences are concerned, only long sequences in organisms with high deleterious mutation rates and large population sizes are expected to evolve mutational robustness. Surprisingly, recent studies have concluded that molecules that meet none of these conditions--the microRNA precursors (pre-miRNAs) of multicellular eukaryotes--show signs of selection for mutational and/or environmental robustness. To resolve the apparent disagreement between theory and these studies, we have reconstructed the evolutionary history of Drosophila pre-miRNAs and compared the robustness of each sequence to that of its reconstructed ancestor. In addition, we "replayed the tape" of pre-miRNA evolution via simulation under different evolutionary assumptions and compared these alternative histories with the actual one. We found that Drosophila pre-miRNAs have evolved under strong purifying selection against changes in secondary structure. Contrary to earlier claims, there is no evidence that these RNAs have been shaped by either direct or congruent selection for any kind of robustness. Instead, the high robustness of Drosophila pre-miRNAs appears to be mostly intrinsic and likely a consequence of selection for functional structures.

  14. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

    The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.

  15. Efficient robust conditional random fields.

    PubMed

    Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A

    2015-10-01

    Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features as well as suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly in solving the training procedure of CRFs, and will degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, therefore enabling discovery of the relevant unary features and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate O(1/k²) (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
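
    The accelerated training idea (current gradient combined with gradient history, step size from the Lipschitz constant, optimal-rate guarantees) can be sketched on a simpler l1-regularised problem than a CRF. The following FISTA-style sketch illustrates that family of optimal first-order methods, not the paper's OGM itself; the lasso objective and problem sizes are my own choices:

    ```python
    import numpy as np

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def accelerated_l1(A, b, lam, n_iter=500):
        """Accelerated proximal gradient for min 0.5*||Ax-b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth part
        x = np.zeros(A.shape[1])
        z, t = x.copy(), 1.0
        for _ in range(n_iter):
            x_new = soft(z - A.T @ (A @ z - b) / L, lam / L)  # prox-gradient step
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            z = x_new + (t - 1) / t_new * (x_new - x)         # momentum from history
            x, t = x_new, t_new
        return x

    rng = np.random.default_rng(2)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[[3, 30, 77]] = (1.5, -2.0, 1.0)
    b = A @ x_true
    x_hat = accelerated_l1(A, b, lam=1.0)   # recovers the sparse coefficients
    ```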

  16. Evolving Robust Gene Regulatory Networks

    PubMed Central

    Noman, Nasimul; Monjo, Taku; Moscato, Pablo; Iba, Hitoshi

    2015-01-01

    Design and implementation of robust network modules is essential for construction of complex biological systems through hierarchical assembly of ‘parts’ and ‘devices’. The robustness of gene regulatory networks (GRNs) is ascribed chiefly to the underlying topology. The automatic designing capability of GRN topology that can exhibit robust behavior can dramatically change the current practice in synthetic biology. A recent study shows that Darwinian evolution can gradually develop higher topological robustness. Subsequently, this work presents an evolutionary algorithm that simulates natural evolution in silico, for identifying network topologies that are robust to perturbations. We present a Monte Carlo based method for quantifying topological robustness and designed a fitness approximation approach for efficient calculation of topological robustness which is computationally very intensive. The proposed framework was verified using two classic GRN behaviors: oscillation and bistability, although the framework is generalized for evolving other types of responses. The algorithm identified robust GRN architectures which were verified using different analysis and comparison. Analysis of the results also shed light on the relationship among robustness, cooperativity and complexity. This study also shows that nature has already evolved very robust architectures for its crucial systems; hence simulation of this natural process can be very valuable for designing robust biological systems. PMID:25616055
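
    The Monte Carlo robustness measure described above can be illustrated generically: perturb a motif's parameters at random and count the fraction of perturbations that preserve the qualitative behaviour. The motif below (a self-activating gene with Hill coefficient 2, whose behaviour of interest is bistability) and all parameter values are illustrative assumptions, not the paper's model:

    ```python
    import numpy as np

    def n_stable_states(beta, gamma, K):
        """Count stable fixed points of dx/dt = beta*x^2/(K^2+x^2) - gamma*x."""
        x = np.linspace(1e-6, 20.0, 8000)
        f = beta * x**2 / (K**2 + x**2) - gamma * x
        # a stable fixed point is where f crosses zero from + to -; x = 0 is also stable
        down = np.count_nonzero((f[:-1] > 0) & (f[1:] <= 0))
        return 1 + down

    def robustness(nominal, spread=0.2, n_mc=2000, seed=4):
        """Fraction of lognormal parameter perturbations preserving the behaviour."""
        rng = np.random.default_rng(seed)
        target = n_stable_states(*nominal)
        keep = sum(
            n_stable_states(*[p * np.exp(spread * rng.standard_normal())
                              for p in nominal]) == target
            for _ in range(n_mc)
        )
        return keep / n_mc

    print(robustness((4.0, 1.0, 1.0)))   # nominal (beta, gamma, K) is bistable
    ```

    For this motif bistability holds exactly when beta > 2*gamma*K, so the Monte Carlo estimate can be checked against the analytic condition, which is the kind of sanity check a fitness approximation for robustness needs.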

  17. Robustness in Digital Hardware

    NASA Astrophysics Data System (ADS)

    Woods, Roger; Lightbody, Gaye

    The growth in electronics has probably been the equivalent of the Industrial Revolution in the past century in terms of how much it has transformed our daily lives. There is a great dependency on technology whether it is in the devices that control travel (e.g., in aircraft or cars), our entertainment and communication systems, or our interaction with money, which has been empowered by the onset of Internet shopping and banking. Despite this reliance, there is still a danger that at some stage devices will fail within the equipment's lifetime. The purpose of this chapter is to look at the factors causing failure and address possible measures to improve robustness in digital hardware technology and specifically chip technology, giving a long-term forecast that will not reassure the reader!

  18. Robust automated knowledge capture.

    SciTech Connect

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task- and experience-related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications, and in particular, the individual characteristics that underlie adaptive thinking.

  19. Robust springback compensation

    NASA Astrophysics Data System (ADS)

    Carleer, Bart; Grimm, Peter

    2013-12-01

    Springback simulation and springback compensation are increasingly applied in the productive use of die engineering. In order to successfully compensate a tool, accurate springback results are needed as well as an effective compensation approach. In this paper a methodology is introduced for compensating tools effectively. The first step is the full process simulation, meaning that not only the drawing operation is simulated but also all secondary operations, such as trimming and flanging. The second step is verification that the process is robust, meaning that it obtains repeatable results. To compensate effectively, a minimum clamping concept is defined. Once these preconditions are fulfilled, the tools can be compensated.

  20. Extensibility of a linear rapid robust design methodology

    NASA Astrophysics Data System (ADS)

    Steinfeldt, Bradley A.; Braun, Robert D.

    2016-05-01

    The extensibility of a linear rapid robust design methodology is examined. This analysis is approached from a computational cost and accuracy perspective. The sensitivity of the solution's computational cost is examined by analysing effects such as the number of design variables, nonlinearity of the CAs, and nonlinearity of the response, in addition to several potential complexity metrics. Relative to traditional robust design methods, the linear rapid robust design methodology scaled better with the size of the problem and had performance that exceeded the traditional techniques examined. The accuracy of applying a method with linear fundamentals to nonlinear problems was examined. It is observed that if the magnitude of the nonlinearity is less than 1000 times that of the nominal linear response, the error associated with applying successive linearization will result in response errors of less than 10% compared with the full nonlinear response.

  1. Parallax-Robust Surveillance Video Stitching

    PubMed Central

    He, Botao; Yu, Shaohua

    2015-01-01

    This paper presents a parallax-robust video stitching technique for temporally synchronized surveillance video. An efficient two-stage video stitching procedure is proposed to build wide Field-of-View (FOV) videos for surveillance applications. In the stitching-model calculation stage, we develop a layered warping algorithm to align the background scenes, which is location-dependent and turns out to be more robust to parallax than traditional global projective warping methods. In the selective seam updating stage, we propose a change-detection based optimal seam selection approach to avert ghosting and artifacts caused by moving foregrounds. Experimental results demonstrate that our procedure can efficiently stitch multi-view videos into a wide FOV video output without ghosting and noticeable seams. PMID:26712756

  2. Robust Automatic Pectoral Muscle Segmentation from Mammograms Using Texture Gradient and Euclidean Distance Regression.

    PubMed

    Bora, Vibha Bafna; Kothari, Ashwin G; Keskar, Avinash G

    2016-02-01

    In computer-aided diagnosis (CAD) of the mediolateral oblique (MLO) view of a mammogram, the accuracy of tissue segmentation depends highly on the exclusion of the pectoral muscle. Robust methods for such exclusion are essential, as the normal presence of pectoral muscle can bias the decision of CAD. In this paper, a novel texture-gradient-based approach for automatic segmentation of the pectoral muscle is proposed. The pectoral edge is initially approximated to a straight line by applying the Hough transform to the Probable Texture Gradient (PTG) map of the mammogram, followed by block averaging with the aid of the approximated line. Furthermore, a smooth pectoral muscle curve is achieved with the proposed Euclidean Distance Regression (EDR) technique and polynomial modeling. The algorithm is robust to texture and overlapping fibroglandular tissues. The method is validated with 340 MLO views from three databases, including 200 randomly selected scanned film images from miniMIAS, 100 computed radiography images, and 40 full-field digital mammogram images. Qualitatively, 96.75% of the pectoral muscles are segmented with an acceptable pectoral score index. The proposed method not only outperforms state-of-the-art approaches but also accurately quantifies the pectoral edge. Thus, its high accuracy and relatively quick processing time clearly justify its suitability for CAD.

  3. Biometric feature embedding using robust steganography technique

    NASA Astrophysics Data System (ADS)

    Rashid, Rasber D.; Sellahewa, Harin; Jassim, Sabah A.

    2013-05-01

    This paper is concerned with robust steganographic techniques to hide and communicate biometric data in mobile media objects such as images over open networks. More specifically, the aim is to embed binarised features, extracted using discrete wavelet transforms and local binary patterns of face images, as a secret message in an image. The need for such techniques can arise in law enforcement, forensics, counter-terrorism, internet/mobile banking and border control. What differentiates this problem from normal information hiding is the added requirement that there should be minimal effect on face recognition accuracy. We propose an LSB-Witness embedding technique in which the secret message is already present in the LSB plane; instead of changing the cover image's LSB values, the second LSB plane is changed to stand as a witness/informer to the receiver during message recovery. Although this approach may affect stego quality, it eliminates the weakness of traditional LSB schemes that steganalysis techniques such as PoV and RS exploit to detect the existence of a secret message. Experimental results show that the proposed method is robust against PoV and RS attacks compared with other LSB variants. We also discuss variants of this approach and determine capacity requirements for embedding face biometric feature vectors while maintaining face recognition accuracy.
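
    The abstract leaves the exact embedding rule unspecified; below is a minimal sketch of one plausible reading of the LSB-Witness idea, in which the cover's LSB plane is never modified and each pixel's second LSB is rewritten to tell the receiver whether that pixel's untouched LSB already equals the message bit. The function names and the bit-pairing convention are illustrative assumptions, not taken from the paper.

```python
def embed_witness(cover, bits):
    """Embed one bit per pixel without touching the cover's LSB plane.

    The second-least-significant bit is rewritten as a 'witness': 1 means
    the pixel's (unchanged) LSB already equals the message bit, 0 means
    the receiver should flip the LSB on recovery.
    """
    stego = []
    for pixel, bit in zip(cover, bits):
        witness = 1 if (pixel & 1) == bit else 0
        stego.append((pixel & ~0b10) | (witness << 1))  # rewrite 2nd LSB only
    return stego


def extract_witness(stego):
    """Recover the message by combining each pixel's LSB with its witness bit."""
    bits = []
    for pixel in stego:
        lsb, witness = pixel & 1, (pixel >> 1) & 1
        bits.append(lsb if witness else 1 - lsb)
    return bits


# Toy demonstration on four 8-bit pixel values.
cover = [10, 11, 12, 13]
message = [1, 0, 0, 1]
stego = embed_witness(cover, message)
```

    Because only the second bit plane changes, the first-order statistics of the LSB plane remain those of the cover image, which is consistent with the claimed resistance to PoV and RS steganalysis.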

  4. Making Activity Recognition Robust against Deceptive Behavior.

    PubMed

    Saeb, Sohrab; Körding, Konrad; Mohr, David C

    2015-01-01

    Healthcare services increasingly use activity recognition technology to track the daily activities of individuals. In some cases, this is used to provide incentives; for example, some health insurance companies offer discounts to customers who are physically active, based on the data collected from their activity tracking devices. There is therefore an increasing motivation for individuals to cheat, by making activity trackers detect activities that increase their benefits rather than the ones they actually perform. In this study, we used a novel method to make activity recognition robust against deceptive behavior. We asked 14 subjects to attempt to trick our smartphone-based activity classifier by making it detect an activity other than the one they actually performed, for example by shaking the phone while seated to make the classifier detect walking. If they succeeded, we used their motion data to retrain the classifier, and asked them to try to trick it again. The experiment ended when subjects could no longer cheat. We found that some subjects were not able to trick the classifier at all, while others required five rounds of retraining. While classifiers trained on normal activity data predicted true activity with ~38% accuracy, training on the data gathered during the deceptive behavior increased their accuracy to ~84%. We conclude that learning the deceptive behavior of one individual helps to detect the deceptive behavior of others. Thus, we can make current activity recognition robust to deception by including deceptive activity data from a few individuals. PMID:26659118
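
    The retraining loop described above can be sketched in a few lines. This toy version uses a 1-nearest-neighbour rule on a single invented feature (acceleration variance) with invented data values; it only illustrates the iterate-until-no-cheating protocol, not the paper's actual classifier.

```python
def predict(train, x):
    """1-nearest-neighbour prediction on a single scalar feature."""
    feature, label = min(train, key=lambda fl: abs(fl[0] - x))
    return label

# Normal training data: (acceleration-variance feature, true activity).
train = [(0.1, "sit"), (0.2, "sit"), (3.0, "walk"), (3.5, "walk")]

# Deceptive samples: shaking the phone while seated yields walk-like variance.
deceptive = [(2.8, "sit"), (3.1, "sit")]

for _ in range(5):  # the study reports at most five retraining rounds
    fooled = [(f, true) for f, true in deceptive if predict(train, f) != true]
    if not fooled:
        break           # subjects can no longer cheat
    train += fooled     # retrain on the deceptive data with its true labels
```

    After the first retraining round the deceptive samples are classified by their true activity, mirroring the paper's finding that deceptive data from a few individuals suffices to harden the classifier.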

  6. Formulation of radiometric feasibility measures for feature selection and planning in visual servoing.

    PubMed

    Janabi-Sharifi, Farrokh; Ficocelli, M

    2004-04-01

    Feature selection and planning are integral parts of visual servoing systems. Because many irrelevant and unreliable image features usually exist, higher accuracy and robustness can be expected by selecting and planning good features. The assumption of perfect radiometric conditions is common in visual servoing. This paper discusses the issue of radiometric constraints for feature selection in the context of visual servoing. Radiometric constraints are presented and measures are formulated to select the optimal features (in a radiometric sense) from a set of candidate features. Simulation and experimental results verify the effectiveness of the proposed measures.

  7. A robust DCT domain watermarking algorithm based on chaos system

    NASA Astrophysics Data System (ADS)

    Xiao, Mingsong; Wan, Xiaoxia; Gan, Chaohua; Du, Bo

    2009-10-01

    Digital watermarking is a technique for protecting and enforcing the intellectual property (IP) rights of digital media, such as digital images, in copyright transactions. Many digital watermarking algorithms exist; however, most are not robust enough against geometric attacks and signal-processing operations. In this paper, a robust watermarking algorithm based on a chaos array in the DCT (discrete cosine transform) domain for gray images is proposed. The algorithm provides a one-to-one method to extract the watermark. Experimental results show that the new method has high accuracy and is highly robust against geometric attacks, geometric transformations and signal-processing operations. Furthermore, anyone without knowledge of the key cannot find the position of the embedded watermark. As a result, the watermark is not easy to modify, so the scheme is secure and robust.
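
    The abstract gives no construction details, but a common way to key watermark positions to a chaos system is the logistic map; the sketch below derives a keyed, reproducible set of embedding positions from a secret initial value. This is a generic illustration of the "chaos array" idea, not the paper's exact algorithm.

```python
def chaotic_positions(key, n_coeffs, n_bits, r=3.99):
    """Derive n_bits distinct coefficient indices (out of n_coeffs) from a
    secret key using the logistic map x -> r*x*(1-x) in its chaotic regime.

    Without the key (the map's initial value in (0, 1)), the embedding
    positions cannot be reproduced.  Requires n_bits <= n_coeffs.
    """
    x, positions, seen = key, [], set()
    while len(positions) < n_bits:
        x = r * x * (1 - x)
        idx = int(x * n_coeffs) % n_coeffs
        if idx not in seen:        # skip collisions to keep positions unique
            seen.add(idx)
            positions.append(idx)
    return positions
```

    Only a receiver who knows the key can regenerate the positions and locate the watermark; even a nearby wrong key quickly yields a different position set, reflecting the sensitivity of chaotic maps to initial conditions.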

  8. Robust neuronal dynamics in premotor cortex during motor planning

    PubMed Central

    Li, Nuo; Daie, Kayvon; Svoboda, Karel; Druckmann, Shaul

    2016-01-01

    Neural activity maintains representations that bridge past and future events, often over many seconds. Network models can produce persistent and ramping activity, but the positive feedback that is critical for these slow dynamics can cause sensitivity to perturbations. Here we use electrophysiology and optogenetic perturbations in mouse premotor cortex to probe robustness of persistent neural representations during motor planning. Preparatory activity is remarkably robust to large-scale unilateral silencing: detailed neural dynamics that drive specific future movements were quickly and selectively restored by the network. Selectivity did not recover after bilateral silencing of premotor cortex. Perturbations to one hemisphere are thus corrected by information from the other hemisphere. Corpus callosum bisections demonstrated that premotor cortex hemispheres can maintain preparatory activity independently. Redundancy across selectively coupled modules, as we observed in premotor cortex, is a hallmark of robust control systems. Network models incorporating these principles show robustness that is consistent with data. PMID:27074502

  9. Robust Nonlinear Neural Codes

    NASA Astrophysics Data System (ADS)

    Yang, Qianli; Pitkow, Xaq

    2015-03-01

    Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.

  10. The robustness of complex networks

    NASA Astrophysics Data System (ADS)

    Albert, Reka

    2002-03-01

    Many complex networks display a surprising degree of tolerance against errors. For example, organisms and ecosystems exhibit remarkable robustness to large variations in temperature, moisture, and nutrients, and communication networks continue to function despite local failures. This presentation will explore the effects of the network topology on its robust functioning. First, we will consider the topological integrity of several networks under node disruption. Then we will focus on the functional robustness of biological signaling networks, and the decisive role played by the network topology in this robustness.

  11. Reticence, Accuracy and Efficacy

    NASA Astrophysics Data System (ADS)

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

    James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned of the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both positions miss an important point: reticence is a matter not only of style but of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate-related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  12. Robust image segmentation using local robust statistics and correntropy-based K-means clustering

    NASA Astrophysics Data System (ADS)

    Huang, Chencheng; Zeng, Li

    2015-03-01

    Segmenting real-world images with intensity inhomogeneity, such as magnetic resonance (MR) and computed tomography (CT) images, is an important task. In practice, such images are often corrupted by noise, which makes them difficult to segment with traditional level-set-based segmentation models. In this paper, we propose a robust level set image segmentation model that combines local and global fitting energies to segment noisy images. In the proposed model, the local fitting energy is based on the local robust statistics (LRS) of the input image, which efficiently reduces the effect of noise, and the global fitting energy uses the correntropy-based K-means (CK) method, which adaptively emphasizes samples that are close to their corresponding cluster centers. By integrating global information with local robust statistics, the proposed model can efficiently segment images with intensity inhomogeneity and noise. A level set regularization term is used to avoid re-initialization procedures during curve evolution, and a Gaussian filter is used to keep the level set smooth in the curve evolution process. The proposed model is first presented as a two-phase model and then extended to a multi-phase one. Experimental results on synthetic and real images show the advantages of our model in terms of accuracy and robustness to noise.
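
    The correntropy-based K-means step described above can be sketched as a weighted K-means in which each sample's pull on its cluster centre is a Gaussian kernel of its distance to that centre, so outliers are suppressed. The 1-D setting, initial centres and kernel width below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def ck_means(data, centers, sigma=1.0, iters=20):
    """Correntropy-style K-means sketch on 1-D data.

    Standard nearest-centre assignment, but the centre update is a
    Gaussian-kernel-weighted mean, so samples near the centre dominate
    and far-away outliers barely move it.
    """
    centers = list(centers)
    k = len(centers)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:                               # assignment step
            j = min(range(k), key=lambda j: abs(x - centers[j]))
            clusters[j].append(x)
        for j, members in enumerate(clusters):       # weighted update step
            if not members:
                continue
            w = [math.exp(-(x - centers[j]) ** 2 / (2 * sigma ** 2))
                 for x in members]
            centers[j] = sum(wi * x for wi, x in zip(w, members)) / sum(w)
    return centers
```

    With an outlier at 8.0 attached to the cluster around 2.1, an ordinary mean would be dragged to about 3.6, while the kernel-weighted update leaves the centre near 2.1.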

  13. Accurate and occlusion-robust multi-view stereo

    NASA Astrophysics Data System (ADS)

    Zhu, Zhaokun; Stamatopoulos, Christos; Fraser, Clive S.

    2015-11-01

    This paper proposes an accurate multi-view stereo method for image-based 3D reconstruction that features robustness in the presence of occlusions. The new method offers improvements in dealing with two fundamental image matching problems. The first concerns the selection of the support window model, while the second centers upon accurate visibility estimation for each pixel. The support window model is based on an approximate 3D support plane described by a depth and two per-pixel depth offsets. For the visibility estimation, the multi-view constraint is initially relaxed by generating separate support plane maps for each support image using a modified PatchMatch algorithm. Then the most likely visible support image, which represents the minimum visibility of each pixel, is extracted via a discrete Markov Random Field model and it is further augmented by parameter clustering. Once the visibility is estimated, multi-view optimization taking into account all redundant observations is conducted to achieve optimal accuracy in the 3D surface generation for both depth and surface normal estimates. Finally, multi-view consistency is utilized to eliminate any remaining observational outliers. The proposed method is experimentally evaluated using well-known Middlebury datasets, and results obtained demonstrate that it is amongst the most accurate of the methods thus far reported via the Middlebury MVS website. Moreover, the new method exhibits a high completeness rate.

  14. Step Detection Robust against the Dynamics of Smartphones

    PubMed Central

    Lee, Hwan-hee; Choi, Suji; Lee, Myeong-jin

    2015-01-01

    A novel algorithm is proposed for robust step detection irrespective of step mode and device pose in smartphone usage environments. The dynamics of smartphones are decoupled into a peak-valley relationship with adaptive magnitude and temporal thresholds. For extracted peaks and valleys in the magnitude of acceleration, a step is defined as consisting of a peak and its adjacent valley. Adaptive magnitude thresholds consisting of step average and step deviation are applied to suppress pseudo peaks or valleys that mostly occur during the transition among step modes or device poses. Adaptive temporal thresholds are applied to time intervals between peaks or valleys to consider the time-varying pace of human walking or running for the correct selection of peaks or valleys. From the experimental results, it can be seen that the proposed step detection algorithm shows more than 98.6% average accuracy for any combination of step mode and device pose and outperforms state-of-the-art algorithms. PMID:26516857
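
    The peak-valley pairing described above can be sketched as follows. Fixed `min_amp` and `min_gap` thresholds stand in for the paper's adaptive magnitude and temporal thresholds, and the signal values are invented for illustration.

```python
def count_steps(mag, min_amp=1.0, min_gap=2):
    """Count steps as peak/adjacent-valley pairs in an acceleration-
    magnitude signal.

    min_amp rejects pseudo peaks whose peak-to-valley amplitude is too
    small; min_gap enforces a minimum interval (in samples) between
    successive extrema, suppressing jitter.
    """
    extrema = []  # list of (index, 'peak' | 'valley')
    for i in range(1, len(mag) - 1):
        if mag[i] > mag[i - 1] and mag[i] >= mag[i + 1]:
            kind = "peak"
        elif mag[i] < mag[i - 1] and mag[i] <= mag[i + 1]:
            kind = "valley"
        else:
            continue
        if extrema and i - extrema[-1][0] < min_gap:
            continue  # too close in time: treat as jitter
        extrema.append((i, kind))
    steps = 0
    for (i, a), (j, b) in zip(extrema, extrema[1:]):
        if a == "peak" and b == "valley" and mag[i] - mag[j] >= min_amp:
            steps += 1
    return steps
```

    In the full algorithm both thresholds would adapt to the running step average and deviation, which is what lets it survive transitions between step modes and device poses.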

  16. Automatic Mode Transition Enabled Robust Triboelectric Nanogenerators.

    PubMed

    Chen, Jun; Yang, Jin; Guo, Hengyu; Li, Zhaoling; Zheng, Li; Su, Yuanjie; Wen, Zhen; Fan, Xing; Wang, Zhong Lin

    2015-12-22

    Although the triboelectric nanogenerator (TENG) has been proven to be a renewable and effective route for ambient energy harvesting, its robustness remains a great challenge due to the requirement of surface friction for a decent output, especially for the in-plane sliding mode TENG. Here, we present a rationally designed TENG for achieving a high output performance without compromising the device robustness by, first, converting the in-plane sliding electrification into a contact separation working mode and, second, creating an automatic transition between a contact working state and a noncontact working state. The magnet-assisted automatic transition triboelectric nanogenerator (AT-TENG) was demonstrated to effectively harness various ambient rotational motions to generate electricity with greatly improved device robustness. At a wind speed of 6.5 m/s or a water flow rate of 5.5 L/min, the harvested energy was capable of lighting up 24 spot lights (0.6 W each) simultaneously and charging a capacitor to greater than 120 V in 60 s. Furthermore, due to the rational structural design and unique output characteristics, the AT-TENG was not only capable of harvesting energy from natural bicycling and car motion but also acting as a self-powered speedometer with ultrahigh accuracy. Given such features as structural simplicity, easy fabrication, low cost, wide applicability even in a harsh environment, and high output performance with superior device robustness, the AT-TENG renders an effective and practical approach for ambient mechanical energy harvesting as well as self-powered active sensing. PMID:26529374

  17. Robust control algorithms for Mars aerobraking

    NASA Technical Reports Server (NTRS)

    Shipley, Buford W., Jr.; Ward, Donald T.

    1992-01-01

    Four atmospheric guidance concepts have been adapted to control an interplanetary vehicle aerobraking in the Martian atmosphere. The first two offer improvements to the Analytic Predictor Corrector (APC) to increase its robustness to density variations. The second two are variations of a new Liapunov tracking exit phase algorithm, developed to guide the vehicle along a reference trajectory. These four new controllers are tested using a six-degree-of-freedom computer simulation to evaluate their robustness. MARSGRAM is used to develop realistic atmospheres for the study. When square-wave density pulses perturb the atmosphere, all four controllers are successful. The algorithms are tested against atmospheres where the inbound and outbound density functions are different. Square-wave density pulses are again used, but only for the outbound leg of the trajectory. Additionally, sine waves are used to perturb the density function. The new algorithms are found to be more robust than any previously tested, and a Liapunov controller is selected as the most robust of the control algorithms examined.

  18. Evolution under fluctuating environments explains observed robustness in metabolic networks.

    PubMed

    Soyer, Orkun S; Pfeiffer, Thomas

    2010-08-26

    A high level of robustness against gene deletion is observed in many organisms. However, it is still not clear which biochemical features underlie this robustness and how these are acquired during evolution. One hypothesis, specific to metabolic networks, is that robustness emerges as a byproduct of selection for biomass production in different environments. To test this hypothesis we performed evolutionary simulations of metabolic networks under stable and fluctuating environments. We find that networks evolved under the latter scenario can better tolerate single gene deletion in specific environments. Such robustness is underpinned by an increased number of independent fluxes and multifunctional enzymes in the evolved networks. Observed robustness in networks evolved under fluctuating environments was "apparent," in the sense that it decreased significantly as we tested the effects of gene deletions under all environments experienced during evolution. Furthermore, when we continued evolution of these networks under a stable environment, we found that any robustness they had acquired was completely lost. These findings provide evidence that evolution under fluctuating environments can account for the observed robustness in metabolic networks. Further, they suggest that organisms living under stable environments should display lower robustness in their metabolic networks, and that robustness should decrease upon switching to more stable environments.

  19. High accuracy fuel flowmeter

    NASA Technical Reports Server (NTRS)

    1986-01-01

    All three flowmeter concepts (vortex, dual turbine, and angular momentum) were subjected to experimental and analytical investigation to determine their potential prototype performance. The three concepts were given a comprehensive rating: eight parameters of performance were evaluated on a zero-to-ten scale, weighted, and summed. The relative ratings of the vortex, dual turbine, and angular momentum flowmeters are 0.71, 1.00, and 0.95, respectively. The dual turbine flowmeter concept was selected as the primary candidate and the angular momentum flowmeter as the secondary candidate for prototype development and evaluation.

  20. Robust reflective pupil slicing technology

    NASA Astrophysics Data System (ADS)

    Meade, Jeffrey T.; Behr, Bradford B.; Cenko, Andrew T.; Hajian, Arsen R.

    2014-07-01

    Tornado Spectral Systems (TSS) has developed the High Throughput Virtual Slit (HTVS™), a robust all-reflective pupil slicing technology capable of replacing the slit in research-, commercial- and MIL-SPEC-grade spectrometer systems. In the simplest configuration, the HTVS allows optical designers to remove the lossy slit from point-source spectrometers and widen the input slit of long-slit spectrometers, greatly increasing throughput without loss of spectral resolution or cross-dispersion information. The HTVS works by transferring etendue between image plane axes, but operating in the pupil domain rather than at a focal plane. While useful for other technologies, this is especially relevant for spectroscopic applications: it performs the same spectral narrowing as a slit without throwing away light at the slit aperture. HTVS can be implemented in all-reflective designs and only requires a small number of reflections for significant spectral resolution enhancement, so HTVS systems can be efficiently implemented in most wavelength regions. The etendue-shifting operation also provides smooth scaling with input spot/image size without requiring reconfiguration for different targets (such as different seeing disk diameters or different fiber core sizes). Like most slicing technologies, HTVS provides throughput increases of several times without resolution loss over equivalent slit-based designs. HTVS technology enables robust slit replacement in point-source spectrometer systems. By virtue of pupil-space operation this technology has several advantages over comparable image-space slicer technology, including the ability to adapt gracefully and linearly to changing source size and better vertical packing of the flux distribution. Additionally, this technology can be implemented with large slicing factors in both fast and slow beams and can easily scale from large, room-sized spectrometers through to small, telescope-mounted devices. Finally, this same technology is directly

  1. Diagnostic Accuracy of Xpert Test in Tuberculosis Detection: A Systematic Review and Meta-analysis

    PubMed Central

    Kaur, Ravdeep; Kachroo, Kavita; Sharma, Jitendar Kumar; Vatturi, Satyanarayana Murthy; Dang, Amit

    2016-01-01

    Background: The World Health Organization (WHO) recommends the use of the Xpert MTB/RIF assay for rapid diagnosis of tuberculosis (TB) and detection of rifampicin resistance. This systematic review was conducted to assess the diagnostic accuracy and cost-effectiveness of the Xpert MTB/RIF assay. Methods: A systematic literature search was conducted in the following databases: Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, MEDLINE, PUBMED, Scopus, Science Direct and Google Scholar, for relevant studies published between 2010 and December 2014. Studies included in the retrieved systematic reviews were accessed separately and used for analysis. Selection of studies, data extraction and assessment of the quality of included studies were performed independently by two reviewers. Studies evaluating the diagnostic accuracy of the Xpert MTB/RIF assay among adult or predominantly adult patients (≥14 years), presumed to have pulmonary TB with or without HIV infection, were included in the review. Studies that assessed the diagnostic accuracy of the Xpert MTB/RIF assay using sputum and other respiratory specimens were also included. Results: The included studies had a low risk of any form of bias, showing that the findings are of high scientific validity and credibility. Quantitative analysis of the 37 included studies shows that Xpert MTB/RIF is an accurate diagnostic test for TB and for the detection of rifampicin resistance. Conclusion: The Xpert MTB/RIF assay is a robust, sensitive and specific test for accurate diagnosis of tuberculosis compared with conventional tests such as culture and microscopic examination. PMID:27013842

  2. Robust, Optimal Subsonic Airfoil Shapes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2014-01-01

    A method has been developed to create an airfoil robust enough to operate satisfactorily in different environments. This method determines a robust, optimal, subsonic airfoil shape, beginning with an arbitrary initial airfoil shape, and imposes the necessary constraints on the design. Also, this method is flexible and extendible to a larger class of requirements and changes in constraints imposed.

  3. Robust Understanding of Statistical Variation

    ERIC Educational Resources Information Center

    Peters, Susan A.

    2011-01-01

    This paper presents a framework that captures the complexity of reasoning about variation in ways that are indicative of robust understanding and describes reasoning as a blend of design, data-centric, and modeling perspectives. Robust understanding is indicated by integrated reasoning about variation within each perspective and across…

  4. Facial symmetry in robust anthropometrics.

    PubMed

    Kalina, Jan

    2012-05-01

    Image analysis methods commonly used in forensic anthropology do not have desirable robustness properties, which can be ensured by robust statistical methods. In this paper, face localization in images is carried out by detecting symmetric areas in the images. Symmetry is measured between two neighboring rectangular areas using a new robust correlation coefficient, which down-weights regions of the face that violate the symmetry. Raw images of faces without the usual preliminary transformations are considered. The robust correlation coefficient, based on least weighted squares regression, yields very promising results even in the localization of faces that are not entirely symmetric. Standard methods of statistical machine learning are applied for comparison. The robust correlation analysis may also be applicable to other problems of forensic anthropology.
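
    The down-weighting idea can be illustrated with a small sketch: correlate two (mirrored) image halves while zeroing the weights of the pixels that violate symmetry most, in the spirit of least weighted squares. The trimming fraction, iteration scheme and 1-D pixel vectors are illustrative assumptions, not the paper's exact estimator.

```python
def robust_symmetry_corr(left, right_mirrored, trim=0.2, iters=3):
    """Weighted correlation between two image halves that down-weights
    (here: zeroes out) the trim fraction of pixels violating symmetry most.
    """
    n = len(left)
    w = [1.0] * n
    for _ in range(iters):
        sw = sum(w)
        mx = sum(wi * a for wi, a in zip(w, left)) / sw
        my = sum(wi * b for wi, b in zip(w, right_mirrored)) / sw
        cov = sum(wi * (a - mx) * (b - my)
                  for wi, a, b in zip(w, left, right_mirrored))
        vx = sum(wi * (a - mx) ** 2 for wi, a in zip(w, left))
        vy = sum(wi * (b - my) ** 2 for wi, b in zip(w, right_mirrored))
        r = cov / (vx * vy) ** 0.5
        # Re-derive weights: drop the pixels with the largest disagreement.
        order = sorted(range(n),
                       key=lambda i: -abs((left[i] - mx) - (right_mirrored[i] - my)))
        w = [1.0] * n
        for i in order[: int(trim * n)]:
            w[i] = 0.0
    return r
```

    On a symmetric face with a few corrupted pixels, the plain correlation (trim=0) can be driven negative while the trimmed version stays near 1, which is the behaviour the abstract attributes to the robust coefficient.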

  5. A Robust Biomarker

    NASA Technical Reports Server (NTRS)

    Westall, F.; Steele, A.; Toporski, J.; Walsh, M. M.; Allen, C. C.; Guidry, S.; McKay, D. S.; Gibson, E. K.; Chafetz, H. S.

    2000-01-01

    containing fossil biofilm, including the 3.5 b.y.-old carbonaceous cherts from South Africa and Australia. As a result of the unique compositional, structural and "mineralisable" properties of bacterial polymers and biofilms, we conclude that bacterial polymers and biofilms constitute a robust and reliable biomarker for life on Earth and could be a potential biomarker for extraterrestrial life.

  6. Robust and intelligent bearing estimation

    SciTech Connect

    Claassen, J.P.

    1998-07-01

    As the monitoring thresholds of global and regional networks are lowered, bearing estimates become more important to the processes which associate (sparse) detections and which locate events. Current methods of estimating bearings from observations by 3-component stations and arrays lack both accuracy and precision. Methods are required which will develop all the precision inherently available in the arrival, determine the measurability of the arrival, provide better estimates of the bias induced by the medium, permit estimates at lower SNRs, and provide physical insight into the effects of the medium on the estimates. Initial efforts have focused on 3-component stations since the precision is poorest there. An intelligent estimation process for 3-component stations has been developed and explored. The method, called SEE for Search, Estimate, and Evaluation, adaptively exploits all the inherent information in the arrival at every step of the process to achieve optimal results. In particular, the approach uses a consistent and robust mathematical framework to define the optimal time-frequency windows on which to make estimates, to make the bearing estimates themselves, and to extract metrics helpful in choosing the best estimate(s) or admitting that the bearing is immeasurable. The approach is conceptually superior to current methods, particularly those which rely on real-valued signals. The method has been evaluated to a considerable extent in a seismically active region and has demonstrated remarkable utility by providing not only the best estimates possible but also insight into the physical processes affecting the estimates. It has been shown, for example, that the best frequency at which to make an estimate seldom corresponds to the frequency having the best detection SNR, and sometimes the best time interval is not at the onset of the signal. The method is capable of measuring bearing dispersion, thereby extracting the bearing bias as a function of frequency.

  7. EOS mapping accuracy study

    NASA Technical Reports Server (NTRS)

    Forrest, R. B.; Eppes, T. A.; Ouellette, R. J.

    1973-01-01

    Studies were performed to evaluate various image positioning methods for possible use in the earth observatory satellite (EOS) program and other earth resource imaging satellite programs. The primary goal is the generation of geometrically corrected and registered images, positioned with respect to the earth's surface. The EOS sensors which were considered were the thematic mapper, the return beam vidicon camera, and the high resolution pointable imager. The image positioning methods evaluated consisted of various combinations of satellite data and ground control points. It was concluded that EOS attitude control system design must be considered as a part of the image positioning problem for EOS, along with image sensor design and ground image processing system design. Study results show that, with suitable efficiency for ground control point selection and matching activities during data processing, extensive reliance should be placed on use of ground control points for positioning the images obtained from EOS and similar programs.

  8. Demons deformable registration for CBCT-guided procedures in the head and neck: Convergence and accuracy

    SciTech Connect

    Nithiananthan, S.; Brock, K. K.; Daly, M. J.; Chan, H.; Irish, J. C.; Siewerdsen, J. H.

    2009-10-15

    Purpose: The accuracy and convergence behavior of a variant of the Demons deformable registration algorithm were investigated for use in cone-beam CT (CBCT)-guided procedures of the head and neck. Online use of deformable registration for guidance of therapeutic procedures such as image-guided surgery or radiation therapy places trade-offs on accuracy and computational expense. This work describes a convergence criterion for Demons registration developed to balance these demands; the accuracy of a multiscale Demons implementation using this convergence criterion is quantified in CBCT images of the head and neck. Methods: Using an open-source "symmetric" Demons registration algorithm, a convergence criterion based on the change in the deformation field between iterations was developed to advance among multiple levels of a multiscale image pyramid in a manner that optimized accuracy and computation time. The convergence criterion was optimized in cadaver studies involving CBCT images acquired using a surgical C-arm prototype modified for 3D intraoperative imaging. CBCT-to-CBCT registration was performed and accuracy was quantified in terms of the normalized cross-correlation (NCC) and target registration error (TRE). The accuracy and robustness of the algorithm were then tested in clinical CBCT images of ten patients undergoing radiation therapy of the head and neck. Results: The cadaver model allowed optimization of the convergence factor and initial measurements of registration accuracy: Demons registration exhibited TRE = (0.8 ± 0.3) mm and NCC = 0.99 in the cadaveric head compared to TRE = (2.6 ± 1.0) mm and NCC = 0.93 with rigid registration. Similarly for the patient data, Demons registration gave mean TRE = (1.6 ± 0.9) mm compared to rigid registration TRE = (3.6 ± 1.9) mm, suggesting registration accuracy at or near the voxel size of the patient images (1 × 1 × 2 mm³).
The multiscale implementation based on optimal convergence criteria completed registration in
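The change-in-deformation-field criterion described above can be sketched in toy form. This is a minimal sketch, not the paper's implementation: the per-iteration update here is a hypothetical stand-in (simple damping of a synthetic 1D field) in place of the actual intensity-driven Demons forces and smoothing.

```python
# Toy multiscale loop: advance to the next pyramid level once the mean
# per-iteration change of the deformation field falls below a tolerance.

def mean_abs_change(prev, curr):
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def register_multiscale(field, levels=3, tol=1e-3, max_iter=100):
    iters_per_level = []
    for level in range(levels):
        for it in range(1, max_iter + 1):
            prev = field[:]
            # Hypothetical update; real Demons would compute image-driven
            # forces and regularize (smooth) the field at this step.
            field = [0.5 * f for f in field]
            if mean_abs_change(prev, field) < tol:
                break  # converged at this pyramid level
        iters_per_level.append(it)
    return field, iters_per_level
```

The point of the criterion is visible in the iteration counts: coarse levels absorb most of the work, and subsequent levels converge almost immediately.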

  9. Combining data fusion with multiresolution analysis for improving the classification accuracy of uterine EMG signals

    NASA Astrophysics Data System (ADS)

    Moslem, Bassam; Diab, Mohamad; Khalil, Mohamad; Marque, Catherine

    2012-12-01

    Multisensor data fusion is a powerful solution for solving difficult pattern recognition problems such as the classification of bioelectrical signals. It is the process of combining information from different sensors to provide more stable and more robust classification decisions. We combine here data fusion with multiresolution analysis based on the wavelet packet transform (WPT) in order to classify real uterine electromyogram (EMG) signals recorded by 16 electrodes. Herein, the data fusion is done at the decision level by using a weighted majority voting (WMV) rule. On the other hand, the WPT is used to achieve significant enhancement in the classification performance of each channel by improving the discrimination power of the selected feature. We show that the proposed approach tested on our recorded data can improve the recognition accuracy in labor prediction and has a competitive and promising performance.
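Decision-level fusion with a weighted majority voting rule, as used above, can be sketched in a few lines. This is a generic WMV illustration, not the paper's exact scheme; the per-channel weights here are hypothetical (e.g., each channel's validation accuracy).

```python
from collections import defaultdict

def weighted_majority_vote(decisions, weights):
    """Fuse per-channel class decisions: each channel votes for its
    predicted label with its assigned weight; return the heaviest label."""
    scores = defaultdict(float)
    for label, w in zip(decisions, weights):
        scores[label] += w
    return max(scores, key=scores.get)
```

With 16 channels each emitting a label, the fused decision is the label with the greatest total weight, so reliable channels dominate unreliable ones.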

  10. Noise Robust Speech Recognition Applied to Voice-Driven Wheelchair

    NASA Astrophysics Data System (ADS)

    Sasou, Akira; Kojima, Hiroaki

    2009-12-01

    Conventional voice-driven wheelchairs usually employ headset microphones that are capable of achieving sufficient recognition accuracy, even in the presence of surrounding noise. However, such interfaces require users to wear sensors such as a headset microphone, which can be an impediment, especially for the hand disabled. Conversely, it is also well known that speech recognition accuracy drastically degrades when the microphone is placed far from the user. In this paper, we develop a noise robust speech recognition system for a voice-driven wheelchair that achieves almost the same recognition accuracy as a headset microphone without requiring the user to wear sensors. We verified the effectiveness of our system in experiments in different environments, confirming this level of accuracy.

  11. Multi-stage learning for robust lung segmentation in challenging CT volumes.

    PubMed

    Sofka, Michal; Wetzl, Jens; Birkbeck, Neil; Zhang, Jingdan; Kohlberger, Timo; Kaftan, Jens; Declerck, Jérôme; Zhou, S Kevin

    2011-01-01

    Simple algorithms for segmenting healthy lung parenchyma in CT are unable to deal with the high density tissue common in pulmonary diseases. To overcome this problem, we propose a multi-stage learning-based approach that combines anatomical information to predict an initialization of a statistical shape model of the lungs. The initialization first detects the carina of the trachea, and uses this to detect a set of automatically selected stable landmarks on regions near the lung (e.g., ribs, spine). These landmarks are used to align the shape model, which is then refined through boundary detection to obtain fine-grained segmentation. Robustness is obtained through hierarchical use of discriminative classifiers that are trained on a range of manually annotated data of diseased and healthy lungs. We demonstrate fast detection (35 s per volume on average) and segmentation with 2 mm accuracy on challenging data.

  12. Robust geostatistical analysis of spatial data

    NASA Astrophysics Data System (ADS)

    Papritz, A.; Künsch, H. R.; Schwierz, C.; Stahel, W. A.

    2012-04-01

    Most geostatistical software tools rely on non-robust algorithms. This is unfortunate, because outlying observations are rather the rule than the exception, in particular in environmental data sets. Outlying observations may result from errors (e.g. in data transcription) or from local perturbations in the processes that are responsible for a given pattern of spatial variation. As an example, the spatial distribution of some trace metal in the soils of a region may be distorted by emissions of local anthropogenic sources. Outliers affect the modelling of the large-scale spatial variation, the so-called external drift or trend, the estimation of the spatial dependence of the residual variation and the predictions by kriging. Identifying outliers manually is cumbersome and requires expertise because one needs parameter estimates to decide which observation is a potential outlier. Moreover, inference after the rejection of some observations is problematic. A better approach is to use robust algorithms that automatically prevent outlying observations from having undue influence. Former studies on robust geostatistics focused on robust estimation of the sample variogram and ordinary kriging without external drift. Furthermore, Richardson and Welsh (1995) [2] proposed a robustified version of (restricted) maximum likelihood ([RE]ML) estimation for the variance components of a linear mixed model, which was later used by Marchant and Lark (2007) [1] for robust REML estimation of the variogram. We propose here a novel method for robust REML estimation of the variogram of a Gaussian random field that is possibly contaminated by independent errors from a long-tailed distribution. It is based on robustification of the estimating equations for Gaussian REML estimation.
Besides robust estimates of the parameters of the external drift and of the variogram, the method also provides standard errors for the estimated parameters, robustified kriging predictions at both sampled

  13. The empirical accuracy of uncertain inference models

    NASA Technical Reports Server (NTRS)

    Vaughan, David S.; Yadrick, Robert M.; Perrin, Bruce M.; Wise, Ben P.

    1987-01-01

    Uncertainty is a pervasive feature of the domains in which expert systems are designed to function. Research designed to test uncertain inference methods for accuracy and robustness, in accordance with standard engineering practice, is reviewed. Several studies were conducted to assess how well various methods perform on problems constructed so that correct answers are known, and to find out what underlying features of a problem cause strong or weak performance. For each method studied, situations were identified in which performance deteriorates dramatically. Over a broad range of problems, some well known methods do only about as well as a simple linear regression model, and often much worse than a simple independence probability model. The results indicate that some commercially available expert system shells should be used with caution, because the uncertain inference models that they implement can yield rather inaccurate results.

  14. RSRE: RNA structural robustness evaluator.

    PubMed

    Shu, Wenjie; Bo, Xiaochen; Zheng, Zhiqiang; Wang, Shengqi

    2007-07-01

    Biological robustness, defined as the ability to maintain stable functioning in the face of various perturbations, is an important and fundamental topic in current biology, and has become a focus of numerous studies in recent years. Although structural robustness has been explored in several types of RNA molecules, the origins of robustness are still controversial. Computational analysis results are needed to make up for the lack of evidence of robustness in natural biological systems. The RNA structural robustness evaluator (RSRE) web server presented here provides a freely available online tool to quantitatively evaluate the structural robustness of RNA based on the widely accepted definition of neutrality. Several classical structure comparison methods are employed; five randomization methods are implemented to generate control sequences; sub-optimal predicted structures can be optionally utilized to mitigate the uncertainty of secondary structure prediction. With a user-friendly interface, the web application is easy to use. Intuitive illustrations are provided along with the original computational results to facilitate analysis. The RSRE will be helpful in the wide exploration of RNA structural robustness and will catalyze our understanding of RNA evolution. The RSRE web server is freely available at http://biosrv1.bmi.ac.cn/RSRE/ or http://biotech.bmi.ac.cn/RSRE/.

  15. Landsat classification accuracy assessment procedures

    USGS Publications Warehouse

    Mead, R. R.; Szajgin, John

    1982-01-01

    A working conference was held in Sioux Falls, South Dakota, 12-14 November 1980, dealing with Landsat classification accuracy assessment procedures. Thirteen formal presentations were made on three general topics: (1) sampling procedures, (2) statistical analysis techniques, and (3) examples of projects which included accuracy assessment and the associated costs, logistical problems, and value of the accuracy data to the remote sensing specialist and the resource manager. Nearly twenty conference attendees participated in two discussion sessions addressing various issues associated with accuracy assessment. This paper presents an account of the accomplishments of the conference.

  16. Discrimination networks for maximum selection.

    PubMed

    Jain, Brijnesh J; Wysotzki, Fritz

    2004-01-01

    We construct a novel discrimination network using differentiating units for maximum selection. In contrast to traditional competitive architectures like MAXNET the discrimination network does not only signal the winning unit, but also provides information about its evidence. In particular, we show that a discrimination network converges to a stable state within finite time and derive three characteristics: intensity normalization (P1), contrast enhancement (P2), and evidential response (P3). In order to improve the accuracy of the evidential response we incorporate distributed redundancy into the network. This leads to a system which is not only robust against failure of single units and noisy data, but also enables us to sharpen the focus on the problem given in terms of a more accurate evidential response. The proposed discrimination network can be regarded as a connectionist model for competitive learning by evidence.

  17. Pervasive robustness in biological systems.

    PubMed

    Félix, Marie-Anne; Barkoulas, Michalis

    2015-08-01

    Robustness is characterized by the invariant expression of a phenotype in the face of a genetic and/or environmental perturbation. Although phenotypic variance is a central measure in the mapping of the genotype and environment to the phenotype in quantitative evolutionary genetics, robustness is also a key feature in systems biology, resulting from nonlinearities in quantitative relationships between upstream and downstream components. In this Review, we provide a synthesis of these two lines of investigation, converging on understanding how variation propagates across biological systems. We critically assess the recent proliferation of studies identifying robustness-conferring genes in the context of the nonlinearity in biological systems. PMID:26184598

  18. Robustness of airline route networks

    NASA Astrophysics Data System (ADS)

    Lordan, Oriol; Sallan, Jose M.; Escorihuela, Nuria; Gonzalez-Prieto, David

    2016-03-01

    Airlines shape their route network by defining their routes through supply and demand considerations, paying little attention to network performance indicators, such as network robustness. However, the collapse of an airline network can produce high financial costs for the airline and all its geographical area of influence. The aim of this study is to analyze the topology and robustness of the route networks of airlines following Low Cost Carrier (LCC) and Full Service Carrier (FSC) business models. Results show that FSC hubs are more central than LCC bases in their route network. As a result, LCC route networks are more robust than FSC networks.
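Robustness of the kind compared above is often quantified by tracking the giant-component fraction of the network as its most connected nodes are removed. A minimal sketch under that assumption (plain adjacency sets and toy networks, not real airline data or the paper's exact metric):

```python
def giant_component_fraction(adj):
    """Fraction of nodes in the largest connected component (BFS/DFS)."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        stack, size = [start], 0
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best / len(adj)

def remove_highest_degree(adj):
    """Targeted attack: delete the best-connected node and its edges."""
    hub = max(adj, key=lambda u: len(adj[u]))
    return {u: nbrs - {hub} for u, nbrs in adj.items() if u != hub}
```

A hub-and-spoke (FSC-like) network fragments after a single targeted removal, while a ring of point-to-point routes (more LCC-like) stays connected, which is the intuition behind the study's finding.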

  19. Selection Intensity in Genetic Algorithms with Generation Gaps

    SciTech Connect

    Cantu-Paz, E.

    2000-01-19

    This paper presents calculations of the selection intensity of common selection and replacement methods used in genetic algorithms (GAs) with generation gaps. The selection intensity measures the increase of the average fitness of the population after selection, and it can be used to predict the average fitness of the population at each iteration as well as the number of steps until the population converges to a unique solution. In addition, the theory explains the fast convergence of some algorithms with small generation gaps. The accuracy of the calculations was verified experimentally with a simple test function. The results of this study facilitate comparisons between different algorithms, and provide a tool to adjust the selection pressure, which is indispensable to obtain robust algorithms.
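Selection intensity, the standardized gain in mean fitness after selection, can be estimated empirically. The binary-tournament example below is an illustrative sketch, not the paper's calculations; for normally distributed fitness the theoretical intensity of a binary tournament is about 0.56.

```python
import random
import statistics

def selection_intensity(before, after):
    """(mean fitness after selection - mean before) / std-dev before."""
    return ((statistics.mean(after) - statistics.mean(before))
            / statistics.pstdev(before))

def tournament_select(fitness, k=2, rng=None):
    """k-tournament selection: repeatedly sample k individuals
    and keep the fittest, until a new population is filled."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    return [max(rng.sample(fitness, k)) for _ in fitness]
```

Comparing the measured intensity of different selection/replacement schemes is exactly the kind of tool the paper uses to predict convergence speed.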

  20. Building robust conservation plans.

    PubMed

    Visconti, Piero; Joppa, Lucas

    2015-04-01

    Systematic conservation planning optimizes trade-offs between biodiversity conservation and human activities by accounting for socioeconomic costs while aiming to achieve prescribed conservation objectives. However, the most cost-efficient conservation plan can be very dissimilar to any other plan achieving the set of conservation objectives. This is problematic under conditions of implementation uncertainty (e.g., if all or part of the plan becomes unattainable). We determined through simulations of parallel implementation of conservation plans and habitat loss the conditions under which optimal plans have limited chances of implementation and where implementation attempts would fail to meet objectives. We then devised a new, flexible method for identifying conservation priorities and scheduling conservation actions. This method entails generating a number of alternative plans, calculating the similarity in site composition among all plans, and selecting the plan with the highest density of neighboring plans in similarity space. We compared our method with the classic method that maximizes cost efficiency with synthetic and real data sets. When implementation was uncertain, a common reality, our method provided a higher likelihood of achieving conservation targets. We found that χ, a measure of the shortfall in objectives achieved by a conservation plan if the plan could not be implemented entirely, was the main factor determining the relative performance of a flexibility-enhanced approach to conservation prioritization. Our findings should help planning authorities prioritize conservation efforts in the face of uncertainty about the future condition and availability of sites.

  1. On preserving robustness-false alarm tradeoff in media hashing

    NASA Astrophysics Data System (ADS)

    Roy, S.; Zhu, X.; Yuan, J.; Chang, E.-C.

    2007-01-01

    This paper discusses one of the important issues in generating a robust media hash. Robustness of a media hashing algorithm is primarily determined by three factors: (1) the robustness-false alarm tradeoff achieved by the chosen feature representation, (2) the accuracy of the bit extraction step and (3) the distance measure used to measure similarity (dissimilarity) between two hashes. The robustness-false alarm tradeoff in feature space is measured by a similarity (dissimilarity) measure and it defines a limit on the performance of the hashing algorithm. The distance measure used to compute the distance between the hashes determines how far this tradeoff in the feature space is preserved through the bit extraction step. Hence the bit extraction step is crucial in defining the robustness of a hashing algorithm. Although this is widely recognized as an important requirement, to our knowledge there is no work in the existing literature that evaluates a hashing algorithm by its effectiveness in improving this tradeoff relative to other methods. This paper specifically demonstrates the kind of robustness-false alarm tradeoff achieved by existing methods and proposes a method for hashing that clearly improves this tradeoff.

  2. The efficacy of bedside chest ultrasound: from accuracy to outcomes.

    PubMed

    Hew, Mark; Tay, Tunn Ren

    2016-09-01

    For many respiratory physicians, point-of-care chest ultrasound is now an integral part of clinical practice. The diagnostic accuracy of ultrasound to detect abnormalities of the pleura, the lung parenchyma and the thoracic musculoskeletal system is well described. However, the efficacy of a test extends beyond just diagnostic accuracy. The true value of a test depends on the degree to which diagnostic accuracy efficacy influences decision-making efficacy, and the subsequent extent to which this impacts health outcome efficacy. We therefore reviewed the demonstrable levels of test efficacy for bedside ultrasound of the pleura, lung parenchyma and thoracic musculoskeletal system. For bedside ultrasound of the pleura, there is evidence supporting diagnostic accuracy efficacy, decision-making efficacy and health outcome efficacy, predominantly in guiding pleural interventions. For the lung parenchyma, chest ultrasound has an impact on diagnostic accuracy and decision-making for patients presenting with acute respiratory failure or breathlessness, but there are no data as yet on actual health outcomes. For ultrasound of the thoracic musculoskeletal system, there is robust evidence only for diagnostic accuracy efficacy. We therefore outline avenues to further validate bedside chest ultrasound beyond diagnostic accuracy, with an emphasis on confirming enhanced health outcomes. PMID:27581823

  3. Maximum Correntropy Criterion for Robust Face Recognition.

    PubMed

    He, Ran; Zheng, Wei-Shi; Hu, Bao-Gang

    2011-08-01

    In this paper, we present a sparse correntropy framework for computing robust sparse representations of face images for recognition. Compared with the state-of-the-art l1-norm-based sparse representation classifier (SRC), which assumes that noise also has a sparse representation, our sparse algorithm is developed based on the maximum correntropy criterion, which is much more insensitive to outliers. In order to develop a more tractable and practical approach, we in particular impose a nonnegativity constraint on the variables in the maximum correntropy criterion and develop a half-quadratic optimization technique to approximately maximize the objective function in an alternating way, so that the complex optimization problem is reduced to learning a sparse representation through a weighted linear least squares problem with a nonnegativity constraint at each iteration. Our extensive experiments demonstrate that the proposed method is more robust and efficient in dealing with the occlusion and corruption problems in face recognition as compared to the related state-of-the-art methods. In particular, it shows that the proposed method can improve both recognition accuracy and receiver operator characteristic (ROC) curves, while the computational cost is much lower than the SRC algorithms.
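The half-quadratic idea, reweighting each sample by a Gaussian kernel of its residual so that outliers receive vanishing weight, can be illustrated on the simplest possible case: a robust location estimate. This toy is not the paper's face-recognition algorithm, just the same correntropy reweighting principle in one dimension.

```python
import math

def correntropy_mean(xs, sigma=1.0, iters=20):
    """Maximum-correntropy location estimate via half-quadratic
    reweighting: samples far from the current estimate get
    exponentially small weight, so outliers barely contribute."""
    m = sum(xs) / len(xs)  # start from the ordinary (outlier-sensitive) mean
    for _ in range(iters):
        w = [math.exp(-((x - m) ** 2) / (2 * sigma ** 2)) for x in xs]
        m = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return m
```

On the data [1.0, 1.1, 0.9, 1.05, 10.0] the ordinary mean is pulled to about 2.8 by the outlier, while the correntropy estimate stays near 1, which is the insensitivity to outliers claimed above.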

  4. Robust Optimization of Biological Protocols

    PubMed Central

    Flaherty, Patrick; Davis, Ronald W.

    2015-01-01

    When conducting high-throughput biological experiments, it is often necessary to develop a protocol that is both inexpensive and robust. Standard approaches are either not cost-effective or arrive at an optimized protocol that is sensitive to experimental variations. We show here a novel approach that directly minimizes the cost of the protocol while ensuring the protocol is robust to experimental variation. Our approach uses a risk-averse conditional value-at-risk criterion in a robust parameter design framework. We demonstrate this approach on a polymerase chain reaction protocol and show that our improved protocol is less expensive than the standard protocol and more robust than a protocol optimized without consideration of experimental variation. PMID:26417115

  5. Test Expectancy Affects Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2011-01-01

    Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…

  6. Robust Vehicle and Traffic Information Extraction for Highway Surveillance

    NASA Astrophysics Data System (ADS)

    Yoneyama, Akio; Yeh, Chia-Hung; Kuo, C.-C. Jay

    2005-12-01

    A robust vision-based traffic monitoring system for vehicle and traffic information extraction is developed in this research. It is challenging to maintain detection robustness at all times for a highway surveillance system. There are three major problems in detecting and tracking a vehicle: (1) the moving cast shadow effect, (2) the occlusion effect, and (3) nighttime detection. For moving cast shadow elimination, a 2D joint vehicle-shadow model is employed. For occlusion detection, a multiple-camera system is used to detect occlusion so as to extract the exact location of each vehicle. For vehicle nighttime detection, a rear-view monitoring technique is proposed to maintain tracking and detection accuracy. Furthermore, we propose a method to improve the accuracy of background extraction, which usually serves as the first step in any vehicle detection processing. Experimental results are given to demonstrate that the proposed techniques are effective and efficient for vision-based highway surveillance.

  7. Robust controls with structured perturbations

    NASA Technical Reports Server (NTRS)

    Keel, Leehyun

    1993-01-01

    This final report summarizes the recent results obtained by the principal investigator and his coworkers on the robust stability and control of systems containing parametric uncertainty. The starting point is a generalization of Kharitonov's theorem obtained in 1989; its extension to the multilinear case, the singling out of extremal stability subsets, and other ramifications now constitute an extensive and coherent theory of robust parametric stability, summarized in the results contained here.

  8. A paradigm shift in patterning foundation from frequency multiplication to edge-placement accuracy: a novel processing solution by selective etching and alternating-material self-aligned multiple patterning

    NASA Astrophysics Data System (ADS)

    Han, Ting; Liu, Hongyi; Chen, Yijian

    2016-03-01

    Overlay errors, cut/block and line/space critical-dimension (CD) variations are the major sources of the edge-placement errors (EPE) in the cut/block patterning processes of complementary lithography when IC technology is scaled down to sub-10nm half pitch (HP). In this paper, we propose and discuss a modular technology to reduce the EPE effect by combining selective etching and alternating-material (dual-material) self-aligned multiple patterning (altSAMP) processes. Preliminary results of altSAMP process development and material screening experiment are reported and possible material candidates are suggested. A geometrical cut-process yield model considering the joint effect of overlay errors, cut-hole and line CD variations is developed to analyze its patterning performance. In addition to the contributions from the above three process variations, the impacts of key control parameters (such as cut-hole overhang and etching selectivity) on the patterning yield are examined. It is shown that the optimized altSAMP patterning process significantly improves the patterning yield compared with conventional SAMP processes, especially when the half pitch of device patterns is driven down to 7 nm and below.

  9. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and secondary bullet defect or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as, the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied on bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only for the lowest angles of incidence the performance was better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction and to correct for systematic errors (accuracy) and to provide a value of the precision, by means of a confidence interval of the specific measurement. PMID:27044032
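For the ellipse method mentioned above, the angle of incidence is commonly recovered from the axes of the elliptical defect via arcsin(width/length). The sketch below shows only this basic geometric relation; the study's method variants and material-dependent corrections are more involved.

```python
import math

def ellipse_incidence_angle(width, length):
    """Estimate the angle of incidence (degrees, measured from the
    impacted surface) from the minor (width) and major (length) axes
    of an elliptical bullet defect."""
    if not 0 < width <= length:
        raise ValueError("expect 0 < width <= length")
    return math.degrees(math.asin(width / length))
```

A circular defect (width equal to length) indicates a perpendicular shot of 90 degrees, while a defect twice as long as it is wide indicates roughly 30 degrees, where, consistent with the results above, the method's precision degrades at shallow angles.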

  10. Designing robust control laws using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Marrison, Chris

    1994-01-01

    The purpose of this research is to create a method of finding practical, robust control laws. The robustness of a controller is judged by Stochastic Robustness metrics and the level of robustness is optimized by searching for design parameters that minimize a robustness cost function.

  11. How robust is a robust policy? A comparative analysis of alternative robustness metrics for supporting robust decision analysis.

    NASA Astrophysics Data System (ADS)

    Kwakkel, Jan; Haasnoot, Marjolijn

    2015-04-01

    In response to climate and socio-economic change, in various policy domains there is increasingly a call for robust plans or policies, that is, plans or policies that perform well in a very large range of plausible futures. In the literature, a wide range of alternative robustness metrics can be found. The relative merit of these alternative conceptualizations of robustness has, however, received less attention. Evidently, different robustness metrics can result in different plans or policies being adopted. This paper investigates the consequences of several robustness metrics on decision making, illustrated here by the design of a flood risk management plan. A fictitious case, inspired by a river reach in the Netherlands, is used. The performance of this system in terms of casualties, damages, and costs for flood and damage mitigation actions is explored using a time horizon of 100 years, and accounting for uncertainties pertaining to climate change and land use change. A set of candidate policy options is specified up front. This set of options includes dike raising, dike strengthening, creating more space for the river, and flood proof building and evacuation options. The overarching aim is to design an effective flood risk mitigation strategy that is designed from the outset to be adapted over time in response to how the future actually unfolds. To this end, the plan will be based on the dynamic adaptive policy pathway approach (Haasnoot, Kwakkel et al. 2013) being used in the Dutch Delta Program. The policy problem is formulated as a multi-objective robust optimization problem (Kwakkel, Haasnoot et al. 2014). We solve the multi-objective robust optimization problem using several alternative robustness metrics, including both satisficing robustness metrics and regret-based robustness metrics. Satisficing robustness metrics focus on the performance of candidate plans across a large ensemble of plausible futures. Regret based robustness metrics compare the
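The two families of metrics being compared can be made concrete: a satisficing metric counts the plausible futures in which a plan meets a performance threshold, while a regret metric scores a plan by its worst shortfall relative to the best available plan in each future. A minimal sketch with made-up numbers, not the case study's actual model:

```python
def satisficing_score(perf, threshold):
    """Fraction of plausible futures in which performance meets the threshold."""
    return sum(p >= threshold for p in perf) / len(perf)

def max_regret(perf_by_plan, plan):
    """Worst-case regret: largest gap to the best-performing plan
    in any single future (higher performance = better)."""
    futures = range(len(perf_by_plan[plan]))
    best = [max(perf_by_plan[p][s] for p in perf_by_plan) for s in futures]
    return max(best[s] - perf_by_plan[plan][s] for s in futures)
```

Plan A = [10, 9, 2] excels in two futures but fails badly in the third; plan B = [7, 7, 7] never excels but bounds its downside. A regret-based metric prefers B, illustrating how the choice of robustness metric can change which plan is adopted.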

  12. Sparse alignment for robust tensor learning.

    PubMed

    Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Zhao, Cairong; Sun, Mingming

    2014-10-01

    Multilinear/tensor extensions of manifold learning based algorithms have been widely used in computer vision and pattern recognition. This paper first provides a systematic analysis of the multilinear extensions for the most popular methods by using alignment techniques, thereby obtaining a general tensor alignment framework. From this framework, it is easy to show that the manifold learning based tensor learning methods are intrinsically different from the alignment techniques. Based on the alignment framework, a robust tensor learning method called sparse tensor alignment (STA) is then proposed for unsupervised tensor feature extraction. Different from the existing tensor learning methods, L1- and L2-norms are introduced to enhance the robustness in the alignment step of the STA. The advantage of the proposed technique is that the difficulty in selecting the size of the local neighborhood can be avoided in the manifold learning based tensor feature extraction algorithms. Although STA is an unsupervised learning method, the sparsity encodes the discriminative information in the alignment step and provides the robustness of STA. Extensive experiments on the well-known image databases as well as action and hand gesture databases by encoding object images as tensors demonstrate that the proposed STA algorithm gives the most competitive performance when compared with the tensor-based unsupervised learning methods. PMID:25291733

  13. Robustness of airline alliance route networks

    NASA Astrophysics Data System (ADS)

    Lordan, Oriol; Sallan, Jose M.; Simo, Pep; Gonzalez-Prieto, David

    2015-05-01

    The aim of this study is to analyze the robustness of the route networks of the three major airline alliances (i.e., Star Alliance, oneworld and SkyTeam). Firstly, a normalization of a multi-scale measure of vulnerability is proposed in order to perform the analysis in networks of different sizes, i.e., numbers of nodes. An alternative node selection criterion, based on network efficiency, is also proposed in order to study the robustness and vulnerability of such complex networks. Lastly, a new procedure - the inverted adaptive strategy - is presented to sort the nodes in order to anticipate network breakdown. The robustness of the three alliance networks is then analyzed with (1) a normalized multi-scale measure of vulnerability, (2) an adaptive strategy based on four different criteria and (3) an inverted adaptive strategy based on the efficiency criterion. The results show that Star Alliance has the most resilient route network, followed by SkyTeam and then oneworld. The inverted adaptive strategy based on the efficiency criterion - inverted efficiency - was also shown to break networks as quickly as the betweenness criterion, and in some cases even more quickly.
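
    An adaptive (re-ranking) attack of the kind described above can be sketched on a toy route network. The hub-and-spoke graph and the degree criterion below are illustrative stand-ins for the alliance networks and the four criteria used in the paper:

```python
from collections import deque

def giant_component_size(adj, removed):
    """Size of the largest connected component among surviving nodes."""
    alive = set(adj) - removed
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        size, queue = 0, deque([start])
        seen.add(start)
        while queue:
            node = queue.popleft()
            size += 1
            for nbr in adj[node]:
                if nbr in alive and nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        best = max(best, size)
    return best

def adaptive_attack(adj, criterion):
    """Adaptive strategy: re-rank surviving nodes by `criterion` after
    every removal; return the giant-component size after each step."""
    removed, sizes = set(), []
    while len(removed) < len(adj):
        alive = [n for n in adj if n not in removed]
        target = max(alive, key=lambda n: criterion(adj, n, removed))
        removed.add(target)
        sizes.append(giant_component_size(adj, removed))
    return sizes

def degree(adj, node, removed):
    return sum(1 for nbr in adj[node] if nbr not in removed)

# Toy hub-and-spoke route network: airport 0 is the hub.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
sizes = adaptive_attack(adj, degree)   # hub removed first -> instant breakdown
```

    Removing the hub first shatters the network immediately, which is why hub-dominated route networks are fragile under targeted adaptive attacks.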

  14. S/HIC: Robust Identification of Soft and Hard Sweeps Using Machine Learning

    PubMed Central

    Schrider, Daniel R.; Kern, Andrew D.

    2016-01-01

    Detecting the targets of adaptive natural selection from whole genome sequencing data is a central problem for population genetics. However, to date most methods have shown sub-optimal performance under realistic demographic scenarios. Moreover, over the past decade there has been a renewed interest in determining the importance of selection from standing variation in adaptation of natural populations, yet very few methods for inferring this model of adaptation at the genome scale have been introduced. Here we introduce a new method, S/HIC, which uses supervised machine learning to precisely infer the location of both hard and soft selective sweeps. We show that S/HIC has unrivaled accuracy for detecting sweeps under demographic histories that are relevant to human populations, and distinguishing sweeps from linked as well as neutrally evolving regions. Moreover, we show that S/HIC is uniquely robust among its competitors to model misspecification. Thus, even if the true demographic model of a population differs catastrophically from that specified by the user, S/HIC still retains impressive discriminatory power. Finally, we apply S/HIC to the case of resequencing data from human chromosome 18 in a European population sample, and demonstrate that we can reliably recover selective sweeps that have been identified earlier using less specific and sensitive methods. PMID:26977894

  15. Robust process design and springback compensation of a decklid inner

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaojing; Grimm, Peter; Carleer, Bart; Jin, Weimin; Liu, Gang; Cheng, Yingchao

    2013-12-01

    Springback compensation is one of the key topics in current die face engineering. The accuracy of the springback simulation and the robustness of the method planning and springback are considered the main factors influencing the effectiveness of springback compensation. In the present paper, the basic principles of springback compensation are presented first. These principles consist of an accurate full-cycle simulation with final validation settings; the robust process design and optimization are discussed in detail via an industrial example, a decklid inner. Moreover, an effective compensation strategy is put forward based on the analysis of springback, and simulation-based springback compensation is introduced in the process design phase. Finally, the verification and comparison in tryout and production are given, which verified that the methodology of robust springback compensation is effective during die development.

  16. Probabilistic collocation for simulation-based robust concept exploration

    NASA Astrophysics Data System (ADS)

    Rippel, Markus; Choi, Seung-Kyum; Allen, Janet K.; Mistree, Farrokh

    2012-08-01

    In the early stages of an engineering design process it is necessary to explore the design space to find a feasible range that satisfies design requirements. When robustness of the system is among the requirements, the robust concept exploration method can be used. In this method, a global metamodel, such as a global response surface of the design space, is used to evaluate robustness. However, for large design spaces, this is computationally expensive and may be relatively inaccurate for some local regions. In this article, a method is developed for successively generating local response models at points of interest as the design space is explored. This approach is based on the probabilistic collocation method. Although the focus of this article is on the method, it is demonstrated using an artificial performance function and a linear cellular alloy heat exchanger. For these problems, this approach substantially reduces computation time while maintaining accuracy.

  17. Highly Fluorinated Ir(III)-2,2':6',2″-Terpyridine-Phenylpyridine-X Complexes via Selective C-F Activation: Robust Photocatalysts for Solar Fuel Generation and Photoredox Catalysis.

    PubMed

    Porras, Jonathan A; Mills, Isaac N; Transue, Wesley J; Bernhard, Stefan

    2016-08-01

    A series of fluorinated Ir(III)-terpyridine-phenylpyridine-X (X = anionic monodentate ligand) complexes were synthesized by selective C-F activation, whereby perfluorinated phenylpyridines were readily complexed. The combination of fluorinated phenylpyridine ligands with an electron-rich tri-tert-butyl terpyridine ligand generates a "push-pull" force on the electrons upon excitation, imparting significant enhancements to the stability, electrochemical, and photophysical properties of the complexes. Application of the complexes as photosensitizers for photocatalytic generation of hydrogen from water and as redox photocatalysts for decarboxylative fluorination of several carboxylic acids showcases the performance of the complexes in highly coordinating solvents, in some cases exceeding that of the leading photosensitizers. Changes in the photophysical properties and the nature of the excited states are observed as the compounds increase in fluorination as well as upon exchange of the ancillary chloride ligand to a cyanide. These changes in the excited states have been corroborated using density functional theory modeling. PMID:27387149

  18. Utilization of highly robust and selective crosslinked polymeric ionic liquid-based sorbent coatings in direct-immersion solid-phase microextraction and high-performance liquid chromatography for determining polar organic pollutants in waters.

    PubMed

    Pacheco-Fernández, Idaira; Najafi, Ali; Pino, Verónica; Anderson, Jared L; Ayala, Juan H; Afonso, Ana M

    2016-09-01

    Several crosslinked polymeric ionic liquid (PIL)-based sorbent coatings of different nature were prepared by UV polymerization onto nitinol wires. They were evaluated in a direct-immersion solid-phase microextraction (DI-SPME) method in combination with high-performance liquid chromatography (HPLC) and diode array detection (DAD). The studied PIL coatings contained either vinyl alkyl or vinylbenzyl imidazolium-based (ViCnIm- or ViBCnIm-) IL monomers with different anions, as well as different dicationic IL crosslinkers. The analytical performance of these PIL-based SPME coatings was first evaluated for the extraction of a group of 10 different model analytes, including hydrocarbons and phenols, while exhaustively comparing the performance with commercial SPME fibers such as polydimethylsiloxane (PDMS), polyacrylate (PA) and polydimethylsiloxane/divinylbenzene (PDMS/DVB), and using all fibers under optimized conditions. Those fibers exhibiting a high selectivity for polar compounds were selected to carry out an analytical method for a group of 5 alkylphenols, including bisphenol-A (BPA) and nonylphenol (n-NP). Under optimum conditions, average relative recoveries of 108% and inter-day precision values (3 non-consecutive days) lower than 19% were obtained for a spiked level of 10 µg L(-1). Correlation coefficients for the overall method ranged between 0.990 and 0.999, and limits of detection were down to 1 µg L(-1). Tap water, river water, and bottled water were analyzed to evaluate matrix effects. Comparison with the PA fiber was also performed in terms of analytical performance. Partition coefficients (log Kfs) of the alkylphenols to the SPME coating varied from 1.69 to 2.45 for the most efficient PIL-based fiber, and from 1.58 to 2.30 for the PA fiber. These results agree with those obtained by the normalized calibration slopes, pointing out the affinity of these PILs-based coatings. PMID:27343586

  20. FTRAC--A robust fluoroscope tracking fiducial

    SciTech Connect

    Jain, Ameet Kumar; Mustafa, Tabish; Zhou, Yu; Burdette, Clif; Chirikjian, Gregory S.; Fichtinger, Gabor

    2005-10-15

    C-arm fluoroscopy is ubiquitous in contemporary surgery, but it lacks the ability to accurately reconstruct three-dimensional (3D) information. A major obstacle in fluoroscopic reconstruction is discerning the pose of the x-ray image in 3D space. Optical/magnetic trackers tend to be prohibitively expensive, intrusive and cumbersome in many applications. We present single-image-based fluoroscope tracking (FTRAC) with the use of an external radiographic fiducial consisting of a mathematically optimized set of ellipses, lines, and points. This is an improvement over contemporary fiducials, which use only points. The fiducial encodes six degrees of freedom in a single image by creating a unique view from any direction. A nonlinear optimizer can rapidly compute the pose of the fiducial using this image. The current embodiment has salient attributes: it has small dimensions (3x3x5 cm); it need not be close to the anatomy of interest; and it is accurately segmentable. We tested the fiducial and the pose recovery method on synthetic data and also experimentally on a precisely machined mechanical phantom. Pose recovery in phantom experiments had an accuracy of 0.56 mm in translation and 0.33 deg. in orientation. Object reconstruction had a mean error of 0.53 mm with 0.16 mm STD. The method offers accuracies similar to commercial tracking systems, and appears to be sufficiently robust for intraoperative quantitative C-arm fluoroscopy. Simulation experiments indicate that the size can be further reduced to 1x1x2 cm, with only a marginal drop in accuracy.

  1. When Does Choice of Accuracy Measure Alter Imputation Accuracy Assessments?

    PubMed

    Ramnarine, Shelina; Zhang, Juan; Chen, Li-Shiun; Culverhouse, Robert; Duan, Weimin; Hancock, Dana B; Hartz, Sarah M; Johnson, Eric O; Olfson, Emily; Schwantes-An, Tae-Hwi; Saccone, Nancy L

    2015-01-01

    Imputation, the process of inferring genotypes for untyped variants, is used to identify and refine genetic association findings. Inaccuracies in imputed data can distort the observed association between variants and a disease. Many statistics are used to assess accuracy; some compare imputed to genotyped data and others are calculated without reference to true genotypes. Prior work has shown that the Imputation Quality Score (IQS), which is based on Cohen's kappa statistic and compares imputed genotype probabilities to true genotypes, appropriately adjusts for chance agreement; however, it is not commonly used. To identify differences in accuracy assessment, we compared IQS with concordance rate, squared correlation, and accuracy measures built into imputation programs. Genotypes from the 1000 Genomes reference populations (AFR N = 246 and EUR N = 379) were masked to match the typed single nucleotide polymorphism (SNP) coverage of several SNP arrays and were imputed with BEAGLE 3.3.2 and IMPUTE2 in regions associated with smoking behaviors. Additional masking and imputation were conducted for sequenced subjects from the Collaborative Genetic Study of Nicotine Dependence and the Genetic Study of Nicotine Dependence in African Americans (N = 1,481 African Americans and N = 1,480 European Americans). Our results offer further evidence that concordance rate inflates accuracy estimates, particularly for rare and low frequency variants. For common variants, squared correlation, BEAGLE R2, IMPUTE2 INFO, and IQS produce similar assessments of imputation accuracy. However, for rare and low frequency variants, compared to IQS, the other statistics tend to be more liberal in their assessment of accuracy. IQS is important to consider when evaluating imputation accuracy, particularly for rare and low frequency variants. PMID:26458263
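
    The contrast between concordance rate and the kappa-based IQS can be seen on a toy rare variant. For brevity this sketch uses best-guess genotypes rather than the genotype probabilities IQS actually consumes:

```python
import numpy as np

def concordance_rate(true_g, imputed_g):
    """Raw per-genotype agreement; inflated for rare variants, where
    chance agreement on the common genotype is already high."""
    return float(np.mean(np.asarray(true_g) == np.asarray(imputed_g)))

def cohens_kappa(true_g, imputed_g, n_classes=3):
    """Cohen's kappa, the statistic underlying IQS: observed agreement
    adjusted for the agreement expected by chance."""
    t, i = np.asarray(true_g), np.asarray(imputed_g)
    p_obs = np.mean(t == i)
    p_chance = sum(np.mean(t == c) * np.mean(i == c) for c in range(n_classes))
    return float((p_obs - p_chance) / (1.0 - p_chance))

# Rare variant: 9 of 10 subjects carry the common genotype (coded 0).
true_g    = [0, 0, 0, 0, 0, 0, 0, 0, 0, 2]
imputed_g = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]   # imputation misses the carrier

c = concordance_rate(true_g, imputed_g)  # looks like 90% "accuracy"
k = cohens_kappa(true_g, imputed_g)      # no agreement beyond chance
```

    Here the concordance rate is 0.9 while kappa is 0, mirroring the paper's point that concordance inflates accuracy estimates for rare and low-frequency variants.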

  2. Design optimization for cost and quality: The robust design approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.
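
    The signal-to-noise ratio at the heart of this approach can be illustrated briefly. The nominal-the-best form and the toy response values below are illustrative assumptions, not taken from the report:

```python
import math

def sn_ratio_nominal_best(samples):
    """Taguchi-style signal-to-noise ratio (nominal-the-best form):
    10*log10(mean^2 / variance). Higher values mean the response is
    less sensitive to noise factors."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return 10 * math.log10(mean ** 2 / var)

# Two settings of a controllable design parameter, each measured under
# three noise conditions; both have the same mean response of 10.
setting_a = [9.8, 10.0, 10.2]   # tight spread -> more robust
setting_b = [8.0, 10.0, 12.0]   # wide spread  -> noise-sensitive

sn_a = sn_ratio_nominal_best(setting_a)
sn_b = sn_ratio_nominal_best(setting_b)
```

    Setting A earns the higher signal-to-noise ratio despite the identical mean, which is exactly the behavior used to rank parameter combinations in an orthogonal-array experiment.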

  3. The predictive accuracy of intertemporal-choice models.

    PubMed

    Arfer, Kodi B; Luhmann, Christian C

    2015-05-01

    How do people choose between a smaller reward available sooner and a larger reward available later? Past research has evaluated models of intertemporal choice by measuring goodness of fit or identifying which decision-making anomalies they can accommodate. An alternative criterion for model quality, which is partly antithetical to these standard criteria, is predictive accuracy. We used cross-validation to examine how well 10 models of intertemporal choice could predict behaviour in a 100-trial binary-decision task. Many models achieved the apparent ceiling of 85% accuracy, even with smaller training sets. When noise was added to the training set, however, a simple logistic-regression model we call the difference model performed particularly well. In many situations, between-model differences in predictive accuracy may be small, contrary to long-standing controversy over the modelling question in research on intertemporal choice, but the simplicity and robustness of the difference model recommend it to future use.
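
    A logistic model over amount and delay differences, in the spirit of the difference model described above, can be sketched as follows. The weights are made-up placeholders; in practice they would be fit to a participant's training trials:

```python
import math

def difference_model_prob(amount_ll, delay_ll, amount_ss, delay_ss,
                          w_amount=1.0, w_delay=0.5, bias=0.0):
    """Probability of choosing the larger-later (LL) reward over the
    smaller-sooner (SS) reward, as a logistic function of the amount
    and delay differences between the two options."""
    x = (bias
         + w_amount * (amount_ll - amount_ss)
         - w_delay * (delay_ll - delay_ss))
    return 1.0 / (1.0 + math.exp(-x))

p_indifferent = difference_model_prob(10, 0, 10, 0)   # identical options
p_dominant    = difference_model_prob(20, 5, 10, 0)   # much larger, slightly later
```

    Identical options yield probability 0.5, and a large amount advantage with a small delay cost pushes the choice probability toward 1, which is the kind of simple, noise-tolerant structure the paper credits for the model's predictive robustness.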

  4. A Robust Linear Feature-Based Procedure for Automated Registration of Point Clouds

    PubMed Central

    Poreba, Martyna; Goulette, François

    2015-01-01

    With the variety of measurement techniques available on the market today, fusing multi-source complementary information into one dataset is a matter of great interest. Target-based, point-based and feature-based methods are some of the approaches used to place data in a common reference frame by estimating its corresponding transformation parameters. This paper proposes a new linear feature-based method to perform accurate registration of point clouds, either in 2D or 3D. A two-step fast algorithm called Robust Line Matching and Registration (RLMR), which combines coarse and fine registration, was developed. The initial estimate is found from a triplet of conjugate line pairs, selected by a RANSAC algorithm. This transformation is then refined using an iterative optimization algorithm. Conjugates of linear features are identified with respect to a similarity metric representing a line-to-line distance. The efficiency and robustness to noise of the proposed method are evaluated and discussed. The algorithm is valid and yields reliable results when pre-aligned point clouds of the same scale are used. The studies show that the matching accuracy is at least 99.5%. The transformation parameters are also estimated correctly. The error in rotation is better than 2.8% full scale, while the translation error is less than 12.7%. PMID:25594589
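
    The sample-and-count structure of a RANSAC-based coarse registration step can be sketched with point correspondences (the paper matches line features; points keep the sketch short). All names and values below are illustrative:

```python
import math
import random

def estimate_rigid_2d(src_pair, dst_pair):
    """Rigid 2D transform (rotation + translation) from a minimal
    sample of two point correspondences."""
    (s1, s2), (d1, d2) = src_pair, dst_pair
    ang = (math.atan2(d2[1] - d1[1], d2[0] - d1[0])
           - math.atan2(s2[1] - s1[1], s2[0] - s1[0]))
    c, s = math.cos(ang), math.sin(ang)
    tx = d1[0] - (c * s1[0] - s * s1[1])
    ty = d1[1] - (s * s1[0] + c * s1[1])
    return c, s, tx, ty

def apply_transform(T, p):
    c, s, tx, ty = T
    return (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

def ransac_register(src, dst, iters=200, tol=0.1, seed=0):
    """Coarse registration: repeatedly fit a transform to a random
    minimal sample and keep the one with the most inlier matches;
    a fine (iterative) refinement would follow in a full pipeline."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        i, j = rng.sample(range(len(src)), 2)
        T = estimate_rigid_2d((src[i], src[j]), (dst[i], dst[j]))
        inliers = sum(1 for p, q in zip(src, dst)
                      if math.dist(apply_transform(T, p), q) < tol)
        if inliers > best_inliers:
            best, best_inliers = T, inliers
    return best, best_inliers

src = [(0, 0), (1, 0), (0, 1), (2, 2), (5, 5)]
dst = [(0, 0), (0, 1), (-1, 0), (-2, 2), (9, 9)]  # 90-degree rotation + one outlier
T, n_inliers = ransac_register(src, dst)
```

    The gross outlier correspondence is simply outvoted: the winning transform explains the four consistent matches, which is the property that makes RANSAC-style selection robust to noisy feature matches.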

  6. Spaceborne SAR data for global urban mapping at 30 m resolution using a robust urban extractor

    NASA Astrophysics Data System (ADS)

    Ban, Yifang; Jacob, Alexander; Gamba, Paolo

    2015-05-01

    With more than half of the world population now living in cities and 1.4 billion more people expected to move into cities by 2030, urban areas pose significant challenges to the local, regional and global environment. Timely and accurate information on the spatial distribution and temporal changes of urban areas is therefore needed to support sustainable development and environmental change research. The objective of this research is to evaluate spaceborne SAR data for improved global urban mapping using a robust processing chain, the KTH-Pavia Urban Extractor. The proposed processing chain includes urban extraction based on spatial indices and Grey Level Co-occurrence Matrix (GLCM) textures, an existing method, and several improvements, i.e., SAR data preprocessing, enhancement, and post-processing. ENVISAT Advanced Synthetic Aperture Radar (ASAR) C-VV data at 30 m resolution were selected over 10 global cities and a rural area from six continents to demonstrate the robustness of the improved method. The results show that the KTH-Pavia Urban Extractor is effective in extracting urban areas and small towns from ENVISAT ASAR data, and built-up areas can be mapped at 30 m resolution with very good accuracy using only one or two SAR images. These findings indicate that operational global urban mapping is possible with spaceborne SAR data, especially with the launch of Sentinel-1, which provides SAR data with global coverage, operational reliability and quick data delivery.
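
    A minimal sketch of the GLCM textures mentioned above, assuming a single pixel offset and a small number of grey levels; real SAR processing uses several offsets and window-based statistics:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Grey Level Co-occurrence Matrix for one pixel offset (dx, dy),
    normalized to co-occurrence probabilities."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def glcm_contrast(g):
    """Contrast texture feature: large for heterogeneous (e.g. built-up)
    image patches, zero for perfectly homogeneous ones."""
    i, j = np.indices(g.shape)
    return float((((i - j) ** 2) * g).sum())

flat     = np.zeros((4, 4), dtype=int)               # homogeneous patch
textured = np.array([[0, 3, 0, 3]] * 4, dtype=int)   # alternating bright/dark
```

    The strongly textured patch scores high contrast while the homogeneous one scores zero, which is the separability that makes GLCM features useful for discriminating built-up areas in SAR imagery.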

  7. Filtering Based Adaptive Visual Odometry Sensor Framework Robust to Blurred Images

    PubMed Central

    Zhao, Haiying; Liu, Yong; Xie, Xiaojia; Liao, Yiyi; Liu, Xixi

    2016-01-01

    Visual odometry (VO) estimation from blurred images is a challenging problem in practical robot applications, because blurred images severely reduce the estimation accuracy of the VO. In this paper, we address the problem of visual odometry estimation from blurred images and present an adaptive visual odometry estimation framework robust to blurred images. Our approach employs an objective measure of images, named small image gradient distribution (SIGD), to evaluate the blurring degree of an image; an adaptive blurred-image classification algorithm is then proposed to recognize blurred images; finally, we propose an anti-blurred key-frame selection algorithm to make the VO robust to blurred images. We also carried out extensive comparative experiments to evaluate the performance of VO algorithms with our anti-blur framework under varied blurred images. The experimental results show that our approach achieves superior performance compared to state-of-the-art methods under blurred-image conditions, while adding little computational cost to the original VO algorithms. PMID:27399704
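
    A toy stand-in for the SIGD idea: blur concentrates probability mass in small image gradients. The threshold and the box-blur below are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def small_gradient_fraction(img, thresh=30.0):
    """Fraction of pixels with a small gradient magnitude. Blur pushes
    gradient mass toward zero, so blurred images score higher."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy) < thresh))

def box_blur(img):
    """Crude 3x3 mean filter (zero-padded) used here to simulate blur."""
    p = np.pad(img.astype(float), 1)
    h, w = img.shape
    return sum(p[y:y + h, x:x + w] for y in range(3) for x in range(3)) / 9.0

rng = np.random.default_rng(0)
sharp = rng.integers(0, 255, size=(32, 32)).astype(float)
blurred = box_blur(sharp)
```

    Thresholding a score of this kind is one way a key-frame selector could demote blurred frames before they corrupt the pose estimate.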

  10. Robust control of hypersonic aircraft

    NASA Astrophysics Data System (ADS)

    Fan, Yong-hua; Yang, Jun; Zhang, Yu-zhuo

    2007-11-01

    The design of a robust controller for the longitudinal dynamics of a hypersonic aircraft using the parameter space method is presented. In this method, the desired poles are mapped to the controller's parameter space using a pole placement approach. The intersection of the parameter spaces is the common controller for the multiple-mode system. This controller can meet the needs of the aircraft's different flight phases. Simulations have shown that the controller achieves high precision and robustness against the disturbances caused by separation, cowl opening, fuel on and fuel off, and against perturbations caused by unknown dynamics.

  11. Robust Sparse Blind Source Separation

    NASA Astrophysics Data System (ADS)

    Chenot, Cecile; Bobin, Jerome; Rapin, Jeremy

    2015-11-01

    Blind Source Separation is a widely used technique to analyze multichannel data. In many real-world applications, its results can be significantly hampered by the presence of unknown outliers. In this paper, a novel algorithm coined rGMCA (robust Generalized Morphological Component Analysis) is introduced to retrieve sparse sources in the presence of outliers. It explicitly estimates the sources, the mixing matrix, and the outliers. It also takes advantage of the estimation of the outliers to further implement a weighting scheme, which provides a highly robust separation procedure. Numerical experiments demonstrate the efficiency of rGMCA to estimate the mixing matrix in comparison with standard BSS techniques.

  12. Accuracy assessment of NLCD 2006 land cover and impervious surface

    USGS Publications Warehouse

    Wickham, James D.; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Fry, Joyce A.; Wade, Timothy G.

    2013-01-01

    Release of NLCD 2006 provides the first wall-to-wall land-cover change database for the conterminous United States from Landsat Thematic Mapper (TM) data. Accuracy assessment of NLCD 2006 focused on four primary products: 2001 land cover, 2006 land cover, land-cover change between 2001 and 2006, and impervious surface change between 2001 and 2006. The accuracy assessment was conducted by selecting a stratified random sample of pixels with the reference classification interpreted from multi-temporal high resolution digital imagery. The NLCD Level II (16 classes) overall accuracies for the 2001 and 2006 land cover were 79% and 78%, respectively, with Level II user's accuracies exceeding 80% for water, high density urban, all upland forest classes, shrubland, and cropland for both dates. Level I (8 classes) accuracies were 85% for NLCD 2001 and 84% for NLCD 2006. The high overall and user's accuracies for the individual dates translated into high user's accuracies for the 2001–2006 change reporting themes water gain and loss, forest loss, urban gain, and the no-change reporting themes for water, urban, forest, and agriculture. The main factor limiting higher accuracies for the change reporting themes appeared to be difficulty in distinguishing the context of grass. We discuss the need for more research on land-cover change accuracy assessment.
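
    Overall and user's accuracies of the kind reported above come straight from an error (confusion) matrix; the 3-class matrix below is a made-up example:

```python
import numpy as np

def accuracy_summary(error_matrix):
    """Overall accuracy and per-class user's accuracy from an error
    matrix whose rows are mapped classes and columns reference classes."""
    m = np.asarray(error_matrix, dtype=float)
    overall = float(np.trace(m) / m.sum())
    users = np.diag(m) / m.sum(axis=1)   # correct / total mapped, per class
    return overall, users

# Hypothetical 3-class error matrix: water, urban, forest (rows = map).
matrix = [[45,  2,  3],
          [ 5, 40,  5],
          [ 0,  8, 42]]
overall, users = accuracy_summary(matrix)
```

    User's accuracy answers the map user's question ("if the map says forest, how often is it forest on the ground?"), which is why the assessment reports it per class alongside the overall figure.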

  13. Modern Robust Statistical Methods: An Easy Way to Maximize the Accuracy and Power of Your Research

    ERIC Educational Resources Information Center

    Erceg-Hurn, David M.; Mirosevich, Vikki M.

    2008-01-01

    Classic parametric statistical significance tests, such as analysis of variance and least squares regression, are widely used by researchers in many disciplines, including psychology. For classic parametric tests to produce accurate results, the assumptions underlying them (e.g., normality and homoscedasticity) must be satisfied. These assumptions…

  14. Demographic corrections appear to compromise classification accuracy for severely skewed cognitive tests.

    PubMed

    O'Connell, Megan E; Tuokko, Holly; Kadlec, Helena

    2011-04-01

    Demographic corrections for cognitive tests should improve classification accuracy by reducing age or education biases, but empirical support has been equivocal. Using a simulation procedure, we show that creating moderate or extreme skewness in cognitive tests compromises the classification accuracy of demographic corrections, findings that appear to be replicated in clinical data for the few neuropsychological test scores with an extreme degree of skew. For most neuropsychological tests, the dementia classification accuracy of raw and demographically corrected scores was equivalent. These findings suggest that the dementia classification accuracy of demographic corrections is robust to slight degrees of skew (i.e., skewness <1.5). PMID:21154077

  15. Network Robustness: the whole story

    NASA Astrophysics Data System (ADS)

    Longjas, A.; Tejedor, A.; Zaliapin, I. V.; Ambroj, S.; Foufoula-Georgiou, E.

    2014-12-01

    A multitude of actual processes operating on hydrological networks may exhibit binary outcomes, such as clean streams in a river network becoming contaminated. These binary outcomes can be modeled by node removal processes (attacks) acting on a network. Network robustness against attacks has been widely studied in fields as diverse as the Internet, power grids and human societies. However, the current definition of robustness accounts only for the connectivity of the nodes unaffected by the attack. Here, we put forward the idea that the connectivity of the affected nodes can play a crucial role in properly evaluating the overall network robustness and its future recovery from the attack. Specifically, we propose a dual-perspective approach wherein, at any instant in the network's evolution under attack, two distinct networks are defined: (i) the Active Network (AN), composed of the unaffected nodes, and (ii) the Idle Network (IN), composed of the affected nodes. The proposed robustness metric considers both the efficiency of destroying the AN and the efficiency of building up the IN. This approach is motivated by concrete applied problems since, for example, when studying the dynamics of contamination in river systems, it is necessary to know the connectivity of both the healthy and contaminated parts of the river to assess its ecological functionality. We show that trade-offs between the efficiency of the Active and Idle network dynamics give rise to surprising crossovers and re-rankings of different attack strategies, pointing to significant implications for decision making.

  16. Robust Sliding Window Synchronizer Developed

    NASA Technical Reports Server (NTRS)

    Chun, Kue S.; Xiong, Fuqin; Pinchak, Stanley

    2004-01-01

    The development of an advanced robust timing synchronization scheme is crucial for the support of two NASA programs--Advanced Air Transportation Technologies and Aviation Safety. A mobile aeronautical channel is a dynamic channel where various adverse effects--such as Doppler shift, multipath fading, and shadowing due to precipitation, landscape, foliage, and buildings--cause the loss of symbol timing synchronization.

  17. Mental Models: A Robust Definition

    ERIC Educational Resources Information Center

    Rook, Laura

    2013-01-01

    Purpose: The concept of a mental model has been described by theorists from diverse disciplines. The purpose of this paper is to offer a robust definition of an individual mental model for use in organisational management. Design/methodology/approach: The approach adopted involves an interdisciplinary literature review of disciplines, including…

  18. Robust Portfolio Optimization Using Pseudodistances

    PubMed Central

    2015-01-01

The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios, comparing them with some other portfolios known in the literature. PMID:26468948

  19. High accuracy autonomous navigation using the global positioning system (GPS)

    NASA Technical Reports Server (NTRS)

    Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul

    1997-01-01

The application of global positioning system (GPS) technology to improving the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real-time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be improved to 2 m if corrections are provided by the GPS wide area augmentation system.

  20. Robust multiplatform RF emitter localization

    NASA Astrophysics Data System (ADS)

    Al Issa, Huthaifa; Ordóñez, Raúl

    2012-06-01

In recent years, demand for position-based services has increased, and recent developments in communications and RF technology have enabled system concept formulations and designs for low-cost radar systems using state-of-the-art software radio modules. This research investigates a novel multi-platform RF emitter localization technique denoted Position-Adaptive RF Direction Finding (PADF). The formulation is based on iterative path-loss (i.e., Path Loss Exponent, or PLE) metric estimates measured across multiple platforms in order to autonomously adapt (i.e., self-adjust) the location of each distributed/cooperative platform. Experiments conducted at the Air Force Research Laboratory (AFRL) indicate that this position-adaptive approach exhibits potential for accurate emitter localization in challenging embedded multipath environments, such as urban environments. The focus of this paper is the robustness of the distributed approach to RF-based location tracking. In order to localize the transmitter, we use Received Signal Strength Indicator (RSSI) data to approximate the distance from the transmitter to the revolving receivers. We provide an algorithm for on-line estimation of the Path Loss Exponent (PLE), which is used in modeling distance from Received Signal Strength (RSS) measurements. The emitter position estimate is calculated from the surrounding sensors' RSS values using Least-Squares Estimation (LSE). The PADF has been tested on a number of different configurations in the laboratory via the design and implementation of four IRIS wireless sensor nodes as receivers and one hidden sensor as a transmitter during the localization phase. The robustness of detecting the transmitter's position is assessed by collecting RSSI data through experiments; data manipulation in MATLAB then determines the robustness of each node and ultimately that of each configuration.
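The RSSI-to-distance model and least-squares position fix described above can be sketched as follows. The log-distance path-loss form, the default parameter values, and all function names are illustrative assumptions, not the authors' implementation:

```python
import math

def distance_from_rssi(rssi, p0=-40.0, ple=2.0, d0=1.0):
    """Invert the log-distance path-loss model
    RSSI(d) = p0 - 10*ple*log10(d/d0), where p0 is the RSSI at
    reference distance d0 (assumed values)."""
    return d0 * 10 ** ((p0 - rssi) / (10.0 * ple))

def estimate_ple(samples, p0=-40.0, d0=1.0):
    """Least-squares PLE fit from (distance, rssi) calibration samples."""
    num = sum((p0 - r) * 10 * math.log10(d / d0) for d, r in samples)
    den = sum((10 * math.log10(d / d0)) ** 2 for d, r in samples)
    return num / den

def localize(anchors, dists):
    """Linearized least-squares 2-D position fix from >= 3 anchors.
    anchors: [(x, y)] receiver positions; dists: ranges to each anchor."""
    (x0, y0), r0 = anchors[0], dists[0]
    rows, rhs = [], []
    for (xi, yi), ri in zip(anchors[1:], dists[1:]):
        rows.append((2 * (xi - x0), 2 * (yi - y0)))
        rhs.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Solve the 2x2 normal equations A^T A p = A^T b by hand.
    a11 = sum(a * a for a, b in rows); a12 = sum(a * b for a, b in rows)
    a22 = sum(b * b for a, b in rows)
    b1 = sum(a * c for (a, b), c in zip(rows, rhs))
    b2 = sum(b * c for (a, b), c in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Example: anchors at (0,0), (10,0), (0,10); true emitter at (3, 4).
ple = estimate_ple([(10, -60.0), (100, -80.0)])  # -> 2.0
dists = [5.0, 65 ** 0.5, 45 ** 0.5]
pos = localize([(0, 0), (10, 0), (0, 10)], dists)
```

With exact ranges the fix recovers (3, 4); in practice the ranges come from `distance_from_rssi` and are noisy, which is where the least-squares formulation earns its keep.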

  1. Random Forest (RF) Wrappers for Waveband Selection and Classification of Hyperspectral Data.

    PubMed

    Poona, Nitesh Keshavelal; van Niekerk, Adriaan; Nadel, Ryan Leslie; Ismail, Riyad

    2016-02-01

Hyperspectral data collected using a field spectroradiometer were used to model asymptomatic stress in Pinus radiata and Pinus patula seedlings infected with the pathogen Fusarium circinatum. Spectral data were analyzed using the random forest algorithm. To improve the classification accuracy of the model, subsets of wavebands were selected using three feature selection algorithms: (1) Boruta; (2) recursive feature elimination (RFE); and (3) area under the receiver operating characteristic curve of the random forest (AUC-RF). Results highlighted the robustness of the above feature selection methods when used in conjunction with the random forest algorithm for analyzing hyperspectral data. Overall, the Boruta feature selection algorithm provided the best results. When discriminating F. circinatum stress in Pinus radiata seedlings, Boruta selected wavebands (n = 69) yielded the best overall classification accuracies (training error of 17.00%, independent test error of 17.00% and an AUC value of 0.91). Classification results were, however, significantly lower for P. patula seedlings, with a training error of 24.00%, independent test error of 38.00%, and an AUC value of 0.65. A hybrid selection method that utilizes combinations of wavebands selected from the three feature selection algorithms was also tested. The hybrid method showed an improvement in classification accuracies for P. patula, and no improvement for P. radiata. The results of this study provide impetus towards implementing a hyperspectral framework for detecting stress within nursery environments.
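As a toy illustration of the backward-elimination idea behind RFE (not the Boruta, RFE, or AUC-RF implementations used in the study), the loop below repeatedly drops the least important waveband; class-centroid separation stands in for the random-forest importance score, and all names are our own:

```python
def centroid_importance(X, y):
    """Per-feature importance: absolute distance between class centroids
    (a toy stand-in for random-forest variable importance)."""
    cols = range(len(X[0]))
    c0 = [sum(x[j] for x, t in zip(X, y) if t == 0) / y.count(0) for j in cols]
    c1 = [sum(x[j] for x, t in zip(X, y) if t == 1) / y.count(1) for j in cols]
    return [abs(a - b) for a, b in zip(c0, c1)]

def recursive_elimination(X, y, n_keep):
    """Backward elimination: repeatedly drop the least important feature."""
    keep = list(range(len(X[0])))
    while len(keep) > n_keep:
        sub = [[x[j] for j in keep] for x in X]
        imp = centroid_importance(sub, y)
        keep.pop(imp.index(min(imp)))
    return keep

# Toy "wavebands": band 0 separates the classes, bands 1-2 are noise.
X = [[0.1, 5.0, 3.1], [0.2, 4.9, 3.0], [0.9, 5.1, 3.2], [1.0, 5.0, 2.9]]
y = [0, 0, 1, 1]
selected = recursive_elimination(X, y, n_keep=1)  # -> [0]
```

A real pipeline would re-fit the random forest at each elimination step and score candidate subsets by cross-validated error or AUC rather than centroid distance.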

  2. Assessment of the relationship between lesion segmentation accuracy and computer-aided diagnosis scheme performance

    NASA Astrophysics Data System (ADS)

    Zheng, Bin; Pu, Jiantao; Park, Sang Cheol; Zuley, Margarita; Gur, David

    2008-03-01

In this study we randomly select 250 malignant and 250 benign mass regions as a training dataset. The boundary contours of these regions were manually identified and marked. Twelve image features were computed for each region. An artificial neural network (ANN) was trained as a classifier. To select a specific testing dataset, we applied a topographic multi-layer region growth algorithm to detect boundary contours of 1,903 mass regions in an initial pool of testing regions. All processed regions are sorted based on a size difference ratio between manual and automated segmentation. We selected a testing dataset involving 250 malignant and 250 benign mass regions with larger size difference ratios. Using the area under the ROC curve (AZ value) as a performance index, we investigated the relationship between the accuracy of mass segmentation and the performance of a computer-aided diagnosis (CAD) scheme. CAD performance degrades as the size difference ratio increases. Then, we developed and tested a hybrid region growth algorithm that combined the topographic region growth with an active contour approach. In this hybrid algorithm, the boundary contour detected by the topographic region growth is used as the initial contour of the active contour algorithm. The algorithm iteratively searches for the optimal region boundaries. A CAD likelihood score of the growth region being a true-positive mass is computed in each iteration. The region growth is automatically terminated once the first maximum CAD score is reached. This hybrid region growth algorithm reduces the size difference ratios between the two areas segmented automatically and manually to less than +/-15% for all testing regions, and the testing AZ value increases from 0.63 to 0.90. The results indicate that CAD performance heavily depends on the accuracy of mass segmentation. In order to achieve robust CAD performance, reducing lesion segmentation error is important.
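The AZ performance index used above is the area under the ROC curve, which can be computed directly from classifier scores via the Mann-Whitney statistic. A minimal sketch (our own illustration, not the authors' code):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case scores above
    a randomly chosen negative case (ties count one half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy CAD likelihood scores for 3 malignant and 3 benign regions.
a = auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])
```

The quadratic pairwise loop is fine for illustration; production code would sort the scores once and use ranks for O(n log n) cost.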

  3. Orbit accuracy assessment for Seasat

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.; Tapley, B. D.

    1980-01-01

    Laser range measurements are used to determine the orbit of Seasat during the period from July 28, 1978, to Aug. 14, 1978, and the influence of the gravity field, atmospheric drag, and solar radiation pressure on the orbit accuracy is investigated. It is noted that for the orbits of three-day duration, little distinction can be made between the influence of different atmospheric models. It is found that the special Seasat gravity field PGS-S3 is most consistent with the data for three-day orbits, but an unmodeled systematic effect in radiation pressure is noted. For orbits of 18-day duration, little distinction can be made between the results derived from the PGS gravity fields. It is also found that the geomagnetic field is an influential factor in the atmospheric modeling during this time period. Seasat altimeter measurements are used to determine the accuracy of the altimeter measurement time tag and to evaluate the orbital accuracy.

  4. Robust expertise effects in right FFA.

    PubMed

    McGugin, Rankin Williams; Newton, Allen T; Gore, John C; Gauthier, Isabel

    2014-10-01

    The fusiform face area (FFA) is one of several areas in occipito-temporal cortex whose activity is correlated with perceptual expertise for objects. Here, we investigate the robustness of expertise effects in FFA and other areas to a strong task manipulation that increases both perceptual and attentional demands. With high-resolution fMRI at 7T, we measured responses to images of cars, faces and a category globally visually similar to cars (sofas) in 26 subjects who varied in expertise with cars, in (a) a low load 1-back task with a single object category and (b) a high load task in which objects from two categories were rapidly alternated and attention was required to both categories. The low load condition revealed several areas more active as a function of expertise, including both posterior and anterior portions of FFA bilaterally (FFA1/FFA2, respectively). Under high load, fewer areas were positively correlated with expertise and several areas were even negatively correlated, but the expertise effect in face-selective voxels in the anterior portion of FFA (FFA2) remained robust. Finally, we found that behavioral car expertise also predicted increased responses to sofa images but no behavioral advantages in sofa discrimination, suggesting that global shape similarity to a category of expertise is enough to elicit a response in FFA and other areas sensitive to experience, even when the category itself is not of special interest. The robustness of expertise effects in right FFA2 and the expertise effects driven by visual similarity both argue against attention being the sole determinant of expertise effects in extrastriate areas.

  5. Robust video hashing via multilinear subspace projections.

    PubMed

    Li, Mu; Monga, Vishal

    2012-10-01

The goal of video hashing is to design hash functions that summarize videos by short fingerprints or hashes. While traditional applications of video hashing lie in database searches and content authentication, the emergence of websites such as YouTube and DailyMotion poses a challenging problem of anti-piracy video search. That is, hashes or fingerprints of an original video (provided to YouTube by the content owner) must be matched against those uploaded to YouTube by users to identify instances of "illegal" or undesirable uploads. Because the uploaded videos invariably differ from the original in their digital representation (owing to incidental or malicious distortions), robust video hashes are desired. We model videos as order-3 tensors and use multilinear subspace projections, such as a reduced rank parallel factor analysis (PARAFAC), to construct video hashes. We observe that, unlike most standard descriptors of video content, tensor-based subspace projections can offer excellent robustness while effectively capturing the spatio-temporal essence of the video for discriminability. We introduce randomization in the hash function by dividing the video into (secret key based) pseudo-randomly selected overlapping sub-cubes to protect against intentional guessing and forgery. Detection theoretic analysis of the proposed hash-based video identification is presented, where we derive analytical approximations for error probabilities. Remarkably, these theoretical error estimates closely mimic the empirically observed error probability for our hash algorithm. Furthermore, experimental receiver operating characteristic (ROC) curves reveal that the proposed tensor-based video hash exhibits enhanced robustness against both spatial and temporal video distortions over state-of-the-art video hashing techniques.
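The key-based sub-cube selection can be sketched as follows; the tensor shape, cube size, and function name are illustrative assumptions, and a real hash would go on to apply the reduced-rank PARAFAC projection to each selected sub-cube:

```python
import random

def keyed_subcubes(shape, cube, count, key):
    """Select `count` (possibly overlapping) sub-cube offsets of size
    `cube` from a video tensor of size `shape` (frames, rows, cols),
    seeding the PRNG with a secret key so that only the key holder
    can reproduce the selection."""
    rng = random.Random(key)
    return [tuple(rng.randrange(s - c + 1) for s, c in zip(shape, cube))
            for _ in range(count)]

# Same key -> same sub-cubes; a different key yields an unrelated selection.
offs1 = keyed_subcubes((120, 240, 320), (16, 64, 64), 8, key="secret")
offs2 = keyed_subcubes((120, 240, 320), (16, 64, 64), 8, key="secret")
offs3 = keyed_subcubes((120, 240, 320), (16, 64, 64), 8, key="other")
```

Determinism under the shared key is what lets the content owner and the matching service compute comparable hashes, while an attacker without the key cannot target the hashed regions.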

  6. Robust expertise effects in right FFA

    PubMed Central

    McGugin, Rankin Williams; Newton, Allen T; Gore, John C; Gauthier, Isabel

    2015-01-01

The fusiform face area (FFA) is one of several areas in occipito-temporal cortex whose activity is correlated with perceptual expertise for objects. Here, we investigate the robustness of expertise effects in FFA and other areas to a strong task manipulation that increases both perceptual and attentional demands. With high-resolution fMRI at 7 Tesla, we measured responses to images of cars, faces and a category globally visually similar to cars (sofas) in 26 subjects who varied in expertise with cars, in (a) a low load 1-back task with a single object category and (b) a high load task in which objects from two categories rapidly alternated and attention was required to both categories. The low load condition revealed several areas more active as a function of expertise, including both posterior and anterior portions of FFA bilaterally (FFA1/FFA2, respectively). Under high load, fewer areas were positively correlated with expertise and several areas were even negatively correlated, but the expertise effect in face-selective voxels in the anterior portion of FFA (FFA2) remained robust. Finally, we found that behavioral car expertise also predicted increased responses to sofa images but no behavioral advantages in sofa discrimination, suggesting that global shape similarity to a category of expertise is enough to elicit a response in FFA and other areas sensitive to experience, even when the category itself is not of special interest. The robustness of expertise effects in right FFA2 and the expertise effects driven by visual similarity both argue against attention being the sole determinant of expertise effects in extrastriate areas. PMID:25192631

  7. Robust Mosaicking of Uav Images with Narrow Overlaps

    NASA Astrophysics Data System (ADS)

    Kim, J.; Kim, T.; Shin, D.; Kim, S. H.

    2016-06-01

This paper considers fast and robust mosaicking of UAV images under the circumstance that adjacent UAV images have very narrow overlaps. Image transformation for image mosaicking consists of two estimations: relative transformations and global transformations. For estimating relative transformations between adjacent images, projective transformation is widely considered. For estimating global transformations, a panoramic constraint is widely used. While perspective transformation is a general model for 2D-2D transformation, it may not be optimal with weak stereo geometry such as images with narrow overlaps. While the panoramic constraint works for reliable conversion of global transformations for panoramic image generation, this constraint is not applicable to UAV images in linear motion. For these reasons, a robust approach is investigated to generate a high quality mosaicked image from narrowly overlapped UAV images. For relative transformations, several transformation models were considered to ensure robust estimation of the relative transformation relationship. Among them were perspective transformation, affine transformation, coplanar relative orientation, and relative orientation with reduced adjustment parameters. Performance evaluation for each transformation model was carried out. The experiment results showed that affine transformation and adjusted coplanar relative orientation were superior to others in terms of stability and accuracy. For global transformation, we set an initial approximation by converting each relative transformation to a common transformation with respect to a reference image. In future work, we will investigate constrained relative orientation for enhancing the geometric accuracy of image mosaicking and bundle adjustments of each relative transformation model for optimal global transformation.
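Of the relative transformation models compared above, the affine model is the simplest to estimate from tie points. A least-squares sketch (our own illustration; the paper's estimation details are not shown):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src points onto dst
    (needs >= 3 non-collinear correspondences)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # 3x2 parameter matrix
    return M

def apply_affine(M, pts):
    """Map points through the fitted affine transform."""
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Unit square mapped by "scale 2, then translate by (3, 4)".
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(3, 4), (5, 4), (3, 6), (5, 6)]
M = fit_affine(src, dst)
pred = apply_affine(M, [(0.5, 0.5)])
```

With only six parameters instead of the projective model's eight, the affine fit is better conditioned when the tie points cluster in a narrow overlap strip, which is consistent with the stability result reported above.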

  8. Robustness of ordinary least squares in randomized clinical trials.

    PubMed

    Judkins, David R; Porter, Kristin E

    2016-05-20

There has been a series of occasional papers in this journal about semiparametric methods for robust covariate control in the analysis of clinical trials. These methods are fairly easy to apply on currently available computers, but standard software packages do not yet support these methods with easy option selections. Moreover, these methods can be difficult to explain to practitioners who have only a basic statistical education. There is also a somewhat neglected history demonstrating that ordinary least squares (OLS) is very robust to the types of outcome distribution features that have motivated the newer methods for robust covariate control. We review these two strands of literature and report on some new simulations that demonstrate the robustness of OLS to more extreme normality violations than previously explored. The new simulations involve two strongly leptokurtic outcomes: near-zero binary outcomes and zero-inflated gamma outcomes. Potential examples of such outcomes include, respectively, 5-year survival rates for stage IV cancer and healthcare claim amounts for rare conditions. We find that traditional OLS methods work very well down to very small sample sizes for such outcomes. Under some circumstances, OLS with robust standard errors works well with even smaller sample sizes. Given this literature review and our new simulations, we think that most researchers may comfortably continue using standard OLS software, preferably with the robust standard errors.
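The near-zero binary-outcome setting is easy to simulate. In the sketch below (our own illustration, not the authors' simulation code), the OLS slope on a binary treatment indicator equals the difference in group means, and it stays essentially unbiased even at a 3% event rate:

```python
import random

def ols_slope(x, y):
    """OLS slope of y on x; with a binary x this equals the difference
    in group means, i.e. the treatment-effect estimate."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

rng = random.Random(0)
n, p_control, effect = 2000, 0.03, 0.02   # near-zero binary outcome
estimates = []
for _ in range(200):                       # 200 simulated trials
    arm = [rng.randrange(2) for _ in range(n)]               # randomization
    out = [int(rng.random() < p_control + effect * t) for t in arm]
    estimates.append(ols_slope(arm, out))
bias = sum(estimates) / len(estimates) - effect
```

The average estimate lands within a fraction of a percentage point of the true 2-point effect, despite the outcome being about as far from normal as a variable can get.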

  9. Robust on-off pulse control of flexible space vehicles

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Sinha, Ravi

    1993-01-01

    The on-off reaction jet control system is often used for attitude and orbital maneuvering of various spacecraft. Future space vehicles such as the orbital transfer vehicles, orbital maneuvering vehicles, and space station will extensively use reaction jets for orbital maneuvering and attitude stabilization. The proposed robust fuel- and time-optimal control algorithm is used for a three-mass spacing model of flexible spacecraft. A fuel-efficient on-off control logic is developed for robust rest-to-rest maneuver of a flexible vehicle with minimum excitation of structural modes. The first part of this report is concerned with the problem of selecting a proper pair of jets for practical trade-offs among the maneuvering time, fuel consumption, structural mode excitation, and performance robustness. A time-optimal control problem subject to parameter robustness constraints is formulated and solved. The second part of this report deals with obtaining parameter insensitive fuel- and time- optimal control inputs by solving a constrained optimization problem subject to robustness constraints. It is shown that sensitivity to modeling errors can be significantly reduced by the proposed, robustified open-loop control approach. The final part of this report deals with sliding mode control design for uncertain flexible structures. The benchmark problem of a flexible structure is used as an example for the feedback sliding mode controller design with bounded control inputs and robustness to parameter variations is investigated.

  10. COMPASS time synchronization and dissemination—Toward centimetre positioning accuracy

    NASA Astrophysics Data System (ADS)

    Wang, ZhengBo; Zhao, Lu; Wang, ShiGuang; Zhang, JianWei; Wang, Bo; Wang, LiJun

    2014-09-01

In this paper we investigate methods to achieve highly accurate time synchronization among the satellites of the COMPASS global navigation satellite system (GNSS). Owing to the special design of COMPASS, which implements several geo-stationary satellites (GEO), time synchronization can be highly accurate via microwave links between ground stations and the GEO satellites. Serving as space-borne relay stations, the GEO satellites can further disseminate time and frequency signals to other satellites such as the inclined geo-synchronous (IGSO) and mid-earth orbit (MEO) satellites within the system. It is shown that, because of the accuracy in clock synchronization, the theoretical accuracy of COMPASS positioning and navigation will surpass that of the GPS. In addition, the COMPASS system can function with its entire positioning, navigation, and time-dissemination services even without the ground link, thus making it much more robust and secure. We further show that time dissemination using the COMPASS-GEO satellites to earth-fixed stations can achieve very high accuracy, reaching 100 ps in time dissemination and 3 cm in positioning, respectively. In this paper, we also analyze two feasible synchronization plans. All special and general relativistic effects related to COMPASS clock frequency and time shifts are given. We conclude that COMPASS can reach centimeter-level positioning accuracy and discuss potential applications.
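The quoted figures are consistent with a light-travel-time conversion: a timing error maps to a range error through the speed of light, so 100 ps corresponds to about 3 cm. A minimal check (our own arithmetic, not from the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_error_from_timing(dt_seconds):
    """One-way pseudorange error implied by a clock/timing error."""
    return C * dt_seconds

err_m = range_error_from_timing(100e-12)  # 100 ps -> about 0.03 m
```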

  11. Data Accuracy in Citation Studies.

    ERIC Educational Resources Information Center

    Boyce, Bert R.; Banning, Carolyn Sue

    1979-01-01

    Four hundred eighty-seven citations of the 1976 issues of the Journal of the American Society for Information Science and the Personnel and Guidance Journal were checked for accuracy: total error was 13.6 percent and 10.7 percent, respectively. Error categories included incorrect author name, article/book title, journal title; wrong entry; and…

  12. Improving Speaking Accuracy through Awareness

    ERIC Educational Resources Information Center

    Dormer, Jan Edwards

    2013-01-01

    Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…

  13. Origin of robustness in generating drug-resistant malaria parasites.

    PubMed

    Kümpornsin, Krittikorn; Modchang, Charin; Heinberg, Adina; Ekland, Eric H; Jirawatcharadech, Piyaporn; Chobson, Pornpimol; Suwanakitti, Nattida; Chaotheing, Sastra; Wilairat, Prapon; Deitsch, Kirk W; Kamchonwongpaisan, Sumalee; Fidock, David A; Kirkman, Laura A; Yuthavong, Yongyuth; Chookajorn, Thanat

    2014-07-01

    Biological robustness allows mutations to accumulate while maintaining functional phenotypes. Despite its crucial role in evolutionary processes, the mechanistic details of how robustness originates remain elusive. Using an evolutionary trajectory analysis approach, we demonstrate how robustness evolved in malaria parasites under selective pressure from an antimalarial drug inhibiting the folate synthesis pathway. A series of four nonsynonymous amino acid substitutions at the targeted enzyme, dihydrofolate reductase (DHFR), render the parasites highly resistant to the antifolate drug pyrimethamine. Nevertheless, the stepwise gain of these four dhfr mutations results in tradeoffs between pyrimethamine resistance and parasite fitness. Here, we report the epistatic interaction between dhfr mutations and amplification of the gene encoding the first upstream enzyme in the folate pathway, GTP cyclohydrolase I (GCH1). gch1 amplification confers low level pyrimethamine resistance and would thus be selected for by pyrimethamine treatment. Interestingly, the gch1 amplification can then be co-opted by the parasites because it reduces the cost of acquiring drug-resistant dhfr mutations downstream in the same metabolic pathway. The compensation of compromised fitness by extra GCH1 is an example of how robustness can evolve in a system and thus expand the accessibility of evolutionary trajectories leading toward highly resistant alleles. The evolution of robustness during the gain of drug-resistant mutations has broad implications for both the development of new drugs and molecular surveillance for resistance to existing drugs.

  14. Origin of Robustness in Generating Drug-Resistant Malaria Parasites

    PubMed Central

    Kümpornsin, Krittikorn; Modchang, Charin; Heinberg, Adina; Ekland, Eric H.; Jirawatcharadech, Piyaporn; Chobson, Pornpimol; Suwanakitti, Nattida; Chaotheing, Sastra; Wilairat, Prapon; Deitsch, Kirk W.; Kamchonwongpaisan, Sumalee; Fidock, David A.; Kirkman, Laura A.; Yuthavong, Yongyuth; Chookajorn, Thanat

    2014-01-01

    Biological robustness allows mutations to accumulate while maintaining functional phenotypes. Despite its crucial role in evolutionary processes, the mechanistic details of how robustness originates remain elusive. Using an evolutionary trajectory analysis approach, we demonstrate how robustness evolved in malaria parasites under selective pressure from an antimalarial drug inhibiting the folate synthesis pathway. A series of four nonsynonymous amino acid substitutions at the targeted enzyme, dihydrofolate reductase (DHFR), render the parasites highly resistant to the antifolate drug pyrimethamine. Nevertheless, the stepwise gain of these four dhfr mutations results in tradeoffs between pyrimethamine resistance and parasite fitness. Here, we report the epistatic interaction between dhfr mutations and amplification of the gene encoding the first upstream enzyme in the folate pathway, GTP cyclohydrolase I (GCH1). gch1 amplification confers low level pyrimethamine resistance and would thus be selected for by pyrimethamine treatment. Interestingly, the gch1 amplification can then be co-opted by the parasites because it reduces the cost of acquiring drug-resistant dhfr mutations downstream in the same metabolic pathway. The compensation of compromised fitness by extra GCH1 is an example of how robustness can evolve in a system and thus expand the accessibility of evolutionary trajectories leading toward highly resistant alleles. The evolution of robustness during the gain of drug-resistant mutations has broad implications for both the development of new drugs and molecular surveillance for resistance to existing drugs. PMID:24739308

  15. A natural class of robust networks.

    PubMed

    Aldana, Maximino; Cluzel, Philippe

    2003-07-22

    As biological studies shift from molecular description to system analysis we need to identify the design principles of large intracellular networks. In particular, without knowing the molecular details, we want to determine how cells reliably perform essential intracellular tasks. Recent analyses of signaling pathways and regulatory transcription networks have revealed a common network architecture, termed scale-free topology. Although the structural properties of such networks have been thoroughly studied, their dynamical properties remain largely unexplored. We present a prototype for the study of dynamical systems to predict the functional robustness of intracellular networks against variations of their internal parameters. We demonstrate that the dynamical robustness of these complex networks is a direct consequence of their scale-free topology. By contrast, networks with homogeneous random topologies require fine-tuning of their internal parameters to sustain stable dynamical activity. Considering the ubiquity of scale-free networks in nature, we hypothesize that this topology is not only the result of aggregation processes such as preferential attachment; it may also be the result of evolutionary selective processes. PMID:12853565

  16. Algebraic connectivity and graph robustness.

    SciTech Connect

    Feddema, John Todd; Byrne, Raymond Harry; Abdallah, Chaouki T.

    2009-07-01

Recent papers have used Fiedler's definition of algebraic connectivity to show that network robustness, as measured by node-connectivity and edge-connectivity, can be increased by increasing the algebraic connectivity of the network. By the definition of algebraic connectivity, the second smallest eigenvalue of the graph Laplacian is a lower bound on the node-connectivity. In this paper we show that for circular random lattice graphs and mesh graphs, algebraic connectivity is a conservative lower bound, and that increases in algebraic connectivity actually correspond to a decrease in node-connectivity. This means that the networks are actually less robust with respect to node-connectivity as the algebraic connectivity increases. However, an increase in algebraic connectivity seems to correlate well with a decrease in the characteristic path length of these networks, which would result in quicker communication through the network. Applications of these results are then discussed for perimeter security.
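The quantity under discussion is easy to compute: the Fiedler value is the second-smallest eigenvalue of the Laplacian L = D - A. A minimal sketch (our own illustrative code, not from the paper); note that the triangle's algebraic connectivity (3) exceeds the path's (1), consistent with its higher node-connectivity:

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A
    (Fiedler value); positive exactly when the graph is connected."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))[1]

# Path graph P3 (node-connectivity 1) vs triangle K3 (node-connectivity 2).
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
tri  = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
ac_path = algebraic_connectivity(path)  # Laplacian eigenvalues {0, 1, 3}
ac_tri  = algebraic_connectivity(tri)   # Laplacian eigenvalues {0, 3, 3}
```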

  17. Robust dynamic mitigation of instabilities

    NASA Astrophysics Data System (ADS)

    Kawata, S.; Karino, T.

    2015-04-01

    A dynamic mitigation mechanism for instability growth was proposed and discussed in the paper [S. Kawata, Phys. Plasmas 19, 024503 (2012)]. In the present paper, the robustness of the dynamic instability mitigation mechanism is discussed further. The results presented here show that the mechanism of the dynamic instability mitigation is rather robust against changes in the phase, the amplitude, and the wavelength of the wobbling perturbation applied. Generally, instability would emerge from the perturbation of the physical quantity. Normally, the perturbation phase is unknown so that the instability growth rate is discussed. However, if the perturbation phase is known, the instability growth can be controlled by a superposition of perturbations imposed actively: If the perturbation is induced by, for example, a driving beam axis oscillation or wobbling, the perturbation phase could be controlled, and the instability growth is mitigated by the superposition of the growing perturbations.
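The superposition argument can be checked numerically. In the toy model below (our own construction, not the paper's simulation), perturbations seeded at successive times each grow exponentially; giving the seeds a wobbling phase makes the growing contributions partially cancel relative to the coherent, unmitigated case:

```python
import cmath
import math

def superposed_amplitude(gamma, n_seeds, dt, omega):
    """Magnitude at time t = n_seeds*dt of perturbations seeded every dt,
    each growing as exp(gamma*(t - t_k)) from a seed phase exp(i*omega*t_k).
    omega = 0 is the uncontrolled case: all seeds add coherently."""
    t = n_seeds * dt
    total = sum(cmath.exp(1j * omega * k * dt) * math.exp(gamma * (t - k * dt))
                for k in range(n_seeds))
    return abs(total)

static  = superposed_amplitude(0.5, 40, 0.1, omega=0.0)
wobbled = superposed_amplitude(0.5, 40, 0.1, omega=2 * math.pi)  # 1 Hz wobble
```

Because the wobbled terms have the same magnitudes but rotating phases, the triangle inequality guarantees a strictly smaller resultant than the coherent sum, mirroring the mitigation mechanism described above.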

  18. Robust dynamic mitigation of instabilities

    SciTech Connect

    Kawata, S.; Karino, T.

    2015-04-15

    A dynamic mitigation mechanism for instability growth was proposed and discussed in the paper [S. Kawata, Phys. Plasmas 19, 024503 (2012)]. In the present paper, the robustness of the dynamic instability mitigation mechanism is discussed further. The results presented here show that the dynamic instability mitigation mechanism is rather robust against changes in the phase, the amplitude, and the wavelength of the applied wobbling perturbation. In general, an instability grows from a perturbation of a physical quantity. Normally the perturbation phase is unknown, so only the instability growth rate is discussed. However, if the perturbation phase is known, the instability growth can be controlled by actively superposing perturbations: if the perturbation is induced by, for example, a driving-beam axis oscillation or wobbling, the perturbation phase can be controlled, and the instability growth is mitigated by the superposition of the growing perturbations.

  19. Robust, optimal subsonic airfoil shapes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2008-01-01

    Method, system, and product from application of the method, for the design of a subsonic airfoil shape, beginning with an arbitrary initial airfoil shape and incorporating one or more constraints on the airfoil geometric parameters and flow characteristics. The resulting design is robust against variations in airfoil dimensions and local airfoil shape introduced during the airfoil manufacturing process. A perturbation procedure provides a class of airfoil shapes, beginning with an initial airfoil shape.

  20. Drawing accuracy measured using polygons

    NASA Astrophysics Data System (ADS)

    Carson, Linda; Millard, Matthew; Quehl, Nadine; Danckert, James

    2013-03-01

    The study of drawing, for its own sake and as a probe into human visual perception, generally depends on ratings by human critics and self-reported expertise of the drawers. To complement those approaches, we have developed a geometric approach to analyzing drawing accuracy, one whose measures are objective, continuous and performance-based. Drawing geometry is represented by polygons formed by landmark points found in the drawing. Drawing accuracy is assessed by comparing the geometric properties of polygons in the drawn image to the equivalent polygon in a ground truth photo. There are four distinct properties of a polygon: its size, its position, its orientation and the proportionality of its shape. We can decompose error into four components and investigate how each contributes to drawing performance. We applied a polygon-based accuracy analysis to a pilot data set of representational drawings and found that an expert drawer outperformed a novice on every dimension of polygon error. The results of the pilot data analysis correspond well with the apparent quality of the drawings, suggesting that the landmark and polygon analysis is a method worthy of further study. Applying this geometric analysis to a within-subjects comparison of accuracy in the positive and negative space suggests there is a trade-off on dimensions of error. The performance-based analysis of geometric deformations will allow the study of drawing accuracy at different levels of organization, in a systematic and quantitative manner. We briefly describe the method and its potential applications to research in drawing education and visual perception.
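    The four-way decomposition described above can be sketched with a Procrustes-style alignment. The snippet below is illustrative only (the paper's exact error formulas are not reproduced here) and assumes corresponding landmark points in the drawing and the photo:

```python
import numpy as np

def polygon_error_components(drawn, truth):
    """Decompose the mismatch between two corresponding landmark polygons
    into position, size, orientation, and residual shape terms via a
    Procrustes-style alignment (no reflection handling in this sketch)."""
    D, T = np.asarray(drawn, float), np.asarray(truth, float)
    pos_err = float(np.linalg.norm(D.mean(0) - T.mean(0)))    # centroid offset
    Dc, Tc = D - D.mean(0), T - T.mean(0)
    sD, sT = np.linalg.norm(Dc), np.linalg.norm(Tc)
    size_ratio = float(sD / sT)                               # 1.0 = exact size
    Dn, Tn = Dc / sD, Tc / sT
    U, _, Vt = np.linalg.svd(Dn.T @ Tn)                       # 2-D Kabsch alignment
    R = Vt.T @ U.T
    angle = float(np.degrees(np.arctan2(R[1, 0], R[0, 0])))   # orientation error (deg)
    shape_err = float(np.linalg.norm(Dn @ R.T - Tn))          # residual after alignment
    return pos_err, size_ratio, angle, shape_err

square         = [(0, 0), (1, 0), (1, 1), (0, 1)]             # ground-truth landmarks
shifted_bigger = [(2, 2), (4, 2), (4, 4), (2, 4)]             # same shape, moved, doubled
pos, size, ang, shape = polygon_error_components(shifted_bigger, square)
```

    For the toy drawing above, the decomposition isolates a pure position error and a size ratio of 2, with zero orientation and shape error, which is exactly the separation of error dimensions the method is after.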

  1. Evaluating IRT- and CTT-Based Methods of Estimating Classification Consistency and Accuracy Indices from Single Administrations

    ERIC Educational Resources Information Center

    Deng, Nina

    2011-01-01

    Three decision consistency and accuracy (DC/DA) methods, the Livingston and Lewis (LL) method, LEE method, and the Hambleton and Han (HH) method, were evaluated. The purposes of the study were: (1) to evaluate the accuracy and robustness of these methods, especially when their assumptions were not well satisfied, (2) to investigate the "true"…

  2. High accuracy in situ radiometric mapping.

    PubMed

    Tyler, Andrew N

    2004-01-01

    In situ and airborne gamma ray spectrometry have been shown to provide rapid and spatially representative estimates of environmental radioactivity across a range of landscapes. However, one of the principal limitations of this technique has been the influence of changes in the vertical distribution of the source (e.g. 137Cs) on the observed photon fluence, resulting in a significant reduction in the accuracy of the in situ activity measurement. A flexible approach for single gamma photon emitting radionuclides is presented, which relies on the quantification of forward scattering (or the valley region between the full energy peak and Compton edge) within the gamma ray spectrum to compensate for changes in the 137Cs vertical activity distribution. This novel in situ method lends itself to the mapping of activity concentrations in environments that exhibit systematic changes in the vertical activity distribution. The robustness of this approach has been demonstrated in a salt marsh environment on the Solway coast, SW Scotland, with both a 7.6 cm x 7.6 cm NaI(Tl) detector and a 35% n-type HPGe detector. Application to ploughed field environments has also been demonstrated using an HPGe detector, including its application to the estimation of field moist bulk density and soil erosion measurement. Ongoing research work is also outlined.

  3. Robust flight control of rotorcraft

    NASA Astrophysics Data System (ADS)

    Pechner, Adam Daniel

    With recent design improvements in fixed-wing aircraft, there has been considerable interest in the design of robust flight control systems to compensate for the inherent instability necessary to achieve desired performance. Such systems are designed for maximum available retention of stability and performance in the presence of significant vehicle damage or system failure. The rotorcraft industry has shown similar interest in adopting these reconfigurable flight control schemes, specifically because of their ability to reject disturbance inputs and provide a significant amount of robustness for all but the most catastrophic of situations. The research summarized herein focuses on the extension of the pseudo-sliding mode control design procedure interpreted in the frequency domain. The technique is applied and simulated on two well-known helicopters: a simplified model of a hovering Sikorsky S-61, and the military's Black Hawk UH-60A, also produced by Sikorsky. Details of the S-61 model are readily available, and it was chosen because its dynamics can be limited to pitch and roll motion, reducing the model to the two degrees of freedom that are the minimum required to prove the validity of the pseudo-sliding control technique. The full-order model of a hovering Black Hawk is included both as a comparison to the S-61 design and as a means to demonstrate the scalability and effectiveness of the control technique on sophisticated systems where design robustness is of critical concern.

  4. MAPPING SPATIAL THEMATIC ACCURACY WITH FUZZY SETS

    EPA Science Inventory

    Thematic map accuracy is not spatially homogenous but variable across a landscape. Properly analyzing and representing spatial pattern and degree of thematic map accuracy would provide valuable information for using thematic maps. However, current thematic map accuracy measures (...

  5. Mu-Synthesis robust control of 3D bar structure vibration using piezo-stack actuators

    NASA Astrophysics Data System (ADS)

    Mystkowski, Arkadiusz; Koszewnik, Andrzej Piotr

    2016-10-01

    This paper presents an approach to Mu-Synthesis robust control of 3D bar structure vibration using piezo-stack actuators. A model of the 3D bar structure with uncertain parameters is presented as a multi-input multi-output (MIMO) dynamic system. Nominal stability and nominal performance of the open-loop 3D bar structure dynamic model are established. The uncertain model-based robust controller is derived subject to voltage control signal saturation and selected parameter perturbations. The robust control performance and the robustness of the system under the influence of uncertainties are evaluated using singular values and the small gain theorem. Finally, simulation investigations and experimental results show that the system response of the 3D bar structure dynamic model, with the perturbed parameters taken into account, meets the desired robust stability and system limits. The proposed robust controller ensures good closed-loop dynamics, robustness, and vibration attenuation.
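    The singular-value test mentioned above can be illustrated on a toy system. The sketch below (not the paper's 3D bar model; an invented second-order plant) grids the frequency axis, takes the peak singular value as an H-infinity norm estimate, and reads off the small-gain bound on admissible uncertainty:

```python
import numpy as np

def hinf_norm_grid(A, B, C, D, freqs):
    """Approximate the H-infinity norm of G(s) = C (sI - A)^-1 B + D by
    gridding the frequency axis and taking the peak singular value."""
    n = A.shape[0]
    peak = 0.0
    for wf in freqs:
        G = C @ np.linalg.solve(1j * wf * np.eye(n) - A, B) + D
        peak = max(peak, float(np.linalg.svd(G, compute_uv=False)[0]))
    return peak

# Toy plant G(s) = 1 / (s^2 + 0.2 s + 1): a lightly damped mode near w = 1.
A = np.array([[0.0, 1.0], [-1.0, -0.2]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))
gamma = hinf_norm_grid(A, B, C, D, np.linspace(0.01, 10.0, 2000))

# Small gain theorem: the feedback loop of G with an uncertainty Delta is
# robustly stable whenever ||G||_inf * ||Delta||_inf < 1.
delta_bound = 1.0 / gamma
```

    For this lightly damped plant the peak gain is about 5 near w = 1 rad/s, so the small gain theorem guarantees robust stability only for uncertainties with an infinity norm below roughly 0.2.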

  6. Testing robustness of relative complexity measure method constructing robust phylogenetic trees for Galanthus L. using the relative complexity measure

    PubMed Central

    2013-01-01

    Background Most phylogeny analysis methods based on molecular sequences use multiple alignment, where the quality of the alignment, which depends on the alignment parameters, determines the accuracy of the resulting trees. Different parameter combinations chosen for the multiple alignment may result in different phylogenies. A new non-alignment-based approach, the Relative Complexity Measure (RCM), has been introduced to tackle this problem and proven to work on fungi and mitochondrial DNA. Results In this work, we present an application of the RCM method to reconstruct robust phylogenetic trees using sequence data for the genus Galanthus obtained from different regions in Turkey. Phylogenies have been analyzed using nuclear and chloroplast DNA sequences. Results showed that the tree obtained from nuclear ribosomal RNA gene sequences was more robust, while the tree obtained from the chloroplast DNA showed a higher degree of variation. Conclusions Phylogenies generated by the Relative Complexity Measure were found to be robust, and the results of RCM were more reliable than those of the compared techniques. In particular, RCM seems a reasonable way to overcome MSA-based problems and a good alternative to MSA-based phylogenetic analysis. We believe our method will become a mainstream phylogeny construction method, especially for highly variable sequence families where the accuracy of the MSA heavily depends on the alignment parameters. PMID:23323678
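    RCM itself is defined via Lempel-Ziv-style relative complexity, but the alignment-free idea can be illustrated with the closely related normalized compression distance. In this sketch zlib stands in for a real complexity estimator, and the sequences are invented:

```python
import zlib

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance: an alignment-free dissimilarity.
    zlib stands in here for a proper complexity estimator; RCM itself uses
    Lempel-Ziv-style relative complexity rather than zlib."""
    ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

# Invented sequences: s2 is a near-copy of s1, s3 has a different repeat unit.
s1 = b"ACGTACGTACGTTTGACGTACGTACGTTTGA" * 8
s2 = s1.replace(b"TTG", b"TCG")
s3 = b"GATTACAGATTACAGGCGCGCTTAACGGTAC" * 8
```

    No alignment parameters are involved: the near-copy s2 ends up closer to s1 than the unrelated s3 purely because their concatenation compresses better.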

  7. Contribution of Sample Processing to Variability and Accuracy of the Results of Pesticide Residue Analysis in Plant Commodities.

    PubMed

    Ambrus, Árpád; Buczkó, Judit; Hamow, Kamirán Á; Juhász, Viktor; Solymosné Majzik, Etelka; Szemánné Dobrik, Henriett; Szitás, Róbert

    2016-08-10

    Significant reduction of concentration of some pesticide residues and substantial increase of the uncertainty of the results derived from the homogenization of sample materials have been reported in scientific papers long ago. Nevertheless, performance of methods is frequently evaluated on the basis of only recovery tests, which exclude sample processing. We studied the effect of sample processing on accuracy and uncertainty of the measured residue values with lettuce, tomato, and maize grain samples applying mixtures of selected pesticides. The results indicate that the method is simple and robust and applicable in any pesticide residue laboratory. The analytes remaining in the final extract are influenced by their physical-chemical properties, the nature of the sample material, the temperature of comminution of sample, and the mass of test portion extracted. Consequently, validation protocols should include testing the effect of sample processing, and the performance of the complete method should be regularly checked within internal quality control. PMID:26755282

  8. Robust defect segmentation in woven fabrics

    SciTech Connect

    Sari-Sarraf, H.; Goddard, J.S. Jr.

    1997-12-01

    This paper describes a robust segmentation algorithm for the detection and localization of woven fabric defects. The essence of the presented segmentation algorithm is the localization of those events (i.e., defects) in the input images that disrupt the global homogeneity of the background texture. To this end, preprocessing modules, based on the wavelet transform and edge fusion, are employed with the objective of attenuating the background texture and accentuating the defects. Then, texture features are utilized to measure the global homogeneity of the output images. If these images are deemed to be globally nonhomogeneous (i.e., defects are present), a local roughness measure is used to localize the defects. The utility of this algorithm can be extended beyond the specific application in this work, that is, defect segmentation in woven fabrics. Indeed, in a general sense, this algorithm can be used to detect and to localize anomalies that reside in images characterized by ordered texture. The efficacy of this algorithm has been tested thoroughly under realistic conditions and as a part of an on-line fabric inspection system. Using over 3700 images of fabrics, containing 26 different types of defects, the overall detection rate of this approach was 89% with a localization accuracy of less than 0.2 inches and a false alarm rate of 2.5%.
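    As a crude, self-contained analogue of the two-stage idea (the real system uses wavelet preprocessing, edge fusion, and texture features rather than raw variance), one can score local "roughness" per tile and flag tiles that break the global homogeneity:

```python
import numpy as np

def flag_anomalous_tiles(image, tile=8, k=3.0):
    """Score local 'roughness' (variance) per tile and flag tiles whose
    roughness deviates from the global background statistics by more than
    k robust standard deviations (median/MAD). A crude stand-in for the
    paper's wavelet + edge-fusion + texture-feature pipeline."""
    h, w = image.shape
    rough = np.array([[image[i:i + tile, j:j + tile].var()
                       for j in range(0, w - tile + 1, tile)]
                      for i in range(0, h - tile + 1, tile)])
    med = np.median(rough)
    mad = np.median(np.abs(rough - med)) + 1e-12           # robust spread
    return np.abs(rough - med) > k * 1.4826 * mad          # True = defect-like

rng = np.random.default_rng(0)
fabric = rng.normal(0.0, 1.0, (64, 64))                    # homogeneous texture
fabric[24:40, 24:40] += rng.normal(0.0, 6.0, (16, 16))     # inserted defect
mask = flag_anomalous_tiles(fabric)
```

    The median/MAD thresholding mirrors the paper's premise: defects are events that disrupt the global homogeneity of an otherwise ordered texture, so the background statistics themselves define "normal".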

  9. A robust fluoroscope tracking (FTRAC) fiducial

    NASA Astrophysics Data System (ADS)

    Jain, Ameet K.; Mustufa, Tabish; Zhou, Yu; Burdette, E. C.; Chirikjian, Gregory S.; Fichtinger, Gabor

    2005-04-01

    Purpose: C-arm fluoroscopy is ubiquitous in contemporary surgery, but it lacks the ability to accurately reconstruct 3D information. A major obstacle in fluoroscopic reconstruction is discerning the pose of the X-ray image, in 3D space. Optical/magnetic trackers are prohibitively expensive, intrusive and cumbersome. Method: We present single-image-based fluoroscope tracking (FTRAC) with the use of an external radiographic fiducial consisting of a mathematically optimized set of points, lines, and ellipses. The fiducial encodes six degrees of freedom in a single image by creating a unique view from any direction. A non-linear optimizer can rapidly compute the pose of the fiducial using this image. The current embodiment has salient attributes: small dimensions (3 x 3 x 5 cm), it need not be close to the anatomy of interest and can be segmented automatically. Results: We tested the fiducial and the pose recovery method on synthetic data and also experimentally on a precisely machined mechanical phantom. Pose recovery had an error of 0.56 mm in translation and 0.33° in orientation. Object reconstruction had a mean error of 0.53 mm with 0.16 mm STD. Conclusion: The method offers accuracies similar to commercial tracking systems, and is sufficiently robust for intra-operative quantitative C-arm fluoroscopy.

  10. Robustness and constraints of ambient noise inversion.

    PubMed

    Arvelo, Juan I

    2008-02-01

    One of the most dominant sources of error in the estimation of sonar performance in shallow water is the geoacoustic description of the sea floor. As reviewed in this paper, various investigators have studied the possible use of ambient noise to infer some key parameters such as the critical angle, geoacoustic properties, or bottom loss. A simple measurement approach to infer the bottom loss from ambient noise measurement on a vertical line array (VLA) is very attractive from environmental and operational perspectives. This paper presents a sensitivity study conducted with simulations and measurements that demonstrates mitigating factors to maximize the accuracy of estimated bottom loss. This paper quantifies the robustness and operational constraints of this measurement approach using an ambient noise model that accounts for wind, shipping, and thermal noise. Also demonstrated are the effects of unaccounted water absorption, array tilt, nearby ship interference, flow noise, calibration error, and array deformation on sonar performance estimation. VLA measurements collected during the Asian Seas International Acoustics Experiment in May-June 2001 were also processed to validate the approach via comparisons with measured bottom loss and transmission loss.

  11. Robustness of Massively Parallel Sequencing Platforms.

    PubMed

    Kavak, Pınar; Yüksel, Bayram; Aksu, Soner; Kulekci, M Oguzhan; Güngör, Tunga; Hach, Faraz; Şahinalp, S Cenk; Alkan, Can; Sağıroğlu, Mahmut Şamil

    2015-01-01

    The improvements in high-throughput sequencing (HTS) technologies made clinical sequencing projects such as ClinSeq and Genomics England feasible. Although there are significant improvements in the accuracy and reproducibility of HTS-based analyses, the usability of these types of data for diagnostic and prognostic applications necessitates near-perfect data generation. To assess the usability of a widely used HTS platform for accurate and reproducible clinical applications in terms of robustness, we generated whole genome shotgun (WGS) sequence data from the genomes of two human individuals in two different genome sequencing centers. After analyzing the data to characterize SNPs and indels using the same tools (BWA, SAMtools, and GATK), we observed a significant number of discrepancies between the call sets. As expected, most of the disagreements between the call sets were found within genomic regions containing common repeats and segmental duplications, although a small fraction of the discordant variants were within exons and other functionally relevant regions such as promoters. We conclude that although HTS platforms are sufficiently powerful to provide data for first-pass clinical tests, the variant predictions still need to be confirmed using orthogonal methods before use in clinical applications.
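    The discrepancy analysis described reduces, at its simplest, to a set comparison of per-site genotype calls. The sketch below uses an invented in-memory representation rather than real VCF parsing:

```python
def concordance(calls_a, calls_b):
    """Fraction of the union of called sites on which two centres report
    the same genotype (a toy stand-in for full VCF comparison)."""
    shared = set(calls_a) & set(calls_b)
    agree = sum(calls_a[site] == calls_b[site] for site in shared)
    union = len(set(calls_a) | set(calls_b))
    return agree / union if union else 1.0

# Hypothetical call sets keyed by (chromosome, position); values invented.
centre1 = {("chr1", 1000): "A/G", ("chr1", 2000): "C/T", ("chr2", 500): "G/G"}
centre2 = {("chr1", 1000): "A/G", ("chr1", 2000): "C/C", ("chr2", 900): "T/T"}
rate = concordance(centre1, centre2)   # 1 agreement out of 4 distinct sites
```

    Discordance here comes both from sites called by only one centre and from shared sites with different genotypes, the same two failure modes the study reports.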

  12. A Gossip-based Energy Efficient Protocol for Robust In-network Aggregation in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Fauji, Shantanu

    We consider the problem of energy-efficient and fault-tolerant in-network aggregation for wireless sensor networks (WSNs). In-network aggregation is the process of aggregating data while collecting it from the sensors to the base station. This process should be energy efficient due to the limited energy at the sensors, and tolerant to the high failure rates common in sensor networks. Tree-based in-network aggregation protocols, although energy efficient, are not robust to network failures. Multipath routing protocols are robust to failures to a certain degree, but are not energy efficient due to the overhead of maintaining multiple paths. We propose a new protocol for in-network aggregation in WSNs that is energy efficient, achieves high lifetime, and is robust to changes in the network topology. Our protocol, the gossip-based protocol for in-network aggregation (GPIA), is based on the spreading of information via gossip. GPIA is not only adaptive to failures and changes in the network topology, but is also energy efficient. The energy efficiency of GPIA comes from all nodes being capable of selective message reception and from detecting convergence of the aggregation early. We experimentally show that GPIA provides significant improvement over competitors such as Ridesharing, Synopsis Diffusion, and the pure version of gossip. GPIA shows ten-fold, five-fold, and two-fold improvements in network lifetime over pure gossip, Synopsis Diffusion, and Ridesharing, respectively. Further, GPIA retains gossip's robustness to failures and improves upon the accuracy of Synopsis Diffusion and Ridesharing.
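    The gossip primitive underneath a protocol like GPIA can be sketched with push-sum averaging. This is a generic textbook version, not GPIA itself, which adds selective reception and early convergence detection on top:

```python
import random

def push_sum_gossip(values, rounds=200, seed=1):
    """Push-sum gossip averaging: every node keeps a (sum, weight) pair and
    repeatedly pushes half of it to a random peer; sum/weight converges to
    the global average at every node. Mass conservation is what makes the
    scheme robust: lost pairs delay convergence rather than corrupt it."""
    rng = random.Random(seed)
    n = len(values)
    s = [float(v) for v in values]   # running sums
    w = [1.0] * n                    # running weights
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)     # random peer (complete graph for simplicity)
            half_s, half_w = s[i] / 2.0, w[i] / 2.0
            s[i], w[i] = half_s, half_w
            s[j] += half_s
            w[j] += half_w
    return [si / wi for si, wi in zip(s, w)]

readings = [10, 20, 30, 40]          # sensor readings; true average is 25
estimates = push_sum_gossip(readings)
```

    Every node ends up with its own estimate of the global average, with no tree or fixed routing structure to break, which is the robustness property the abstract contrasts against tree-based aggregation.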

  13. Robust multisource sound localization using temporal power fusion

    NASA Astrophysics Data System (ADS)

    Aarabi, Parham

    2001-03-01

    In the past several years, many different algorithms have attempted to address the problem of robust multi-source time difference of arrival (TDOA) estimation, which is necessary for sound localization. Different approaches, including generalized cross-correlation, multiple signal classification (MUSIC), and the maximum likelihood (ML) approach, have made different trade-offs between robustness and efficiency. A new approach presented here offers a much more efficient yet robust mechanism for TDOA estimation. This approach iteratively uses small sound-signal segments to compute a local cross-correlation-based TDOA estimate. All of the local estimates are then combined to form the probability density function of the TDOA. Because each source's power exceeds that of the others over some subset of the local signal segments, the TDOA corresponding to each source is associated with a peak in the TDOA probability density function. In this way, the TDOAs of several different sources, along with their signal strengths, can be estimated. A real-time implementation of the proposed approach is used to show its improved accuracy and robustness. The system was consistently able to correctly localize sound sources with SNRs as low as 3 dB.
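    The segment-and-vote idea can be sketched in a few lines: estimate a cross-correlation TDOA per short segment, then pool the local estimates into a histogram whose peaks mark the sources. The toy below is single-source with synthetic signals and integer sample-domain lags:

```python
import numpy as np

def tdoa_histogram(x, y, seg_len=256, max_lag=20):
    """Per-segment cross-correlation TDOA votes, pooled into a histogram
    that approximates the TDOA probability density function."""
    lags = np.arange(-max_lag, max_lag + 1)
    counts = np.zeros(len(lags))
    for start in range(0, len(x) - seg_len + 1, seg_len):
        xs = x[start:start + seg_len] - x[start:start + seg_len].mean()
        ys = y[start:start + seg_len] - y[start:start + seg_len].mean()
        # cc[l] = sum over the overlap of xs[n] * ys[n + l]
        cc = [np.dot(xs[max(0, -l):seg_len - max(0, l)],
                     ys[max(0, l):seg_len - max(0, -l)]) for l in lags]
        counts[int(np.argmax(cc))] += 1          # this segment's local vote
    return lags, counts

rng = np.random.default_rng(3)
src = rng.normal(0, 1, 4096)                     # broadband source signal
delay = 7                                        # true inter-microphone delay (samples)
mic1 = src + 0.1 * rng.normal(0, 1, 4096)
mic2 = np.roll(src, delay) + 0.1 * rng.normal(0, 1, 4096)
lags, counts = tdoa_histogram(mic1, mic2)
best = lags[np.argmax(counts)]                   # peak of the pooled histogram
```

    With several sources, segments dominated by different sources vote for different lags, so the histogram develops one peak per source, which is exactly how the approach separates them.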

  14. Optimal Robust Motion Controller Design Using Multiobjective Genetic Algorithm

    PubMed Central

    Svečko, Rajko

    2014-01-01

    This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. Motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in the form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper involves novel objectives for robustness and performance assessments for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. Regional pole placement method is presented with the aims of controllers' structures simplification and their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with additional admissible region of the optimized pole location. Polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of closed loops. The design of controllers and multiobjective optimization procedure involve a set of the objectives, which are optimized simultaneously with a genetic algorithm—differential evolution. PMID:24987749

  15. Optimal robust motion controller design using multiobjective genetic algorithm.

    PubMed

    Sarjaš, Andrej; Svečko, Rajko; Chowdhury, Amor

    2014-01-01

    This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. Motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in the form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper involves novel objectives for robustness and performance assessments for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. Regional pole placement method is presented with the aims of controllers' structures simplification and their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with additional admissible region of the optimized pole location. Polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of closed loops. The design of controllers and multiobjective optimization procedure involve a set of the objectives, which are optimized simultaneously with a genetic algorithm-differential evolution. PMID:24987749

  16. Evaluation of selection index: application to the choice of an indirect multitrait selection index for soybean breeding.

    PubMed

    Bouchez, A; Goffinet, B

    1990-02-01

    Selection indices can be used to predict one trait from information available on several traits in order to improve the prediction accuracy. Plant or animal breeders are interested in selecting only the best individuals, and need to compare the efficiency of different trait combinations in order to choose the index ensuring the best prediction quality for individual values. As the usual tools for index evaluation do not remain unbiased in all cases, we propose a robust way of evaluation by means of an estimator of the mean-square error of prediction (EMSEP). This estimator remains valid even when parameters are not known, as usually assumed, but are estimated. EMSEP is applied to the choice of an indirect multitrait selection index at the F5 generation of a classical breeding scheme for soybeans. Best predictions for precocity are obtained by means of indices using only part of the available information. PMID:24226228

  17. High accuracy flexural hinge development

    NASA Astrophysics Data System (ADS)

    Santos, I.; Ortiz de Zárate, I.; Migliorero, G.

    2005-07-01

    This document provides a synthesis of the technical results obtained in the frame of the HAFHA (High Accuracy Flexural Hinge Assembly) development performed by SENER (in charge of design, development, manufacturing, and testing at component and mechanism levels) with EADS Astrium as subcontractor (in charge of making an inventory of candidate applications among existing and emerging projects, establishing the requirements, and performing system-level testing) under ESA contract. The purpose of this project has been to develop a competitive technology for a flexural pivot, usable in highly accurate and dynamic pointing/scanning mechanisms. Compared with other solutions (e.g. magnetic or ball bearing technologies), flexural hinges are the appropriate technology for accurately guiding a mobile payload over a limited angular range around one rotation axis.

  18. Genotype by environment interaction and breeding for robustness in livestock

    PubMed Central

    Rauw, Wendy M.; Gomez-Raya, Luis

    2015-01-01

    The increasing size of the human population is projected to result in an increase in meat consumption. However, at the same time, the dominant position of meat as the center of meals is on the decline. Modern objections to the consumption of meat include public concerns with animal welfare in livestock production systems. Animal breeding practices have become part of the debate since it became recognized that animals in a population that have been selected for high production efficiency are more at risk for behavioral, physiological and immunological problems. As a solution, animal breeding practices need to include selection for robustness traits, which can be implemented through the use of reaction norms analysis, or through the direct inclusion of robustness traits in the breeding objective and in the selection index. This review gives an overview of genotype × environment interactions (the influence of the environment, reaction norms, phenotypic plasticity, canalization, and genetic homeostasis), reaction norms analysis in livestock production, options for selection for increased levels of production and against environmental sensitivity, and direct inclusion of robustness traits in the selection index. Ethical considerations of breeding for improved animal welfare are discussed. The discussion on animal breeding practices has been initiated and is very much alive today. This positive trend is part of the sustainable food production movement that aims at feeding 9.15 billion people not just in the near future but also beyond. PMID:26539207

  19. Objective analysis of the Gulf Stream thermal front: methods and accuracy. Technical report

    SciTech Connect

    Tracey, K.L.; Friedlander, A.I.; Watts, R.

    1987-12-01

    The objective-analysis (OA) technique was adapted by Watts and Tracey in order to map the thermal frontal zone of the Gulf Stream. Here, the authors test the robustness of the adapted OA technique to the selection of four control parameters: mean field, standard deviation field, correlation function, and decimation time. Output OA maps of the thermocline depth are most affected by the choice of mean field, with the most-realistic results produced using a time-averaged mean. The choice of the space-time correlation function has a large influence on the size of the estimated error fields, which are associated with the OA maps. The smallest errors occur using the analytic function based on 4 years of inverted echo sounder data collected in the same region of the Gulf Stream. Variations in the selection of the standard deviation field and decimation time have little effect on the output OA maps. Accuracy of the output OA maps is determined by comparing them with independent measurements of the thermal field. Two cases are evaluated: standard maps and high-temporal-resolution maps, with decimation times of 2 days and 1 day, respectively. Standard deviations (STD) between the standard maps at the 15% estimated error level and the XBTs (AXBTs) are determined to be 47-53 m. Comparisons of the high-temporal-resolution maps at the 20% error level with the XBTs (AXBTs) give STD differences of 47 m.

  20. A comparison of various optimization algorithms of protein-ligand docking programs by fitness accuracy.

    PubMed

    Guo, Liyong; Yan, Zhiqiang; Zheng, Xiliang; Hu, Liang; Yang, Yongliang; Wang, Jin

    2014-07-01

    In protein-ligand docking, an optimization algorithm is used to find the best binding pose of a ligand against a protein target. This algorithm plays a vital role in determining the docking accuracy. To evaluate the relative performance of different optimization algorithms and provide guidance for real applications, we performed a comparative study on six efficient optimization algorithms, containing two evolutionary algorithm (EA)-based optimizers (LGA, DockDE) and four particle swarm optimization (PSO)-based optimizers (SODock, varCPSO, varCPSO-ls, FIPSDock), which were implemented into the protein-ligand docking program AutoDock. We unified the objective functions by applying the same scoring function, and built a new fitness accuracy as the evaluation criterion that incorporates optimization accuracy, robustness, and efficiency. The varCPSO and varCPSO-ls algorithms show high efficiency with fast convergence speed. However, their accuracy is not optimal, as they cannot reach very low energies. SODock has the highest accuracy and robustness. In addition, SODock shows good performance in efficiency when optimizing drug-like ligands with less than ten rotatable bonds. FIPSDock shows excellent robustness and is close to SODock in accuracy and efficiency. In general, the four PSO-based algorithms show superior performance than the two EA-based algorithms, especially for highly flexible ligands. Our method can be regarded as a reference for the validation of new optimization algorithms in protein-ligand docking.

  1. Municipal water consumption forecast accuracy

    NASA Astrophysics Data System (ADS)

    Fullerton, Thomas M.; Molina, Angel L.

    2010-06-01

    Municipal water consumption planning is an active area of research because of infrastructure construction and maintenance costs, supply constraints, and water quality assurance. In spite of that, relatively few water forecast accuracy assessments have been completed to date, although some internal documentation may exist as part of the proprietary "grey literature." This study utilizes a data set of previously published municipal consumption forecasts to partially fill that gap in the empirical water economics literature. Previously published municipal water econometric forecasts for three public utilities are examined for predictive accuracy against two random walk benchmarks commonly used in regional analyses. Descriptive metrics used to quantify forecast accuracy include root-mean-square error and Theil inequality statistics. Formal statistical assessments are completed using four-pronged error differential regression F tests. Similar to studies for other metropolitan econometric forecasts in areas with similar demographic and labor market characteristics, model predictive performances for the municipal water aggregates in this effort are mixed for each of the municipalities included in the sample. Given the competitiveness of the benchmarks, analysts should employ care when utilizing econometric forecasts of municipal water consumption for planning purposes, comparing them to recent historical observations and trends to ensure reliability. Comparative results using data from other markets, including regions facing differing labor and demographic conditions, would also be helpful.
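    The two descriptive metrics named above are straightforward to compute. The sketch below uses invented consumption numbers and the common "no-change" random-walk benchmark; a ratio below 1 means the model beats the benchmark:

```python
import numpy as np

def rmse(forecast, actual):
    forecast, actual = np.asarray(forecast, float), np.asarray(actual, float)
    return float(np.sqrt(np.mean((forecast - actual) ** 2)))

def theil_u(model_fc, naive_fc, actual):
    """Theil-style inequality ratio: model RMSE over benchmark RMSE."""
    return rmse(model_fc, actual) / rmse(naive_fc, actual)

consumption = [100, 104, 103, 108, 112, 111, 115]   # observed series (invented)
model_fc    = [103, 105, 107, 111, 112, 114]        # model's one-step-ahead forecasts
actual      = consumption[1:]
naive_fc    = consumption[:-1]                      # random walk: predict "no change"
u = theil_u(model_fc, naive_fc, actual)
```

    The study's point is that for real municipal data this ratio often fails to fall clearly below 1, which is why the random walk benchmarks are described as competitive.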

  2. Robust kernel-based tracking with multiple subtemplates in vision guidance system.

    PubMed

    Yan, Yuzhuang; Huang, Xinsheng; Xu, Wanying; Shen, Lurong

    2012-01-01

    The mean shift algorithm has achieved considerable success in target tracking due to its simplicity and robustness. However, its lack of spatial information may prevent it from reaching high tracking precision. This might be even worse when the target is scale-variant and the sequences are gray-level. This paper presents a novel multiple-subtemplate-based tracking algorithm for the terminal guidance application. By applying a separate tracker to each subtemplate, it can handle more complicated situations such as rotation, scaling, and partial occlusion of the target. The innovations include: (1) an optimal subtemplate selection algorithm is designed, which ensures that the selected subtemplates maximally represent the information of the entire template while having the least mutual redundancy; (2) based on the individual tracking results and the spatial-constraint prior on those subtemplates, a Gaussian-weighted voting method is proposed to locate the target center; (3) the optimal scale factor is determined by maximizing the voting results among the scale searching layers, which avoids the complicated threshold-setting problem. Experiments on videos with static scenes show that the proposed method greatly improves the tracking accuracy compared to the original mean shift algorithm.
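The abstract does not spell out the voting formula, but a natural reading of innovation (2) is a weighted average in which each subtemplate's center vote is down-weighted by its disagreement with the spatial-constraint prior. A sketch under that assumption (the weighting kernel and `sigma` are illustrative choices, not taken from the paper):

```python
import math

def gaussian_weighted_center(estimates, priors, sigma=5.0):
    """Fuse per-subtemplate target-center votes.

    estimates: list of (x, y) center votes, one per subtemplate tracker
    priors:    list of (x, y) centers predicted by the spatial-constraint prior
    Each vote is weighted by a Gaussian of its distance to its prior, so
    subtemplates that violate the learned spatial layout contribute less.
    """
    wx = wy = wsum = 0.0
    for (ex, ey), (px, py) in zip(estimates, priors):
        d2 = (ex - px) ** 2 + (ey - py) ** 2
        w = math.exp(-d2 / (2.0 * sigma ** 2))
        wx += w * ex
        wy += w * ey
        wsum += w
    return wx / wsum, wy / wsum

# Three subtemplates agree near (50, 40); one outlier at (90, 10) is down-weighted
votes  = [(50.0, 40.0), (51.0, 41.0), (49.0, 39.0), (90.0, 10.0)]
priors = [(50.0, 40.0)] * 4
cx, cy = gaussian_weighted_center(votes, priors)
```

The outlier's weight is effectively zero, so the fused center stays near (50, 40) even though one of four trackers failed.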

  3. Assessing Predictive Properties of Genome-Wide Selection in Soybeans.

    PubMed

    Xavier, Alencar; Muir, William M; Rainey, Katy Martin

    2016-01-01

    Many economically important traits in plant breeding have low heritability or are difficult to measure. For these traits, genomic selection has attractive features and may boost genetic gains. Our goal was to evaluate alternative scenarios to implement genomic selection for yield components in soybean (Glycine max (L.) Merr.). We used a nested association panel with cross validation to evaluate the impacts of training population size, genotyping density, and prediction model on the accuracy of genomic prediction. Our results indicate that training population size was the factor most relevant to improvement in genome-wide prediction, with the greatest improvement observed in training sets of up to 2000 individuals. We discuss assumptions that influence the choice of the prediction model. Although alternative models had minor impacts on prediction accuracy, the most robust prediction model was the combination of reproducing kernel Hilbert space regression and BayesB. Higher genotyping density marginally improved accuracy. Our study finds that breeding programs seeking efficient genomic selection in soybeans would best allocate resources by investing in a representative training set. PMID:27317786

  4. A novel genomic selection method combining GBLUP and LASSO.

    PubMed

    Li, Hengde; Wang, Jingwei; Bao, Zhenmin

    2015-06-01

    Genetic prediction of quantitative traits is a critical task in plant and animal breeding. Genomic selection is an accurate and efficient method of estimating genetic merits by using high-density genome-wide single nucleotide polymorphisms (SNPs). In the framework of linear mixed models, we extended genomic best linear unbiased prediction (GBLUP) by including additional quantitative trait locus (QTL) information extracted from high-throughput SNPs using the least absolute shrinkage and selection operator (LASSO). GBLUP was combined with three LASSO variants used for detecting QTLs: standard LASSO (SLGBLUP), adaptive LASSO (ALGBLUP), and elastic net (ENGBLUP). The detected QTLs were fitted as fixed effects, and the remaining SNPs were fitted using a realized genetic relationship matrix. Simulations performed under distinct scenarios revealed that (1) the prediction accuracy of SLGBLUP was the lowest; (2) the prediction accuracies of ALGBLUP and ENGBLUP were equivalent to or higher than that of GBLUP, except under scenarios in which the number of QTLs was large; and (3) the persistence of prediction accuracy over generations was strongest in the case of ENGBLUP. Building on the favorable computational characteristics of GBLUP, ENGBLUP enables robust modeling and efficient computation to be performed for genomic selection.

  5. Cost and accuracy of advanced breeding trial designs in apple

    PubMed Central

    Harshman, Julia M; Evans, Kate M; Hardner, Craig M

    2016-01-01

    Trialing advanced candidates in tree fruit crops is expensive due to the long-term nature of the planting and labor-intensive evaluations required to make selection decisions. How closely the trait evaluations approximate the true trait value needs balancing with the cost of the program. Designs of field trials of advanced apple candidates in which reduced number of locations, the number of years and the number of harvests per year were modeled to investigate the effect on the cost and accuracy in an operational breeding program. The aim was to find designs that would allow evaluation of the most additional candidates while sacrificing the least accuracy. Critical percentage difference, response to selection, and correlated response were used to examine changes in accuracy of trait evaluations. For the quality traits evaluated, accuracy and response to selection were not substantially reduced for most trial designs. Risk management influences the decision to change trial design, and some designs had greater risk associated with them. Balancing cost and accuracy with risk yields valuable insight into advanced breeding trial design. The methods outlined in this analysis would be well suited to other horticultural crop breeding programs. PMID:27019717

  6. Recent Progress toward Robust Photocathodes

    SciTech Connect

    Mulhollan, G. A.; Bierman, J. C.

    2009-08-04

    RF photoinjectors for next generation spin-polarized electron accelerators require photo-cathodes capable of surviving RF gun operation. Free electron laser photoinjectors can benefit from more robust visible light excited photoemitters. A negative electron affinity gallium arsenide activation recipe has been found that diminishes its background gas susceptibility without any loss of near bandgap photoyield. The highest degree of immunity to carbon dioxide exposure was achieved with a combination of cesium and lithium. Activated amorphous silicon photocathodes evince advantageous properties for high current photoinjectors including low cost, substrate flexibility, visible light excitation and greatly reduced gas reactivity compared to gallium arsenide.

  7. Robust Software Architecture for Robots

    NASA Technical Reports Server (NTRS)

    Aghazanian, Hrand; Baumgartner, Eric; Garrett, Michael

    2009-01-01

    Robust Real-Time Reconfigurable Robotics Software Architecture (R4SA) is the name of both a software architecture and the software that embodies it. The architecture was conceived in the spirit of current practice in designing modular, hard real-time aerospace systems. It facilitates the integration of new sensory, motor, and control software modules into the software of a given robotic system. R4SA was developed for initial application aboard exploratory mobile robots on Mars, but is adaptable to terrestrial robotic systems, real-time embedded computing systems in general, and robotic toys.

  8. Interpersonal Deception: V. Accuracy in Deception Detection.

    ERIC Educational Resources Information Center

    Burgoon, Judee K.; And Others

    1994-01-01

    Investigates the influence of several factors on accuracy in detecting truth and deceit. Found that accuracy was much higher on truth than deception, novices were more accurate than experts, accuracy depended on type of deception and whether suspicion was present or absent, suspicion impaired accuracy for experts, and questions strategy…

  9. Curation accuracy of model organism databases.

    PubMed

    Keseler, Ingrid M; Skrzypek, Marek; Weerasinghe, Deepika; Chen, Albert Y; Fulcher, Carol; Li, Gene-Wei; Lemmer, Kimberly C; Mladinich, Katherine M; Chow, Edmond D; Sherlock, Gavin; Karp, Peter D

    2014-01-01

    Manual extraction of information from the biomedical literature-or biocuration-is the central methodology used to construct many biological databases. For example, the UniProt protein database, the EcoCyc Escherichia coli database and the Candida Genome Database (CGD) are all based on biocuration. Biological databases are used extensively by life science researchers, as online encyclopedias, as aids in the interpretation of new experimental data and as gold standards for the development of new bioinformatics algorithms. Although manual curation has been assumed to be highly accurate, we are aware of only one previous study of biocuration accuracy. We assessed the accuracy of EcoCyc and CGD by manually selecting curated assertions within randomly chosen EcoCyc and CGD gene pages and by then validating that the data found in the referenced publications supported those assertions. A database assertion is considered to be in error if that assertion could not be found in the publication cited for that assertion. We identified 10 errors in the 633 facts that we validated across the two databases, for an overall error rate of 1.58%, and individual error rates of 1.82% for CGD and 1.40% for EcoCyc. These data suggest that manual curation of the experimental literature by Ph.D.-level scientists is highly accurate. Database URL: http://ecocyc.org/, http://www.candidagenome.org/
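The pooled figure above (10 errors in 633 validated facts) is a binomial proportion, so it is natural to attach a confidence interval when comparing curation pipelines. A sketch reproducing the headline rate and adding a 95% Wilson score interval (the interval is illustrative; the study abstract reports only point estimates, and the per-database denominators are not given here):

```python
import math

def wilson_interval(errors, n, z=1.96):
    """95% Wilson score interval for a binomial proportion errors/n."""
    p = errors / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

errors, checked = 10, 633            # pooled counts reported in the study
rate = errors / checked              # ~0.0158, i.e. the 1.58% headline figure
lo95, hi95 = wilson_interval(errors, checked)
```

With only 10 observed errors the interval is fairly wide (roughly 0.9% to 2.9%), which is worth keeping in mind when comparing the CGD and EcoCyc point estimates.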

  10. Robust Inflation from fibrous strings

    NASA Astrophysics Data System (ADS)

    Burgess, C. P.; Cicoli, M.; de Alwis, S.; Quevedo, F.

    2016-05-01

    Successful inflationary models should (i) describe the data well; (ii) arise generically from sensible UV completions; (iii) be insensitive to detailed fine-tunings of parameters and (iv) make interesting new predictions. We argue that a class of models with these properties is characterized by relatively simple potentials with a constant term and negative exponentials. We here continue earlier work exploring UV completions for these models—including the key (though often ignored) issue of modulus stabilisation—to assess the robustness of their predictions. We show that string models where the inflaton is a fibration modulus seem to be robust due to an effective rescaling symmetry, and fairly generic since most known Calabi-Yau manifolds are fibrations. This class of models is characterized by a generic relation between the tensor-to-scalar ratio r and the spectral index ns of the form r ∝ (ns − 1)², where the proportionality constant depends on the nature of the effects used to develop the inflationary potential and the topology of the internal space. In particular we find that the largest values of the tensor-to-scalar ratio that can be obtained by generalizing the original set-up are of order r ≲ 0.01. We contrast this general picture with specific popular models, such as the Starobinsky scenario and α-attractors. Finally, we argue that the self-consistency of large-field inflationary models can strongly constrain non-supersymmetric inflationary mechanisms.

  11. The Robustness of Acoustic Analogies

    NASA Technical Reports Server (NTRS)

    Freund, J. B.; Lele, S. K.; Wei, M.

    2004-01-01

    Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations N(q) = 0 into a nominal sound source S(q) and a sound propagation operator L such that L(q) = S(q), where q is the vector of flow variables. In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since S cannot be perfectly modeled, analogies that are less sensitive to errors in S are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of a base flow used in constructing L give different sources S and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.

  12. Quantitative code accuracy evaluation of ISP33

    SciTech Connect

    Kalli, H.; Miwrrin, A.; Purhonen, H.

    1995-09-01

    Aiming at quantifying code accuracy, a methodology based on the Fast Fourier Transform has been developed at the University of Pisa, Italy. The paper deals with a short presentation of the methodology and its application to pre-test and post-test calculations submitted to the International Standard Problem ISP33. This was a double-blind natural circulation exercise with a stepwise reduced primary coolant inventory, performed in the PACTEL facility in Finland. PACTEL is a 1/305 volumetrically scaled, full-height simulator of the Russian-type VVER-440 pressurized water reactor, with horizontal steam generators and loop seals in both cold and hot legs. Fifteen foreign organizations participated in ISP33, with 21 blind calculations and 20 post-test calculations; altogether, 10 different thermal-hydraulic codes and code versions were used. The results of the application of the methodology to nine selected measured quantities are summarized.
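The abstract does not reproduce the formulas, but the Pisa FFT-based method (FFTBM) is commonly summarized by an average-amplitude figure of merit, AA = Σ|F(error)| / Σ|F(measured)|, computed over the spectra of the calculation error and of the measured signal. A sketch under that assumption, using a naive O(n²) DFT so the example stays dependency-free (a real application would use an FFT and the method's frequency cutoff and weighting, omitted here):

```python
import cmath

def dft_mag(x):
    """Magnitudes of the discrete Fourier transform (naive O(n^2) DFT)."""
    n = len(x)
    return [abs(sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                    for k in range(n)))
            for j in range(n)]

def average_amplitude(measured, calculated):
    """FFTBM-style figure of merit: AA = sum|F(error)| / sum|F(measured)|.

    AA near 0 means the calculation tracks the measurement closely;
    values approaching 1 indicate poor code accuracy.
    """
    err = [c - m for m, c in zip(measured, calculated)]
    return sum(dft_mag(err)) / sum(dft_mag(measured))

# Hypothetical normalized pressure trace and a close code calculation
measured  = [1.0, 0.9, 0.8, 0.75, 0.7, 0.68, 0.66, 0.65]
good_calc = [1.0, 0.91, 0.79, 0.76, 0.7, 0.67, 0.66, 0.64]
aa = average_amplitude(measured, good_calc)
```

Because the errors are an order of magnitude smaller than the signal, AA comes out well below 0.1, which in FFTBM terms would rank as a very accurate calculation.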

  13. Accuracy of lineaments mapping from space

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M.

    1989-01-01

    The use of Landsat and other space imaging systems for lineaments detection is analyzed in terms of their effectiveness in recognizing and mapping fractures and faults, and the results of several studies providing a quantitative assessment of lineaments mapping accuracies are discussed. The cases under investigation include a Landsat image of the surface overlying a part of the Anadarko Basin of Oklahoma, the Landsat images and selected radar imagery of major lineaments systems distributed over much of Canadian Shield, and space imagery covering a part of the East African Rift in Kenya. It is demonstrated that space imagery can detect a significant portion of a region's fracture pattern, however, significant fractions of faults and fractures recorded on a field-produced geological map are missing from the imagery as it is evident in the Kenya case.

  14. Using checklists and algorithms to improve qualitative exposure judgment accuracy.

    PubMed

    Arnold, Susan F; Stenzel, Mark; Drolet, Daniel; Ramachandran, Gurumurthy

    2016-01-01

    Most exposure assessments are conducted without the aid of robust personal exposure data and are based instead on qualitative inputs such as education and experience, training, documentation on the process chemicals, tasks and equipment, and other information. Qualitative assessments determine whether there is any follow-up, and influence the type that occurs, such as quantitative sampling, worker training, and implementing exposure and risk management measures. Accurate qualitative exposure judgments ensure appropriate follow-up that in turn ensures appropriate exposure management. Studies suggest that qualitative judgment accuracy is low. A qualitative exposure assessment Checklist tool was developed to guide the application of a set of heuristics to aid decision making. Practicing hygienists (n = 39) and novice industrial hygienists (n = 8) were recruited for a study evaluating the influence of the Checklist on exposure judgment accuracy. Participants generated 85 pre-training judgments and 195 Checklist-guided judgments. Pre-training judgment accuracy was low (33%) and not statistically significantly different from random chance. A tendency for IHs to underestimate the true exposure was observed. Exposure judgment accuracy improved significantly (p <0.001) to 63% when aided by the Checklist. Qualitative judgments guided by the Checklist tool were categorically accurate or over-estimated the true exposure by one category 70% of the time. The overall magnitude of exposure judgment precision also improved following training. Fleiss' κ, evaluating inter-rater agreement between novice assessors was fair to moderate (κ = 0.39). Cohen's weighted and unweighted κ were good to excellent for novice (0.77 and 0.80) and practicing IHs (0.73 and 0.89), respectively. Checklist judgment accuracy was similar to quantitative exposure judgment accuracy observed in studies of similar design using personal exposure measurements, suggesting that the tool could be useful in

  16. Attack robustness of cascading load model in interdependent networks

    NASA Astrophysics Data System (ADS)

    Wang, Jianwei; Wu, Yuedan; Li, Yun

    2015-08-01

    Considering the weight of a node and the coupling strength of two interdependent nodes in different networks, we propose a method to assign the initial load of a node and construct a new cascading load model in interdependent networks. Assuming that a node in one network fails if its degree is 0, if its dependent node in the other network is removed from the network, or if the load on it exceeds its capacity, we study the influence of assortative link (AL) and disassortative link (DL) patterns between two networks on the robustness of the interdependent networks against cascading failures. For better evaluation of network robustness, we present a new measure, taken from the local perspective of a node, to quantify network resiliency after targeted attacks. We show that AL patterns between two networks can improve the robustness of the entire interdependent system. Moreover, we show how to efficiently allocate the initial load and select nodes for protection so as to maximize network robustness against cascading failures. In addition, we find that nodes with lower load are more likely to trigger cascading propagation when the load distribution is more even, and we provide a reasonable explanation for this. Our findings can help in designing robust interdependent networks and suggest how to optimize the allocation of protection resources.
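The abstract does not give the full interdependent model, but the mechanics of a cascading load model can be illustrated on a single network with a common Motter–Lai-style setup: initial load L_i = k_i^θ from the degree k_i, capacity C_i = (1 + α)L_i, and equal redistribution of a failed node's load to its surviving neighbors. The graph, parameters, and redistribution rule below are illustrative assumptions, not the paper's model:

```python
def cascade(adj, theta=1.0, alpha=0.2, attacked=()):
    """Degree-based cascading failure sketch on an undirected graph.

    adj: dict node -> set of neighbors.
    Initial load L_i = k_i ** theta, capacity C_i = (1 + alpha) * L_i.
    When a node fails, its load is redistributed equally among surviving
    neighbors; any node pushed over capacity fails in the next round.
    Returns the set of surviving nodes.
    """
    load = {v: len(adj[v]) ** theta for v in adj}
    cap = {v: (1 + alpha) * load[v] for v in adj}
    failed = set(attacked)
    frontier = set(attacked)
    while frontier:
        nxt = set()
        for v in frontier:
            alive = [u for u in adj[v] if u not in failed]
            if alive:
                share = load[v] / len(alive)
                for u in alive:
                    load[u] += share
                    if load[u] > cap[u]:
                        nxt.add(u)
        failed |= nxt
        frontier = nxt
    return set(adj) - failed

# Star graph: hub 0 linked to leaves 1-4; attacking the hub overloads every leaf
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
survivors = cascade(star, attacked=[0])
```

With α = 0.2 the hub's load of 4 dumps a share of 1 onto each leaf of capacity 1.2, so the whole star collapses; with no attack, all five nodes survive. This is exactly the kind of sensitivity to initial-load allocation that the paper's protection strategy targets.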

  17. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  18. Determining gas-meter accuracy

    SciTech Connect

    Valenti, M.

    1997-03-01

    This article describes how engineers at the Metering Research Facility are helping natural-gas companies improve pipeline efficiency by evaluating and refining the instruments used for measuring and setting prices. Accurate metering of natural gas is more important than ever as deregulation subjects pipeline companies to competition. To help improve that accuracy, the Gas Research Institute (GRI) in Chicago has sponsored the Metering Research Facility (MRF) at the Southwest Research Institute (SWRI) in San Antonio, Tex. The MRF evaluates and improves the performance of orifice, turbine, diaphragm, and ultrasonic meters as well as the gas-sampling methods that pipeline companies use to measure the flow of gas and determine its price.

  19. Improved robust T-wave alternans detectors.

    PubMed

    Meste, O; Janusek, D; Karczmarewicz, S; Przybylski, A; Kania, M; Maciag, A; Maniewski, R

    2015-04-01

    New statistical and spectral detectors (the modified matched-pairs t test, the extended spectral method, and the modified spectral method) were proposed for T-wave alternans (TWA) detection, providing robustness against trend and single-frequency interference. They were compared to classic detectors such as the matched-pairs t test, unpaired t test, spectral method, generalized likelihood ratio test, and estimated TWA amplitude within a simulation framework, and were applied to real data. The optimal detection threshold was selected using a full Monte Carlo simulation in which signals, with and without alternans episodes, were corrupted by Gaussian noise of varying power and single-frequency interference of varying tones. All combinations of noise and frequency were selected and repeated 500 times in order to compute the probability of detection ([Formula: see text]) and the false-alarm probability ([Formula: see text]), providing ROC curves. The study group consisted of 50 patients with implantable cardioverter-defibrillators (age: [Formula: see text]; LVEF: [Formula: see text]), who were paced (ventricular pacing) at 100 bpm. Two-minute recordings were analyzed. The XYZ orthogonal lead system was used. The best performance was achieved by the modified matched-pairs t test (in comparison with the spectral method and the other reference methods).
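The modifications introduced by the paper are not specified in the abstract, but the classic matched-pairs t test used as its baseline is simple to state: pair consecutive beats and test whether the paired T-wave amplitude differences have nonzero mean. A sketch on synthetic beat data (amplitudes, noise level, and alternans magnitude are made up for illustration):

```python
import math
import random

def matched_pairs_t(twave_amps):
    """Classic matched-pairs t statistic for T-wave alternans detection.

    twave_amps: per-beat T-wave amplitude series (even length).
    Consecutive beats are paired; under an A-B-A-B alternans pattern the
    paired differences share a consistent sign, producing a large |t|.
    """
    d = [twave_amps[i] - twave_amps[i + 1]
         for i in range(0, len(twave_amps) - 1, 2)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / math.sqrt(var / n)

# Synthetic series: 0.05 mV A-B-A-B alternans around a 0.5 mV mean, plus noise
random.seed(1)
beats = [(0.55 if i % 2 == 0 else 0.45) + random.gauss(0.0, 0.01)
         for i in range(32)]
t_stat = matched_pairs_t(beats)
```

A slow baseline trend inflates the variance of the paired differences and can mask true alternans, which is precisely the weakness the modified detectors above are designed to address.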

  20. Development of an integrated computerized scheme for metaphase chromosome image analysis: a robustness experiment

    NASA Astrophysics Data System (ADS)

    Wang, Xingwei; Zheng, Bin; Li, Shibo; Mulvihill, John J.; Wood, Marc C.; Yuan, Chaowei; Chen, Wei; Liu, Hong

    2008-02-01

    Our integrated computer-aided detection (CAD) scheme includes three basic modules. The first module detects whether a microscopic digital image depicts a metaphase chromosome cell. If a cell is detected, the scheme judges with a decision tree whether it is analyzable. Once an analyzable cell is detected, the second module is applied to segment individual chromosomes and to compute two important features. Specifically, the scheme utilizes a modified thinning algorithm to identify the medial axis of a chromosome. By tracking perpendicular lines along the medial axis, the scheme computes four feature profiles, identifies centromeres, and assigns polarities of chromosomes based on a set of pre-optimized rules. The third module then classifies chromosomes into 24 types. In this module, each chromosome is initially represented by a vector of 31 features. A two-layer classifier with 8 artificial neural networks (ANNs) is optimized by a genetic algorithm. A testing chromosome is first classified into one of seven groups by the ANN in the first layer. Another ANN is then automatically selected from the seven ANNs in the second layer (one for each group) to further classify this chromosome into one of 24 types. To test the performance and robustness of this CAD scheme, we randomly selected and assembled an independent testing dataset. The dataset contains 100 microscopic digital images, including 50 analyzable and 50 un-analyzable metaphase cells identified by the experts. The centromere location, the corresponding polarity, and the karyotype for each individual chromosome were recorded in a "truth" file. The performance of the CAD scheme applied to this image dataset was analyzed and compared with the truth file. The assessment accuracies are 93% for the first module, 90.8% for centromere identification and 93.2% for polarity assignment in the second module, over 96% for six chromosome groups and 81.8% for one group in the third module

  1. Conjugate Fabry-Perot cavity pair for improved astro-comb accuracy.

    PubMed

    Li, Chih-Hao; Chang, Guoqing; Glenday, Alexander G; Langellier, Nicholas; Zibrov, Alexander; Phillips, David F; Kärtner, Franz X; Szentgyorgyi, Andrew; Walsworth, Ronald L

    2012-08-01

    We propose a new astro-comb mode-filtering scheme composed of two Fabry-Perot cavities (coined "conjugate Fabry-Perot cavity pair"). Simulations indicate that this new filtering scheme makes the accuracy of astro-comb spectral lines more robust against systematic errors induced by nonlinear processes associated with power-amplifying and spectral-broadening optical fibers.

  2. A robust method for online stereo camera self-calibration in unmanned vehicle system

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Chihara, Nobuhiro; Guo, Tao; Kimura, Nobutaka

    2014-06-01

    Self-calibration is a fundamental technology used to estimate the relative posture of cameras for environment recognition in unmanned systems. We focus on the decrease in recognition accuracy caused by platform vibration, and conduct this research to achieve online self-calibration using feature-point registration and robust estimation of the fundamental matrix. Three key factors need improvement. First, feature mismatching degrades the estimation accuracy of the relative posture. Second, conventional estimation methods cannot satisfy both estimation speed and calibration accuracy at the same time. Third, intrinsic system noise also contributes greatly to the deviation of the estimation results. In order to improve calibration accuracy, estimation speed, and system robustness for practical implementation, we analyze algorithms and propose improvements to the stereo camera system to achieve online self-calibration. First, based on epipolar geometry and 3D image parallax, two geometric constraints are proposed so that corresponding feature points are searched within a small range, improving matching accuracy and search speed. Second, two conventional estimation algorithms are analyzed and evaluated for estimation accuracy and robustness. Third, a rigorous posture calculation method is proposed that accounts for the relative posture deviation of each separate part of the stereo camera system. Validation experiments were performed with the stereo camera mounted on a Pan-Tilt Unit for accurate rotation control, and the evaluation shows that our proposed method is a fast, highly accurate, and robust online self-calibration algorithm. Thus, as the main contribution, we propose methods for fast and accurate online self-calibration, and envision the possibility of practical implementation on unmanned system as

  3. Using Many-Objective Optimization and Robust Decision Making to Identify Robust Regional Water Resource System Plans

    NASA Astrophysics Data System (ADS)

    Matrosov, E. S.; Huskova, I.; Harou, J. J.

    2015-12-01

    Water resource system planning regulations are increasingly requiring potential plans to be robust, i.e., perform well over a wide range of possible future conditions. Robust Decision Making (RDM) has shown success in aiding the development of robust plans under conditions of 'deep' uncertainty. Under RDM, decision makers iteratively improve the robustness of a candidate plan (or plans) by quantifying its vulnerabilities to future uncertain inputs and proposing ameliorations. RDM requires planners to have an initial candidate plan. However, if the initial plan is far from robust, it may take several iterations before planners are satisfied with its performance across the wide range of conditions. Identifying an initial candidate plan is further complicated if many possible alternative plans exist and if performance is assessed against multiple conflicting criteria. Planners may benefit from considering a plan that already balances multiple performance criteria and provides some level of robustness before the first RDM iteration. In this study we use many-objective evolutionary optimization to identify promising plans before undertaking RDM. This is done for a very large regional planning problem spanning the service area of four major water utilities in East England. The five-objective optimization is performed under an ensemble of twelve uncertainty scenarios to ensure the Pareto-approximate plans exhibit an initial level of robustness. New supply interventions include two reservoirs, one aquifer recharge and recovery scheme, two transfers from an existing reservoir, five reuse and five desalination schemes. Each option can potentially supply multiple demands at varying capacities resulting in 38 unique decisions. Four candidate portfolios were selected using trade-off visualization with the involved utilities. The performance of these plans was compared under a wider range of possible scenarios. The most balanced plan was then submitted into the vulnerability

  4. Robust matching algorithm for image mosaic

    NASA Astrophysics Data System (ADS)

    Zeng, Luan; Tan, Jiu-bin

    2010-08-01

    In order to improve the matching accuracy and the level of automation of image mosaicking, a matching algorithm based on SIFT (Scale Invariant Feature Transform) features is proposed as detailed below. Firstly, according to the result of a cursory comparison against a given basal matching threshold, a collection of corresponding SIFT features, which may contain mismatches, is obtained. Secondly, after calculating, for each pair of corresponding features, the ratio of the Euclidean distance to the closest neighbor over the distance to the second-closest neighbor, we select the image coordinates of the corresponding SIFT features with the eight smallest ratios to solve the initial parameters of a pin-hole camera model, and then calculate the maximum error σ between the transformed coordinates and the original image coordinates of those eight corresponding features. Thirdly, the ratio of the largest original image coordinate of the eight corresponding features to the entire image size is calculated and used as the control parameter k of the matching error threshold. Finally, the difference between the transformed coordinates and the original image coordinates is computed for every feature in the collection, and corresponding features whose difference exceeds 3kσ are deleted. We can then obtain the exact collection of matching features used to solve the parameters of the pin-hole camera model. Experimental results indicate that the proposed method is stable and reliable when images vary in viewpoint, illumination, rotation and scale. The new method achieves excellent matching accuracy on the experimental images. Moreover, it can select the matching threshold for different images automatically, without any manual intervention.
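The filtering pipeline just described (ratio test, eight-match model fit, 3kσ rejection) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: an affine transform stands in for the pin-hole camera model, and the function name and signature are assumptions.

```python
import numpy as np

def robust_match_filter(pts_a, pts_b, ratios, k=1.0):
    """Sketch of the outlier-rejection stage described above.

    pts_a, pts_b : (N, 2) provisional corresponding image coordinates
    ratios       : (N,) nearest/second-nearest descriptor distance ratios
    k            : scale-derived control parameter of the error threshold
    """
    # 1. Take the eight provisional matches with the smallest ratios.
    best = np.argsort(ratios)[:8]

    # 2. Solve the transform parameters from those eight matches
    #    (least squares on [x, y, 1] -> [x', y']; affine stand-in).
    A = np.hstack([pts_a[best], np.ones((8, 1))])
    params, *_ = np.linalg.lstsq(A, pts_b[best], rcond=None)

    # 3. Maximum transform error sigma over the eight fitting matches.
    resid = np.linalg.norm(A @ params - pts_b[best], axis=1)
    sigma = resid.max()

    # 4. Keep only the matches whose transform error is within 3*k*sigma.
    A_all = np.hstack([pts_a, np.ones((len(pts_a), 1))])
    err = np.linalg.norm(A_all @ params - pts_b, axis=1)
    return err <= 3 * k * sigma
```

The surviving matches would then be used to re-solve the camera model parameters, as the abstract describes.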

  5. High accuracy time transfer synchronization

    NASA Technical Reports Server (NTRS)

    Wheeler, Paul J.; Koppang, Paul A.; Chalmers, David; Davis, Angela; Kubik, Anthony; Powell, William M.

    1995-01-01

    In July 1994, the U.S. Naval Observatory (USNO) Time Service System Engineering Division conducted a field test to establish a baseline accuracy for two-way satellite time transfer synchronization. Three Hewlett-Packard model 5071 high performance cesium frequency standards were transported from the USNO in Washington, DC to Los Angeles, California in the USNO's mobile earth station. Two-Way Satellite Time Transfer links between the mobile earth station and the USNO were conducted each day of the trip, using the Naval Research Laboratory (NRL) designed spread spectrum modem, built by Allen Osborne Associates (AOA). A Motorola six channel GPS receiver was used to track the location and altitude of the mobile earth station and to provide coordinates for calculating Sagnac corrections for the two-way measurements, and relativistic corrections for the cesium clocks. This paper will discuss the trip, the measurement systems used and the results from the data collected. We will show the accuracy of using two-way satellite time transfer for synchronization and the performance of the three HP 5071 cesium clocks in an operational environment.

  6. High accuracy time transfer synchronization

    NASA Astrophysics Data System (ADS)

    Wheeler, Paul J.; Koppang, Paul A.; Chalmers, David; Davis, Angela; Kubik, Anthony; Powell, William M.

    1995-05-01

    In July 1994, the U.S. Naval Observatory (USNO) Time Service System Engineering Division conducted a field test to establish a baseline accuracy for two-way satellite time transfer synchronization. Three Hewlett-Packard model 5071 high performance cesium frequency standards were transported from the USNO in Washington, DC to Los Angeles, California in the USNO's mobile earth station. Two-Way Satellite Time Transfer links between the mobile earth station and the USNO were conducted each day of the trip, using the Naval Research Laboratory (NRL) designed spread spectrum modem, built by Allen Osborne Associates (AOA). A Motorola six channel GPS receiver was used to track the location and altitude of the mobile earth station and to provide coordinates for calculating Sagnac corrections for the two-way measurements, and relativistic corrections for the cesium clocks. This paper will discuss the trip, the measurement systems used and the results from the data collected. We will show the accuracy of using two-way satellite time transfer for synchronization and the performance of the three HP 5071 cesium clocks in an operational environment.

  7. High-accuracy EUV reflectometer

    NASA Astrophysics Data System (ADS)

    Hinze, U.; Fokoua, M.; Chichkov, B.

    2007-03-01

    Developers and users of EUV-optics need precise tools for the characterization of their products. Often a measurement accuracy of 0.1% or better is desired to detect and study slow-acting aging effects or degradation by organic contaminants. To achieve a measurement accuracy of 0.1%, an EUV-source is required which provides excellent long-term stability, namely power stability, spatial stability and spectral stability. Naturally, it should be free of debris. An EUV-source particularly suitable for this task is an advanced electron-based EUV-tube, which provides an output of up to 300 μW at 13.5 nm. Reflectometers benefit from the excellent long-term stability of this tool. We design and set up different reflectometers using EUV-tubes for the precise characterisation of EUV-optics, such as debris samples, filters, multilayer mirrors, grazing incidence optics, collectors and masks. Reflectivity measurements from grazing incidence to near-normal incidence, as well as transmission studies, were realised at a precision of down to 0.1%. The reflectometers are computer-controlled and allow varying and scanning all important parameters online. The concept of a sample reflectometer is discussed and results are presented. The devices can be purchased from the Laser Zentrum Hannover e.V.

  8. Mechanisms of mutational robustness in transcriptional regulation

    PubMed Central

    Payne, Joshua L.; Wagner, Andreas

    2015-01-01

    Robustness is the invariance of a phenotype in the face of environmental or genetic change. The phenotypes produced by transcriptional regulatory circuits are gene expression patterns that are to some extent robust to mutations. Here we review several causes of this robustness. They include robustness of individual transcription factor binding sites, homotypic clusters of such sites, redundant enhancers, transcription factors, redundant transcription factors, and the wiring of transcriptional regulatory circuits. Such robustness can either be an adaptation by itself, a byproduct of other adaptations, or the result of biophysical principles and non-adaptive forces of genome evolution. The potential consequences of such robustness include complex regulatory network topologies that arise through neutral evolution, as well as cryptic variation, i.e., genotypic divergence without phenotypic divergence. On the longest evolutionary timescales, the robustness of transcriptional regulation has helped shape life as we know it, by facilitating evolutionary innovations that helped organisms such as flowering plants and vertebrates diversify. PMID:26579194

  9. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    NASA Astrophysics Data System (ADS)

    Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and many more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states, including additional parameters such as real-time tropospheric biases, and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the convergence time required to reach geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB, were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn.
As shown, data fusion from GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence

  10. Robust control with structured perturbations

    NASA Technical Reports Server (NTRS)

    Keel, Leehyun

    1991-01-01

    This semi-annual report describes continued progress on the research. Among the several approaches in this area, our approach to parametric uncertainties is maturing steadily. It deals with real parameter uncertainties, which other techniques, such as H-infinity optimal control, mu analysis and synthesis, and l1 optimal control, cannot handle. The primary assumption of this approach is that the mathematical models are well established, so that most system uncertainties can be translated into parameter uncertainties of their linear system representations. These uncertainties may be due to modeling, nonlinearity of the physical system, time-varying parameters, etc. During this reporting period, we concentrated on implementing a computer-aided analysis and design tool based on new results on parametric robust stability. This implementation will help reveal further details of the approach.

  11. The structure of robust observers

    NASA Technical Reports Server (NTRS)

    Bhattacharyya, S. P.

    1975-01-01

    Conventional observers for linear time-invariant systems are shown to be structurally inadequate from a sensitivity standpoint. It is proved that if a linear dynamic system is to provide observer action despite arbitrarily small perturbations in a specified subset of its parameters, it must: (1) be a closed-loop system driven by the observer error; (2) possess redundancy, i.e., the observer must generate, implicitly or explicitly, at least one linear combination of states that is already contained in the measurements; and (3) contain a perturbation-free model of the portion of the system observable from the external input to the observer. The procedure for designing robust observers possessing the above structural features is established and discussed.

  12. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.
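Randomized benchmarking data of this kind is typically summarized by a survival probability that decays with sequence length, from which a per-gate rate is inferred. As a toy illustration only (the model form p(m) = A + B·λ^m, the crude log-linear fit, and all numbers below are assumptions, not the authors' protocol):

```python
import numpy as np

def fit_leakage_decay(lengths, survival, asymptote):
    """Fit lambda in p(m) = asymptote + B * lambda**m by a log-linear
    least-squares fit; 1 - lambda approximates the per-gate leakage rate.
    The asymptote is assumed measured separately at very long sequences."""
    y = np.log(np.asarray(survival) - asymptote)  # linearize B * lambda**m
    slope, _intercept = np.polyfit(lengths, y, 1)
    return np.exp(slope)

# Synthetic example: 3% of the population leaks out of the subspace per gate.
lengths = np.arange(1, 51)
survival = 0.2 + 0.8 * 0.97 ** lengths
lam = fit_leakage_decay(lengths, survival, asymptote=0.2)
leak_rate = 1.0 - lam
```

In practice the survival probabilities would come from averaging over random gate sequences, which is what makes the estimate robust to the details of the noise process.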

  13. CONTAINER MATERIALS, FABRICATION AND ROBUSTNESS

    SciTech Connect

    Dunn, K.; Louthan, M.; Rawls, G.; Sindelar, R.; Zapp, P.; Mcclard, J.

    2009-11-10

    The multi-barrier 3013 container used to package plutonium-bearing materials is robust and thereby highly resistant to identified degradation modes that might cause failure. The only viable degradation mechanisms identified by a panel of technical experts were pressurization within and corrosion of the containers. Evaluations of the container materials and the fabrication processes and resulting residual stresses suggest that the multi-layered containers will mitigate the potential for degradation of the outer container and prevent the release of the container contents to the environment. Additionally, the ongoing surveillance programs and laboratory studies should detect any incipient degradation of containers in the 3013 storage inventory before an outer container is compromised.

  14. How robust are distributed systems

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1989-01-01

    A distributed system is made up of large numbers of components operating asynchronously from one another, and hence with incomplete and inaccurate views of one another's state. Load fluctuations are common as new tasks arrive and active tasks terminate. Jointly, these aspects make it nearly impossible to arrive at detailed predictions of a system's behavior. For distributed systems to be used successfully in situations in which humans cannot provide the sort of predictable realtime responsiveness of a computer, the systems must be robust. The technology of today can too easily be affected by worm programs or by seemingly trivial mechanisms that, for example, can trigger stock market disasters. Inventors of a technology have an obligation to overcome flaws that can exact a human cost. A set of principles for guiding solutions to distributed computing problems is presented.

  15. Robust matching for voice recognition

    NASA Astrophysics Data System (ADS)

    Higgins, Alan; Bahler, L.; Porter, J.; Blais, P.

    1994-10-01

    This paper describes an automated method of comparing a voice sample of an unknown individual with samples from known speakers in order to establish or verify the individual's identity. The method is based on a statistical pattern matching approach that employs a simple training procedure, requires no human intervention (transcription, word or phonetic marking, etc.), and makes no assumptions regarding the expected form of the statistical distributions of the observations. The content of the speech material (vocabulary, grammar, etc.) is not assumed to be constrained in any way. An algorithm is described which incorporates frame pruning and channel equalization processes designed to achieve robust performance with reasonable computational resources. An experimental implementation demonstrating the feasibility of the concept is described.

  16. Robust holographic storage system design.

    PubMed

    Watanabe, Takahiro; Watanabe, Minoru

    2011-11-21

    Demand is increasing daily for large data storage systems useful for applications in spacecraft, space satellites, and space robots, all of which are exposed to the radiation-rich space environment. As candidates for use in space embedded systems, holographic storage systems are promising because they can easily provide the demanded large storage capability. In particular, holographic storage systems with no rotation mechanism are in demand because they are virtually maintenance-free. Although a holographic memory itself is an extremely robust device even in a space radiation environment, its associated lasers and drive circuit devices are vulnerable. Such vulnerabilities can engender severe problems that prevent reading of all contents of the holographic memory, which is a turn-off failure mode of a laser array. This paper therefore proposes a recovery method for the turn-off failure mode of a laser array on a holographic storage system, and describes results of an experimental demonstration.

  17. Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy

    NASA Astrophysics Data System (ADS)

    Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.

    2011-08-01

    The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.
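The core idea of such strategies, optimizing a robustness measure (for example, mean plus a multiple of the standard deviation of a metamodel response over sampled noise variables) rather than the nominal response, can be sketched minimally. The quadratic response below is a stand-in for an FE-based metamodel; every name and number is illustrative, not taken from the paper.

```python
import numpy as np

def robust_objective(design, noise_samples, weight=3.0):
    """Mean + weight * std of the surrogate response over the noise samples.
    The quadratic 'response' is a placeholder for a fitted metamodel."""
    response = (design - 1.0) ** 2 + 0.5 * noise_samples * design
    return response.mean() + weight * response.std()

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 0.2, size=200)       # sampled noise variable
candidates = np.linspace(-2.0, 3.0, 501)     # design-variable grid
scores = np.array([robust_objective(x, noise) for x in candidates])
robust_design = candidates[scores.argmin()]
```

Note that the robust optimum is shifted away from the deterministic optimum (here at 1.0) because the noise sensitivity of the response is penalized, which is exactly the trade-off the paper exploits.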

  18. A Robust Actin Filaments Image Analysis Framework

    PubMed Central

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-01-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. actin, tubulin and intermediate filament cytoskeletons. Understanding the cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation in the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least at some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a ‘cartoon’ part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the ‘cartoon’ image, we apply a multi-scale line detector coupled with a (iii) quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filament image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts

  19. A Robust Actin Filaments Image Analysis Framework.

    PubMed

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-08-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. actin, tubulin and intermediate filament cytoskeletons. Understanding the cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation in the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least at some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a 'cartoon' part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the 'cartoon' image, we apply a multi-scale line detector coupled with a (iii) quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filament image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. 
Experimental validation was conducted using publicly available datasets, and in osteoblasts grown in

  20. [Navigation in implantology: Accuracy assessment regarding the literature].

    PubMed

    Barrak, Ibrahim Ádám; Varga, Endre; Piffko, József

    2016-06-01

    Our objective was to assess the literature regarding the accuracy of the different static guided systems. After applying an electronic literature search we found 661 articles. After reviewing 139 articles, the authors chose 52 articles for full-text evaluation; 24 studies involved accuracy measurements. Fourteen of our selected references were clinical and ten of them were in vitro (model or cadaver). Variance analysis (Tukey's post-hoc test; p < 0.05) was conducted to summarize the selected publications. Across 2819 results, the average mean error at the entry point was 0.98 mm. At the level of the apex the average deviation was 1.29 mm, while the mean angular deviation was 3.96 degrees. A significant difference could be observed between the two methods of implant placement (partially and fully guided sequence) in terms of deviation at the entry point, apex and angular deviation. Different levels of quality and quantity of evidence were available for assessing the accuracy of the different computer-assisted implant placement systems. The rapidly evolving field of digital dentistry and the new developments will further improve the accuracy of guided implant placement. In the interest of being able to draw dependable conclusions, and for the further evaluation of the parameters used for accuracy measurements, randomized, controlled single- or multi-centered clinical trials are necessary. PMID:27544966

  1. [Navigation in implantology: Accuracy assessment regarding the literature].

    PubMed

    Barrak, Ibrahim Ádám; Varga, Endre; Piffko, József

    2016-06-01

    Our objective was to assess the literature regarding the accuracy of the different static guided systems. After applying an electronic literature search we found 661 articles. After reviewing 139 articles, the authors chose 52 articles for full-text evaluation; 24 studies involved accuracy measurements. Fourteen of our selected references were clinical and ten of them were in vitro (model or cadaver). Variance analysis (Tukey's post-hoc test; p < 0.05) was conducted to summarize the selected publications. Across 2819 results, the average mean error at the entry point was 0.98 mm. At the level of the apex the average deviation was 1.29 mm, while the mean angular deviation was 3.96 degrees. A significant difference could be observed between the two methods of implant placement (partially and fully guided sequence) in terms of deviation at the entry point, apex and angular deviation. Different levels of quality and quantity of evidence were available for assessing the accuracy of the different computer-assisted implant placement systems. The rapidly evolving field of digital dentistry and the new developments will further improve the accuracy of guided implant placement. In the interest of being able to draw dependable conclusions, and for the further evaluation of the parameters used for accuracy measurements, randomized, controlled single- or multi-centered clinical trials are necessary.

  2. Selection of Electronic Resources.

    ERIC Educational Resources Information Center

    Weathers, Barbara

    1998-01-01

    Discusses the impact of electronic resources on collection development; selection of CD-ROMs, (platform, speed, video and sound, networking capability, installation and maintenance); selection of laser disks; and Internet evaluation (accuracy of content, authority, objectivity, currency, technical characteristics). Lists Web sites for evaluating…

  3. Measuring the robustness of link prediction algorithms under noisy environment

    PubMed Central

    Zhang, Peng; Wang, Xiang; Wang, Futian; Zeng, An; Xiao, Jinghua

    2016-01-01

    Link prediction in complex networks is to estimate the likelihood of two nodes to interact with each other in the future. As this problem has applications in a large number of real systems, many link prediction methods have been proposed. However, the validation of these methods is so far mainly conducted in the assumed noise-free networks. Therefore, we still miss a clear understanding of how the prediction results would be affected if the observed network data is no longer accurate. In this paper, we comprehensively study the robustness of the existing link prediction algorithms in the real networks where some links are missing, fake or swapped with other links. We find that missing links are more destructive than fake and swapped links for prediction accuracy. An index is proposed to quantify the robustness of the link prediction methods. Among the twenty-two studied link prediction methods, we find that though some methods have low prediction accuracy, they tend to perform reliably in the “noisy” environment. PMID:26733156
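The experimental setup described, scoring candidate links on an observed network and re-scoring after perturbing its links, can be sketched with the simple common-neighbours index. The toy graph and the random-deletion perturbation below are purely illustrative, not the paper's benchmark networks or its set of twenty-two methods.

```python
import itertools
import random

def common_neighbor_scores(nodes, edges):
    """Score every non-adjacent node pair by its number of common neighbours."""
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return {(u, v): len(adj[u] & adj[v])
            for u, v in itertools.combinations(nodes, 2)
            if v not in adj[u]}

def with_missing_links(edges, fraction, seed=0):
    """Simulate 'missing link' noise by randomly deleting a fraction of edges."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= fraction]

nodes = [0, 1, 2, 3, 4]
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]
clean = common_neighbor_scores(nodes, edges)
noisy = common_neighbor_scores(nodes, with_missing_links(edges, 0.3))
```

Comparing the ranking of candidate links in `clean` against `noisy` across many perturbation draws is the kind of robustness measurement the abstract quantifies with its index.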

  4. Robust Finger Vein ROI Localization Based on Flexible Segmentation

    PubMed Central

    Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun

    2013-01-01

    Finger veins have proved to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition, and so degrade the performance of the finger vein identification system. To address this problem, in this paper we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database, MMCBNU_6000, to verify the robustness of the proposed method. The proposed method shows a segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system. PMID:24284769

  5. Pulse oximetry: accuracy of methods of interpreting graphic summaries.

    PubMed

    Lafontaine, V M; Ducharme, F M; Brouillette, R T

    1996-02-01

    Although pulse oximetry has been used to determine the frequency and extent of hemoglobin desaturation during sleep, movement artifact can result in overestimation of desaturation unless valid desaturations can be identified accurately. Therefore, we determined the accuracy of pulmonologists' and technicians' interpretations of graphic displays of desaturation events, derived an objective method for interpreting such events, and validated the method on an independent data set. Eighty-seven randomly selected desaturation events were classified as valid (58) or artifactual (29) based on cardiorespiratory recordings (gold standard) that included pulse waveform and respiratory inductive plethysmography signals. Using oximetry recordings (test method), nine pediatric pulmonologists and three respiratory technicians ("readers") averaged 50 +/- 11% (SD) accuracy for event classification. A single variable, the pulse amplitude modulation range (PAMR) prior to desaturation, performed better in discriminating valid from artifactual events with 76% accuracy (P < 0.05). Following a seminar on oximetry and the use of the PAMR method, the readers' accuracy increased to 73 +/- 2%. In an independent set of 73 apparent desaturation events (74% valid, 26% artifactual), the PAMR method of assessing oximetry graphs yielded 82% accuracy; transcutaneous oxygen tension records confirmed a drop in oxygenation during 49 of 54 (89%) valid desaturation events. In conclusion, the most accurate method (91%) of assessing desaturation events requires recording of the pulse and respiratory waveforms. However, a practical, easy-to-use method of interpreting pulse oximetry recordings achieved 76-82% accuracy, which constitutes a significant improvement from previous subjective interpretations.
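The single-variable rule described above can be sketched as follows: an event is accepted as a valid desaturation only if the pulse-amplitude modulation range (PAMR) in the window preceding it is small, i.e. the pulse signal was stable. The threshold value, the window, and the direction of the rule are illustrative assumptions here, not the paper's calibrated criterion.

```python
import numpy as np

def pamr(pulse_amplitudes):
    """Modulation range of the pulse-amplitude signal, as a fraction of its mean."""
    a = np.asarray(pulse_amplitudes, dtype=float)
    return (a.max() - a.min()) / a.mean()

def classify_event(pre_event_amplitudes, threshold=0.5):
    """'valid' if the pulse was stable before the drop, else 'artifact'.
    The 0.5 threshold is a hypothetical placeholder."""
    return "valid" if pamr(pre_event_amplitudes) < threshold else "artifact"
```

A movement artifact strongly modulates the pulse amplitude, so events preceded by a large PAMR are flagged as artifactual rather than counted as desaturations.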

  6. Efficient and robust pupil size and blink estimation from near-field video sequences for human-machine interaction.

    PubMed

    Chen, Siyuan; Epps, Julien

    2014-12-01

    Monitoring pupil and blink dynamics has applications in cognitive load measurement during human-machine interaction. However, accurate, efficient, and robust pupil size and blink estimation pose significant challenges to the efficacy of real-time applications due to the variability of eye images, hence to date, require manual intervention for fine tuning of parameters. In this paper, a novel self-tuning threshold method, which is applicable to any infrared-illuminated eye images without a tuning parameter, is proposed for segmenting the pupil from the background images recorded by a low cost webcam placed near the eye. A convex hull and a dual-ellipse fitting method are also proposed to select pupil boundary points and to detect the eyelid occlusion state. Experimental results on a realistic video dataset show that the measurement accuracy using the proposed methods is higher than that of widely used manually tuned parameter methods or fixed parameter methods. Importantly, it demonstrates convenience and robustness for an accurate and fast estimate of eye activity in the presence of variations due to different users, task types, load, and environments. Cognitive load measurement in human-machine interaction can benefit from this computationally efficient implementation without requiring a threshold calibration beforehand. Thus, one can envisage a mini IR camera embedded in a lightweight glasses frame, like Google Glass, for convenient applications of real-time adaptive aiding and task management in the future. PMID:24691198
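The general idea of a self-tuning (histogram-derived) threshold can be illustrated with Otsu's classic method, used here only as a stand-in; the paper's parameter-free method differs, and the synthetic "eye patch" below is illustrative.

```python
import numpy as np

def otsu_threshold(gray):
    """Choose the 8-bit threshold maximizing between-class variance.
    A classic self-tuning threshold, shown as a generic example only."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 (dark) probability
    mu = np.cumsum(p * np.arange(256))      # class-0 partial mean mass
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))          # pixels <= t form the dark class

# Synthetic patch: dark pupil pixels (~15) on a brighter background (~180).
patch = np.concatenate([np.full(300, 15), np.full(700, 180)])
t = otsu_threshold(patch)
```

Because the threshold is recomputed from each image's own histogram, no per-user or per-environment tuning parameter is needed, which is the property the abstract emphasizes.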

  8. Cochrane diagnostic test accuracy reviews.

    PubMed

    Leeflang, Mariska M G; Deeks, Jonathan J; Takwoingi, Yemisi; Macaskill, Petra

    2013-10-07

    In 1996, shortly after the founding of The Cochrane Collaboration, leading figures in test evaluation research established a Methods Group to focus on the relatively new and rapidly evolving methods for the systematic review of studies of diagnostic tests. Seven years later, the Collaboration decided it was time to develop a publication format and methodology for Diagnostic Test Accuracy (DTA) reviews, as well as the software needed to implement these reviews in The Cochrane Library. A meeting hosted by the German Cochrane Centre in 2004 brought together key methodologists in the area, many of whom became closely involved in the subsequent development of the methodological framework for DTA reviews. DTA reviews first appeared in The Cochrane Library in 2008 and are now an integral part of the work of the Collaboration.

  9. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders containing identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without relying on combusting naturally occurring materials, thereby improving analytical accuracy.

  10. Accuracy of Pressure Sensitive Paint

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Guille, M.; Sullivan, J. P.

    2001-01-01

    Uncertainty in pressure sensitive paint (PSP) measurement is investigated from the standpoint of system modeling. A functional relation between the imaging system output and luminescent emission from PSP is obtained based on studies of radiative energy transport in PSP and photodetector response to luminescence. This relation provides insight into the physical origins of various elemental error sources and allows estimation of the total PSP measurement uncertainty contributed by the elemental errors. The elemental errors and their sensitivity coefficients in the error propagation equation are evaluated. Useful formulas are given for the minimum pressure uncertainty that PSP can possibly achieve and the upper bounds of the elemental errors needed to meet a required pressure accuracy. An instructive example of a Joukowsky airfoil in subsonic flow is given to illustrate uncertainty estimates in PSP measurements.
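    The error propagation described here follows the standard first-order form: the total uncertainty is the root-sum-of-squares of the independent elemental errors, each weighted by its sensitivity coefficient. A generic sketch (not the paper's specific formulas):

```python
from math import sqrt

def total_uncertainty(sensitivities, elemental_uncertainties):
    """First-order error propagation: combine independent elemental
    error sources, each weighted by its sensitivity coefficient,
    as a root-sum-of-squares."""
    return sqrt(sum((s * u) ** 2
                    for s, u in zip(sensitivities, elemental_uncertainties)))
```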

  11. Automatic Masking for Robust 3D-2D Image Registration in Image-Guided Spine Surgery

    PubMed Central

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-01-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies. PMID:27335531
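    The evaluation metric can be sketched directly; the 20 mm gross-failure threshold is taken from the abstract, while the function names are illustrative:

```python
from math import hypot

def projection_distance_error(projected_xy, reference_xy):
    """PDE: 2D distance between a 3D point projected into the
    radiograph and its reference location."""
    return hypot(projected_xy[0] - reference_xy[0],
                 projected_xy[1] - reference_xy[1])

def gross_failure_rate(pde_values_mm, threshold_mm=20.0):
    """Fraction of registration cases whose PDE exceeds the
    gross-failure threshold used in the study (20 mm)."""
    failures = sum(1 for e in pde_values_mm if e > threshold_mm)
    return failures / len(pde_values_mm)
```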

  13. A Fast and Robust Ellipse-Detection Method Based on Sorted Merging

    PubMed Central

    Ren, Guanghui; Zhao, Yaqin; Jiang, Lihui

    2014-01-01

    A fast and robust ellipse-detection method based on sorted merging is proposed in this paper. This method first represents the edge bitmap approximately with a set of line segments and then gradually merges the line segments into elliptical arcs and ellipses. To achieve high accuracy, a sorted merging strategy is proposed: the merging degrees of line segments/elliptical arcs are estimated, and line segments/elliptical arcs are merged in descending order of the merging degrees, which significantly improves the merging accuracy. During the merging process, multiple properties of ellipses are utilized to filter line segment/elliptical arc pairs, making the method very efficient. In addition, an ellipse-fitting method is proposed that restricts the maximum ratio of the semimajor axis and the semiminor axis, further improving the merging accuracy. Experimental results indicate that the proposed method is robust to outliers, noise, and partial occlusion and is fast enough for real-time applications. PMID:24782661

  14. Comparing the accuracy of quantitative versus qualitative analyses of interim PET to prognosticate Hodgkin lymphoma: a systematic review protocol of diagnostic test accuracy

    PubMed Central

    Procházka, Vít; Klugar, Miloslav; Bachanova, Veronika; Klugarová, Jitka; Tučková, Dagmar; Papajík, Tomáš

    2016-01-01

    Introduction Hodgkin lymphoma is an effectively treated malignancy, yet 20% of patients relapse or are refractory to front-line treatments with potentially fatal outcomes. Early detection of poor treatment responders is crucial for appropriate application of tailored treatment strategies. Tumour metabolic imaging of Hodgkin lymphoma using visual (qualitative) 18-fluorodeoxyglucose positron emission tomography (FDG-PET) is a gold standard for staging and final outcome assessment, but results gathered during the interim period are less accurate. Analysis of continuous metabolic–morphological data (quantitative) FDG-PET may enhance the robustness of interim disease monitoring, and help to improve treatment decision-making processes. The objective of this review is to compare diagnostic test accuracy of quantitative versus qualitative interim FDG-PET in the prognostication of patients with Hodgkin lymphoma. Methods The literature on this topic will be reviewed in a 3-step strategy that follows methods described by the Joanna Briggs Institute (JBI). First, MEDLINE and EMBASE databases will be searched. Second, listed databases for published literature (MEDLINE, Tripdatabase, Pedro, EMBASE, the Cochrane Central Register of Controlled Trials and WoS) and unpublished literature (Open Grey, Current Controlled Trials, MedNar, ClinicalTrials.gov, Cos Conference Papers Index and International Clinical Trials Registry Platform of the WHO) will be queried. Third, 2 independent reviewers will analyse titles, abstracts and full texts, perform a hand search of relevant studies, and then perform critical appraisal and data extraction from selected studies using the DATARI tool (JBI). If possible, a statistical meta-analysis will be performed on pooled sensitivity and specificity data gathered from the selected studies. Statistical heterogeneity will be assessed. Funnel plots, Begg's rank correlations and Egger's regression tests will be used to detect and/or correct publication bias.

  15. A Novel Robust H∞ Filter Based on Krein Space Theory in the SINS/CNS Attitude Reference System.

    PubMed

    Yu, Fei; Lv, Chongyang; Dong, Qianhui

    2016-01-01

    Owing to their numerous merits, such as compactness, autonomy and independence, the strapdown inertial navigation system (SINS) and celestial navigation system (CNS) can be used in marine applications. Moreover, because the navigation information obtained from the two different kinds of sensors is complementary, the accuracy of the SINS/CNS integrated navigation system can be effectively enhanced. Thus, the SINS/CNS system is widely used in the marine navigation field. However, the CNS is easily interfered with by the surroundings, which leads to discontinuous output. Thus, the uncertainty problem caused by the lost measurements will reduce the system accuracy. In this paper, a robust H∞ filter based on Krein space theory is proposed. Krein space theory is introduced first, and then the linear state and observation models of the SINS/CNS integrated navigation system are established. By taking the uncertainty problem into account, a new robust H∞ filter is proposed to improve the robustness of the integrated system. Finally, this new robust filter based on Krein space theory is evaluated by numerical simulations and actual experiments. The simulation and experiment results and analysis show that the attitude errors can be effectively reduced by the proposed robust filter when measurements are discontinuously missing. Compared to the traditional Kalman filter (KF) method, the accuracy of the SINS/CNS integrated system is improved, verifying the robustness and availability of the proposed robust H∞ filter. PMID:26999153

  18. Robust Optimization of Alginate-Carbopol 940 Bead Formulations

    PubMed Central

    López-Cacho, J. M.; González-R, Pedro L.; Talero, B.; Rabasco, A. M.; González-Rodríguez, M. L.

    2012-01-01

    The formulation process is a complex activity that sometimes involves making decisions about parameters or variables to obtain the best results in a context of high variability or uncertainty. Therefore, robust optimization tools can be very useful for obtaining high quality formulations. This paper proposes the optimization of different responses through the robust Taguchi method. Each response was treated as a noise variable, allowing the application of Taguchi techniques to obtain a response from the point of view of the signal-to-noise ratio. An L18 Taguchi orthogonal array design was employed to investigate the effect of eight independent variables involved in the formulation of alginate-Carbopol beads. The responses evaluated were related to the drug release profile from the beads (t50% and AUC), swelling performance, encapsulation efficiency, and shape and size parameters. Confirmation tests to verify the prediction model were carried out, and the obtained results were very similar to those predicted for every profile. The results reveal that robust optimization is a very useful approach that allows greater precision and accuracy in reaching the desired value. PMID:22645438
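    The signal-to-noise ratios underlying a Taguchi analysis follow standard textbook forms; the abstract does not state which variant was used for each response, so two common ones are shown as a sketch:

```python
from math import log10

def sn_larger_the_better(responses):
    """Taguchi S/N ratio for responses to be maximized:
    SN = -10 * log10(mean(1 / y^2))."""
    n = len(responses)
    return -10.0 * log10(sum(1.0 / y ** 2 for y in responses) / n)

def sn_nominal_the_best(responses):
    """Taguchi S/N ratio for responses that should hit a target:
    SN = 10 * log10(mean^2 / variance)."""
    n = len(responses)
    mean = sum(responses) / n
    var = sum((y - mean) ** 2 for y in responses) / (n - 1)
    return 10.0 * log10(mean ** 2 / var)
```

Higher S/N means the response is both close to the goal and insensitive to noise, which is the sense in which the optimization is "robust".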

  19. Robust algorithms for anatomic plane primitive detection in MR

    NASA Astrophysics Data System (ADS)

    Dewan, Maneesh; Zhan, Yiqiang; Peng, Zhigang; Zhou, Xiang Sean

    2009-02-01

    One of the primary challenges in medical image data analysis is the ability to handle abnormal, irregular and/or partial cases. In this paper, we present two different robust algorithms towards the goal of automatic planar primitive detection in 3D volumes. The overall algorithm is a bottom-up approach starting with the detection of anatomic point primitives (or landmarks). Robustness in computing the planar primitives is built in through both a novel consensus-based voting approach and a random sampling-based weighted least squares regression method. Both of these approaches remove inconsistent landmarks and outliers detected in the landmark detection step. Unlike earlier approaches focused on a particular plane, the presented approach is generic and can be easily adapted to computing more complex primitives such as ROIs or surfaces. To demonstrate the robustness and accuracy of our approach, we present extensive results for automatic plane detection (mid-sagittal and optical triangle planes) in brain MR images. In comparison to ground truth, our approach has marginal errors on about 90 patients. The algorithm also performs well under adverse conditions of arbitrary rotation and cropping of the 3D volume. To exhibit the generality of the approach, we also present preliminary results on intervertebral plane detection for a 3D spine MR application.
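    The random-sampling step can be illustrated with a plain RANSAC-style plane fit from landmark triples; this is a simplified stand-in for the authors' weighted least squares estimator, with all names illustrative:

```python
import random

def plane_from_points(p1, p2, p3):
    """Plane through three 3D points, as (unit normal n, offset d)
    with n . x = d. Raises ZeroDivisionError for collinear points."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    norm = sum(c * c for c in n) ** 0.5
    n = tuple(c / norm for c in n)
    return n, sum(n[i] * p1[i] for i in range(3))

def ransac_plane(points, n_iter=200, tol=0.5, seed=0):
    """Random-sampling plane fit: repeatedly fit a plane to a random
    landmark triple and keep the one with the most inliers, which
    suppresses outlier landmarks from the detection step."""
    rng = random.Random(seed)
    best_plane, best_inliers = None, []
    for _ in range(n_iter):
        sample = rng.sample(points, 3)
        try:
            n, d = plane_from_points(*sample)
        except ZeroDivisionError:  # degenerate (collinear) sample
            continue
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) - d) <= tol]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = (n, d), inliers
    return best_plane, best_inliers
```

A final weighted least squares refit over the inliers would typically follow, as the abstract suggests.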

  20. Photogrammetric Accuracy and Modeling of Rolling Shutter Cameras

    NASA Astrophysics Data System (ADS)

    Vautherin, Jonas; Rutishauser, Simon; Schneider-Zapp, Klaus; Choi, Hon Fai; Chovancova, Venera; Glass, Alexis; Strecha, Christoph

    2016-06-01

    Unmanned aerial vehicles (UAVs) are becoming increasingly popular in professional mapping for stockpile analysis, construction site monitoring, and many other applications. Due to their robustness and competitive pricing, consumer UAVs are used more and more for these applications, but they are usually equipped with rolling shutter cameras. This is a significant obstacle when it comes to extracting high accuracy measurements using available photogrammetry software packages. In this paper, we evaluate the impact of the rolling shutter cameras of typical consumer UAVs on the accuracy of a 3D reconstruction. To this end, we use a beta version of the Pix4Dmapper 2.1 software to compare traditional (non-rolling shutter) camera models against a newly implemented rolling shutter model with respect to both the accuracy of geo-referenced validation points and the quality of the motion estimation. Multiple datasets have been acquired using popular quadrocopters (DJI Phantom 2 Vision+, DJI Inspire 1 and 3DR Solo) following a grid flight plan. For comparison, we acquired a dataset using a professional mapping drone (senseFly eBee) equipped with a global shutter camera. The bundle block adjustment of each dataset shows a significant accuracy improvement on validation ground control points when applying the new rolling shutter camera model for flights at higher speed (8 m/s). Competitive accuracies can be obtained by using the rolling shutter model, although global shutter cameras are still superior. Furthermore, we are able to show that the speed of the drone (and its direction) can be estimated solely from the rolling shutter effect of the camera.
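    The closing claim, that platform speed can be recovered from the rolling shutter effect alone, rests on simple timing geometry: a feature imaged n rows later in the frame is exposed proportionally later in the readout, so its extra image displacement reveals the motion. A toy sketch (parameter names and values are illustrative assumptions):

```python
def speed_from_rolling_shutter(skew_px, n_rows, total_rows,
                               readout_time_s, gsd_m_per_px):
    """A feature seen n_rows lower in the frame is exposed
    n_rows / total_rows * readout_time_s later; dividing the extra
    ground displacement by that delay gives the platform speed.
    skew_px: image displacement accumulated over n_rows rows.
    gsd_m_per_px: ground sampling distance (meters per pixel)."""
    dt = readout_time_s * n_rows / total_rows
    displacement_m = skew_px * gsd_m_per_px
    return displacement_m / dt
```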

  1. Accuracy of the TRUGENE HIV-1 Genotyping Kit

    PubMed Central

    Grant, Robert M.; Kuritzkes, Daniel R.; Johnson, Victoria A.; Mellors, John W.; Sullivan, John L.; Swanstrom, Ronald; D'Aquila, Richard T.; Van Gorder, Mark; Holodniy, Mark; Lloyd, Jr., Robert M.; Reid, Caroline; Morgan, Gillian F.; Winslow, Dean L.

    2003-01-01

    Drug resistance and poor virological responses are associated with well-characterized mutations in the viral reading frames that encode the proteins that are targeted by currently available antiretroviral drugs. An integrated system was developed that includes target gene amplification, DNA sequencing chemistry (TRUGENE HIV-1 Genotyping Kit), and hardware and interpretative software (the OpenGene DNA Sequencing System) for detection of mutations in the human immunodeficiency virus type 1 (HIV-1) protease and reverse transcriptase sequences. The integrated system incorporates reverse transcription-PCR from extracted HIV-1 RNA, a coupled amplification and sequencing step (CLIP), polyacrylamide gel electrophoresis, semiautomated analysis of data, and generation of an interpretative report. To assess the accuracy and robustness of the assay system, 270 coded plasma specimens derived from nine patients were sent to six laboratories for blinded analysis. All specimens contained HIV-1 subtype B viruses. Results of 270 independent assays were compared to “gold standard” consensus sequences of the virus populations determined by sequence analysis of 16 to 20 clones of viral DNA amplicons derived from two independent PCRs using primers not used in the kit. The accuracy of the integrated system for nucleotide base identification was 98.7%, and the accuracy for codon identification at 54 sites associated with drug resistance was 97.6%. In a separate analysis of plasma spiked with infectious molecular clones, the assay reproducibly detected all 72 different drug resistance mutations that were evaluated. There were no significant differences in accuracy between laboratories, between technologists, between kit lots, or between days. This integrated assay system for the detection of HIV-1 drug resistance mutations has a high degree of accuracy and reproducibility in several laboratories. PMID:12682149

  2. A network property necessary for concentration robustness

    PubMed Central

    Eloundou-Mbebi, Jeanne M. O.; Küken, Anika; Omranian, Nooshin; Kleessen, Sabrina; Neigenfind, Jost; Basler, Georg; Nikoloski, Zoran

    2016-01-01

    Maintenance of functionality of complex cellular networks and entire organisms exposed to environmental perturbations often depends on concentration robustness of the underlying components. Yet, the reasons and consequences of concentration robustness in large-scale cellular networks remain largely unknown. Here, we derive a necessary condition for concentration robustness based only on the structure of networks endowed with mass action kinetics. The structural condition can be used to design targeted experiments to study concentration robustness. We show that metabolites satisfying the necessary condition are present in metabolic networks from diverse species, suggesting prevalence of this property across kingdoms of life. We also demonstrate that our predictions about concentration robustness of energy-related metabolites are in line with experimental evidence from Escherichia coli. The necessary condition is applicable to mass action biological systems of arbitrary size, and will enable understanding the implications of concentration robustness in genetic engineering strategies and medical applications. PMID:27759015

  3. Feedback Robust Cubature Kalman Filter for Target Tracking Using an Angle Sensor

    PubMed Central

    Wu, Hao; Chen, Shuxin; Yang, Binfeng; Chen, Kun

    2016-01-01

    The direction of arrival (DOA) tracking problem based on an angle sensor is an important topic in many fields. In this paper, a nonlinear filter named the feedback M-estimation based robust cubature Kalman filter (FMR-CKF) is proposed to deal with measurement outliers from the angle sensor. The filter designs a new equivalent weight function with the Mahalanobis distance to combine the cubature Kalman filter (CKF) with the M-estimation method. Moreover, by embedding a feedback strategy which consists of a splitting and merging procedure, the proper sub-filter (the standard CKF or the robust CKF) can be chosen in each time index. Hence, the probability of the outliers’ misjudgment can be reduced. Numerical experiments show that the FMR-CKF performs better than the CKF and conventional robust filters in terms of accuracy and robustness with good computational efficiency. Additionally, the filter can be extended to the nonlinear applications using other types of sensors. PMID:27171081
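    The "equivalent weight function with the Mahalanobis distance" can be illustrated with a standard Huber-type M-estimation weight; the paper designs its own function, so this is only a representative sketch:

```python
from math import sqrt

def mahalanobis_1d(residual, variance):
    """Mahalanobis distance of a scalar innovation given its variance."""
    return abs(residual) / sqrt(variance)

def huber_weight(d, c=1.345):
    """Huber-type equivalent weight: full weight inside the
    threshold c, down-weighted as c/d for outlying innovations.
    (Illustrative choice; the paper's weight function differs.)"""
    return 1.0 if d <= c else c / d
```

In a robust filter, this weight scales the measurement update so that outliers from the angle sensor contribute less to the state estimate.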

  6. MTC: A Fast and Robust Graph-Based Transductive Learning Method.

    PubMed

    Zhang, Yan-Ming; Huang, Kaizhu; Geng, Guang-Gang; Liu, Cheng-Lin

    2015-09-01

    Despite the great success of graph-based transductive learning methods, most of them have serious problems in scalability and robustness. In this paper, we propose an efficient and robust graph-based transductive classification method, called minimum tree cut (MTC), which is suitable for large-scale data. Motivated by the sparse representation of graphs, we approximate the graph by a spanning tree. Exploiting this simple structure, we develop a linear-time algorithm to label the tree such that the cut size of the tree is minimized. This significantly improves on graph-based methods, which typically have a polynomial time complexity. Moreover, we theoretically and empirically show that the performance of MTC is robust to the graph construction, overcoming another big problem of traditional graph-based methods. Extensive experiments on public data sets and applications to web-spam detection and interactive image segmentation demonstrate our method's advantages in terms of accuracy, speed, and robustness.
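    The tree approximation step can be sketched with a maximum spanning tree over the similarity graph; the label propagation below is a simplified stand-in for the paper's exact linear-time minimum-tree-cut labeling:

```python
from collections import deque

def maximum_spanning_tree(n, weighted_edges):
    """Kruskal on descending weights: approximate a similarity
    graph by its maximum spanning tree (strongest edges kept)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(weighted_edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

def propagate_labels(n, tree_edges, seed_labels):
    """BFS from the labeled nodes over the tree: every unlabeled
    node takes the label of its nearest labeled node, a simple
    stand-in for the exact minimum-cut tree labeling."""
    adj = [[] for _ in range(n)]
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)
    labels = dict(seed_labels)
    queue = deque(labels)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in labels:
                labels[v] = labels[u]
                queue.append(v)
    return labels
```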

  7. Robust dynamical decoupling sequences for individual-nuclear-spin addressing

    NASA Astrophysics Data System (ADS)

    Casanova, J.; Wang, Z.-Y.; Haase, J. F.; Plenio, M. B.

    2015-10-01

    We propose the use of non-equally-spaced decoupling pulses for high-resolution selective addressing of nuclear spins by a quantum sensor. The analytical model of the basic operating principle is supplemented by detailed numerical studies that demonstrate the high degree of selectivity and the robustness against static and dynamic control-field errors of this scheme. We exemplify our protocol with a nitrogen-vacancy-center-based sensor to demonstrate that it enables the identification of individual nuclear spins that form part of a large spin ensemble.
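    A well-known concrete example of non-equally-spaced decoupling pulses is the Uhrig (UDD) sequence; the paper derives its own spacings for selective nuclear-spin addressing, so this is shown only as a representative non-uniform timing rule:

```python
from math import sin, pi

def udd_pulse_times(n, total_time):
    """Uhrig dynamical decoupling: n pi-pulses at the
    non-equally-spaced times t_j = T * sin^2(j*pi / (2n + 2)),
    j = 1..n. Pulses bunch toward the ends of the interval."""
    return [total_time * sin(j * pi / (2 * n + 2)) ** 2
            for j in range(1, n + 1)]
```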

  8. Food Label Accuracy of Common Snack Foods

    PubMed Central

    Jumpertz, Reiner; Venti, Colleen A; Le, Duc Son; Michaels, Jennifer; Parrington, Shannon; Krakoff, Jonathan; Votruba, Susanne

    2012-01-01

    Nutrition labels have raised awareness of the energetic value of foods, and represent for many a pivotal guideline to regulate food intake. However, recent data have created doubts on label accuracy. Therefore we tested label accuracy for energy and macronutrient content of prepackaged energy-dense snack food products. We measured “true” caloric content of 24 popular snack food products in the U.S. and determined macronutrient content in 10 selected items. Bomb calorimetry and food factors were used to estimate energy content. Macronutrient content was determined according to Official Methods of Analysis. Calorimetric measurements were performed in our metabolic laboratory between April 20th and May 18th and macronutrient content was measured between September 28th and October 7th of 2010. Serving size, by weight, exceeded label statements by 1.2% [median] (25th percentile −1.4, 75th percentile 4.3, p=0.10). When differences in serving size were accounted for, metabolizable calories were 6.8 kcal (0.5, 23.5, p=0.0003) or 4.3% (0.2, 13.7, p=0.001) higher than the label statement. In a small convenience sample of the tested snack foods, carbohydrate content exceeded label statements by 7.7% (0.8, 16.7, p=0.01); however fat and protein content were not significantly different from label statements (−12.8% [−38.6, 9.6], p=0.23; 6.1% [−6.1, 17.5], p=0.32). Carbohydrate content explained 40% and serving size an additional 55% of the excess calories. Among a convenience sample of energy-dense snack foods, caloric content is higher than stated on the nutrition labels, but overall well within FDA limits. This discrepancy may be explained by inaccurate carbohydrate content and serving size. PMID:23505182

  9. Robust satisficing and the probability of survival

    NASA Astrophysics Data System (ADS)

    Ben-Haim, Yakov

    2014-01-01

    Concepts of robustness are sometimes employed when decisions under uncertainty are made without probabilistic information. We present a theorem that establishes necessary and sufficient conditions for non-probabilistic robustness to be equivalent to the probability of satisfying the specified outcome requirements. When this holds, probability is enhanced (or maximised) by enhancing (or maximising) robustness. Two further theorems establish important special cases. These theorems have implications for success or survival under uncertainty. Applications to foraging and finance are discussed.
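
    Ben-Haim's robustness function asks: how large can the horizon of uncertainty grow before the outcome requirement is violated? A toy sketch with an assumed linear reward `q*u` (the reward model, nominal value, and requirement are invented for illustration, not taken from the paper):

    ```python
    def robustness(q, r_crit, u_nominal=1.0):
        """Info-gap robustness: the largest uncertainty horizon h such that the
        reward q*u meets r_crit for every u with |u - u_nominal| <= h.
        Worst case at horizon h is u = u_nominal - h, so solve q*(u0 - h) = r_crit."""
        return max(u_nominal - r_crit / q, 0.0)

    # Demanding more reward costs robustness: the classic trade-off curve.
    for r in (5.0, 7.0, 9.0):
        print(r, robustness(10.0, r))
    ```

    The theorem in the paper says that, under stated conditions, maximising this h also maximises the probability of satisfying the requirement, even though no probabilities appear in the calculation.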

  10. Robustness enhancement of neurocontroller and state estimator

    NASA Technical Reports Server (NTRS)

    Troudet, Terry

    1993-01-01

    The feasibility of enhancing neurocontrol robustness, through training of the neurocontroller and state estimator in the presence of system uncertainties, is investigated on the example of a multivariable aircraft control problem. The performance and robustness of the newly trained neurocontroller are compared to those for an existing neurocontrol design scheme. The newly designed dynamic neurocontroller exhibits a better trade-off between phase and gain stability margins, and it is significantly more robust to degradations of the plant dynamics.

  11. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    NASA Astrophysics Data System (ADS)

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-03-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
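
    The pairing-and-trimming idea can be sketched compactly. This is a toy version only: the published MIDAS also relaxes the one-year condition for gappy series and derives a robust trend uncertainty; the pairing tolerance and the 2-sigma trim below are illustrative assumptions:

    ```python
    import numpy as np

    def midas_slope(t, x, tol=0.01):
        """Median slope over data pairs ~1 year apart (suppresses seasonality
        and most steps), then trim outlying slopes and re-take the median."""
        t, x = np.asarray(t, float), np.asarray(x, float)
        slopes = []
        for i in range(len(t)):
            j = int(np.argmin(np.abs(t - (t[i] + 1.0))))   # partner ~1 year later
            if j != i and abs(t[j] - t[i] - 1.0) < tol:
                slopes.append((x[j] - x[i]) / (t[j] - t[i]))
        slopes = np.asarray(slopes)
        m = np.median(slopes)
        mad = 1.4826 * np.median(np.abs(slopes - m))        # robust scale estimate
        kept = slopes[np.abs(slopes - m) <= 2.0 * mad] if mad > 0 else slopes
        return float(np.median(kept))

    t = np.round(np.arange(0.0, 5.0, 0.1), 10)    # 5 years, ~36-day sampling
    x = 2.0 * t + 5.0 * (t > 2.5)                 # 2 mm/yr trend plus a step
    print(midas_slope(t, x))                      # ~2.0 despite the step
    ```

    Pairs that straddle the step produce one-sided outlier slopes, but the median of the one-year-pair slopes stays on the true trend.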

  12. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    PubMed Central

    Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-01-01

    Abstract Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil‐Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj–xi)/(tj–ti) computed between all data pairs i > j. For normally distributed data, Theil‐Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil‐Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one‐sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root‐mean‐square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences. PMID:27668140

  14. Accuracy Evaluation of a Mobile Mapping System with Advanced Statistical Methods

    NASA Astrophysics Data System (ADS)

    Toschi, I.; Rodríguez-Gonzálvez, P.; Remondino, F.; Minto, S.; Orlandini, S.; Fuller, A.

    2015-02-01

    This paper discusses a methodology for evaluating the precision and accuracy of a commercial Mobile Mapping System (MMS) with advanced statistical methods. So far, the metric potential of this emerging mapping technology has been studied in only a few papers, which generally assume that errors follow a normal distribution. In fact, this hypothesis should be carefully verified in advance, in order to test how well classical Gaussian statistics fit datasets that are usually affected by asymmetrical gross errors. The workflow adopted in this study relies on a Gaussian assessment, followed by an outlier filtering process. Finally, non-parametric statistical models are applied in order to achieve a robust estimation of the error dispersion. Among the different MMSs available on the market, the latest solution provided by RIEGL is tested here, i.e. the VMX-450 Mobile Laser Scanning System. The test area is the historic city centre of Trento (Italy), selected in order to assess the system's performance in a challenging historic urban scenario. Reference measures are derived from photogrammetric and Terrestrial Laser Scanning (TLS) surveys. All datasets show a large lack of symmetry, leading to the conclusion that the standard normal parameters are not adequate for assessing this type of data. The use of non-normal statistics thus gives a more appropriate description of the data and yields results that meet the quoted a priori errors.
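
    The paper's central point, that median-based, non-parametric dispersion measures describe asymmetrically contaminated errors better than the Gaussian mean and standard deviation, is easy to demonstrate. The contamination model below is invented for illustration; NMAD is the usual robust stand-in for the standard deviation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    errors = np.concatenate([
        rng.normal(0.0, 0.02, 950),       # well-behaved residuals (metres)
        rng.exponential(0.5, 50),         # asymmetric gross errors, positive only
    ])

    mean, std = errors.mean(), errors.std()
    median = np.median(errors)
    nmad = 1.4826 * np.median(np.abs(errors - median))   # robust scale (NMAD)

    print(f"Gaussian : {mean:+.3f} +/- {std:.3f}")       # dragged by the outliers
    print(f"Robust   : {median:+.3f} +/- {nmad:.3f}")    # near the clean 0 +/- 0.02
    ```

    With only 5% one-sided contamination, the classical standard deviation inflates by nearly an order of magnitude while the NMAD barely moves.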

  15. Robust Fixed-Structure Controller Synthesis

    NASA Technical Reports Server (NTRS)

    Corrado, Joseph R.; Haddad, Wassim M.; Gupta, Kajal (Technical Monitor)

    2000-01-01

    The ability to develop an integrated control system design methodology for robust, high-performance controllers satisfying multiple design criteria and real-world hardware constraints constitutes a challenging task. The increasingly stringent performance specifications required for controlling such systems necessitate a trade-off between controller complexity and robustness. The principal challenge of minimal-complexity robust control design is to arrive at a tractable control design formulation in spite of the extreme complexity of such systems. Hence, the design of minimal-complexity robust controllers for systems in the face of modeling errors has been a major preoccupation of system and control theorists and practitioners for the past several decades.

  16. Molecular mechanisms of robustness in plants

    PubMed Central

    Lempe, Janne; Lachowiec, Jennifer; Sullivan, Alessandra. M.; Queitsch, Christine

    2012-01-01

    Robustness, the ability of organisms to buffer phenotypes against perturbations, has drawn renewed interest among developmental biologists and geneticists. A growing body of research supports an important role for robustness in the genotype-to-phenotype translation, with far-reaching implications for evolutionary processes and disease susceptibility. As in animals and fungi, plant robustness is a function of genetic network architecture. Most perturbations are buffered; however, perturbation of network hubs destabilizes many traits. Here, we review recent advances in identifying molecular robustness mechanisms in plants that have been enabled by combining classical genetics and population genetics with genome-scale data. PMID:23279801

  17. Robust Hypothesis Testing with α-Divergence

    NASA Astrophysics Data System (ADS)

    Gul, Gokhan; Zoubir, Abdelhak M.

    2016-09-01

    A robust minimax test for two composite hypotheses, which are determined by the neighborhoods of two nominal distributions with respect to a set of distances called α-divergence distances, is proposed. Sion's minimax theorem is adopted to characterize the saddle value condition. Least favorable distributions, the robust decision rule, and the robust likelihood ratio test are derived. If the nominal probability distributions satisfy a symmetry condition, the design procedure is shown to simplify considerably. The parameters controlling the degree of robustness are bounded from above, and the bounds are shown to result from the solution of a set of equations. Simulations evaluate and exemplify the theoretical derivations.
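
    The classical device behind such minimax tests is Huber's clipped likelihood ratio, which the α-divergence neighborhoods generalize. A toy Gaussian sketch (the means, variance, and clipping level `c` are illustrative assumptions, not the paper's construction):

    ```python
    def clipped_llr(x, mu0=0.0, mu1=1.0, sigma=1.0, c=2.0):
        """Log-likelihood ratio of H1 vs H0 for one Gaussian sample, clipped to
        [-c, c] so a single outlier cannot dominate the test statistic."""
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        return max(-c, min(c, llr))

    def robust_decide_h1(samples, threshold=0.0, **kw):
        """Decide H1 when the summed clipped LLR exceeds the threshold."""
        return sum(clipped_llr(x, **kw) for x in samples) > threshold

    data = [1.5, 1.2, 1.4, -100.0]     # H1-like samples plus one gross outlier
    print(robust_decide_h1(data))      # True: the outlier is clipped at -c
    ```

    The unclipped test on the same data would be swamped by the single outlier and decide H0; clipping bounds the influence of any one observation.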

  18. Robust fluidic connections to freestanding microfluidic hydrogels

    PubMed Central

    Baer, Bradly B.; Larsen, Taylor S. H.

    2015-01-01

    Biomimetic scaffolds approaching physiological scale, whose size and large cellular load far exceed the limits of diffusion, require incorporation of a fluidic means to achieve adequate nutrient/metabolite exchange. This need has driven the extension of microfluidic technologies into the area of biomaterials. While construction of perfusable scaffolds is essentially a problem of microfluidic device fabrication, functional implementation of free-standing, thick-tissue constructs depends upon successful integration of external pumping mechanisms through optimized connective assemblies. However, a critical analysis to identify optimal materials/assembly components for hydrogel substrates has received little focus to date. This investigation addresses this issue directly by evaluating the efficacy of a range of adhesive and mechanical fluidic connection methods to gelatin hydrogel constructs based upon both mechanical property analysis and cell compatibility. Results identify a novel bioadhesive, comprised of two enzymatically modified gelatin compounds, for connecting tubing to hydrogel constructs that is both structurally robust and non-cytotoxic. Furthermore, outcomes from this study provide clear evidence that fluidic interconnect success varies with substrate composition (specifically hydrogel versus polydimethylsiloxane), highlighting not only the importance of selecting the appropriately tailored components for fluidic hydrogel systems but also that of encouraging ongoing, targeted exploration of this issue. The optimization of such interconnect systems will ultimately promote exciting scientific and therapeutic developments provided by microfluidic, cell-laden scaffolds. PMID:26045731

  19. Robust Derivation of Risk Reduction Strategies

    NASA Technical Reports Server (NTRS)

    Richardson, Julian; Port, Daniel; Feather, Martin

    2007-01-01

    Effective risk reduction strategies can be derived mechanically given sufficient characterization of the risks present in the system and the effectiveness of available risk reduction techniques. In this paper, we address an important question: can we reliably expect mechanically derived risk reduction strategies to be better than fixed or hand-selected risk reduction strategies, given that the quantitative assessment of risks and risk reduction techniques upon which mechanical derivation is based is difficult and likely to be inaccurate? We consider this question relative to two methods for deriving effective risk reduction strategies: the strategic method defined by Kazman, Port et al [Port et al, 2005], and the Defect Detection and Prevention (DDP) tool [Feather & Cornford, 2003]. We performed a number of sensitivity experiments to evaluate how inaccurate knowledge of risk and risk reduction techniques affect the performance of the strategies computed by the Strategic Method compared to a variety of alternative strategies. The experimental results indicate that strategies computed by the Strategic Method were significantly more effective than the alternative risk reduction strategies, even when knowledge of risk and risk reduction techniques was very inaccurate. The robustness of the Strategic Method suggests that its use should be considered in a wide range of projects.

  20. Robust micromachining of compliant mechanisms using silicides

    NASA Astrophysics Data System (ADS)

    Khosraviani, Kourosh; Leung, Albert M.

    2013-01-01

    We introduce an innovative sacrificial surface micromachining process that enhances the mechanical robustness of freestanding microstructures and compliant mechanisms. This process facilitates the fabrication, and improves the assembly yield of the out-of-plane micro sensors and actuators. Fabrication of a compliant mechanism using conventional sacrificial surface micromachining results in a non-planar structure with a step between the structure and its anchor. During mechanism actuation or assembly, stress accumulation at the structure step can easily exceed the yield strength of the material and lead to the structure failure. Our process overcomes this topographic issue by virtually eliminating the step between the structure and its anchor, and achieves planarization without using chemical mechanical polishing. The process is based on low temperature and post-CMOS compatible nickel silicide technology. We use a layer of amorphous silicon (a-Si) as a sacrificial layer, which is locally converted to nickel silicide to form the anchors. High etch selectivity between silicon and nickel silicide in the xenon difluoride gas (sacrificial layer etchant) enables us to use the silicide to anchor the structures to the substrate. The formed silicide has the same thickness as the sacrificial layer; therefore, the structure is virtually flat. The maximum measured step between the anchor and the sacrificial layer is about 10 nm on a 300 nm thick sacrificial layer.

  1. How Robust Are “Isolation with Migration” Analyses to Violations of the IM Model? A Simulation Study

    PubMed Central

    Strasburg, Jared L.; Rieseberg, Loren H.

    2010-01-01

    Methods developed over the past decade have made it possible to estimate molecular demographic parameters such as effective population size, divergence time, and gene flow with unprecedented accuracy and precision. However, they make simplifying assumptions about certain aspects of the species’ histories and the nature of the genetic data, and it is not clear how robust they are to violations of these assumptions. Here, we use simulated data sets to examine the effects of a number of violations of the “Isolation with Migration” (IM) model, including intralocus recombination, population structure, gene flow from an unsampled species, linkage among loci, and divergent selection, on demographic parameter estimates made using the program IMA. We also examine the effect of having data that fit a nucleotide substitution model other than the two relatively simple models available in IMA. We find that IMA estimates are generally quite robust to small to moderate violations of the IM model assumptions, comparable with what is often encountered in real-world scenarios. In particular, population structure within species, a condition encountered to some degree in virtually all species, has little effect on parameter estimates even for fairly high levels of structure. Likewise, most parameter estimates are robust to significant levels of recombination when data sets are pared down to apparently nonrecombining blocks, although substantial bias is introduced to several estimates when the entire data set with recombination is included. In contrast, a poor fit to the nucleotide substitution model can result in an increased error rate, in some cases due to a predictable bias and in other cases due to an increase in variance in parameter estimates among data sets simulated under the same conditions. PMID:19793831

  2. High accuracy broadband infrared spectropolarimetry

    NASA Astrophysics Data System (ADS)

    Krishnaswamy, Venkataramanan

    Mueller matrix spectroscopy, or spectropolarimetry, combines conventional spectroscopy with polarimetry, providing more information than can be gleaned from spectroscopy alone. Experimental studies of the infrared polarization properties of materials covering a broad spectral range have been scarce due to the lack of available instrumentation. This dissertation aims to fill the gap through the design, development, calibration and testing of a broadband Fourier Transform Infra-Red (FT-IR) spectropolarimeter. The instrument operates over the 3-12 μm waveband and offers better overall accuracy than previous-generation instruments. Accurate calibration of a broadband spectropolarimeter is a non-trivial task due to the inherent complexity of the measurement process. An improved calibration technique is proposed for the spectropolarimeter, and numerical simulations are conducted to study its effectiveness. Insights into the geometrical structure of the polarimetric measurement matrix are provided to aid further research towards global optimization of Mueller matrix polarimeters. A high-performance infrared wire-grid polarizer is characterized using the spectropolarimeter. Mueller matrix spectrum measurements on penicillin and pine pollen are also presented.

  3. ACCURACY OF CO2 SENSORS

    SciTech Connect

    Fisk, William J.; Faulkner, David; Sullivan, Douglas P.

    2008-10-01

    Are the carbon dioxide (CO2) sensors in your demand-controlled ventilation systems sufficiently accurate? The data from these sensors are used to automatically modulate minimum rates of outdoor air ventilation. The goal is to keep ventilation rates at or above design requirements while adjusting the ventilation rate with changes in occupancy in order to save energy. Studies of energy savings from demand-controlled ventilation and of the relationship of indoor CO2 concentrations with health and work performance provide a strong rationale for using indoor CO2 data to control minimum ventilation rates [1-7]. However, this strategy will only be effective if, in practice, the CO2 sensors have reasonable accuracy. The objective of this study was, therefore, to determine whether CO2 sensor performance, in practice, is generally acceptable or problematic. This article provides a summary of study methods and findings; additional details are available in a paper in the proceedings of the ASHRAE IAQ 2007 Conference [8].

  4. Astrophysics with Microarcsecond Accuracy Astrometry

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.

    2008-01-01

    Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena are opened up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume covers the development and operation of the instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but the full orbital parameters can also be determined, allowing study of system stability in multiple-planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will also be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the galactic potential, the history of the formation of the galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.

  5. Robust bone detection in ultrasound using combined strain imaging and envelope signal power detection.

    PubMed

    Hussain, Mohammad Arafat; Hodgson, Antony; Abugharbieh, Rafeef

    2014-01-01

    Bone localization in ultrasound (US) remains challenging despite encouraging advances. Current methods, e.g. local image phase-based feature analysis, showed promising results but remain reliant on delicate parameter selection processes and prone to errors at confounding soft tissue interfaces of similar appearance to bone interfaces. We propose a different approach combining US strain imaging and envelope power detection at each radio-frequency (RF) sample. After initial estimation of strain and envelope power maps, we modify their dynamic ranges into a modified strain map (MSM) and a modified envelope map (MEM) that we subsequently fuse into a single combined map that we show corresponds robustly to actual bone boundaries. Our quantitative results demonstrate a marked reduction in false positive responses at soft tissue interfaces and an increase in bone delineation accuracy. Comparisons to the state-of-the-art on a finite-element-modelling (FEM) phantom and fiducial-based experimental phantom show an average improvement in mean absolute error (MAE) between actual and estimated bone boundaries of 32% and 14%, respectively. We also demonstrate an average reduction in false bone responses of 87% and 56%, respectively. Finally, we qualitatively validate on clinical in vivo data of the human radius and ulna bones, and demonstrate similar improvements to those observed on phantoms.
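
    The fusion step can be sketched simply: after each map is brought to a common dynamic range, a pixel must respond strongly in BOTH strain and envelope power to survive, which is what suppresses soft-tissue false positives. The normalization and multiplicative fusion below are simplifications of the paper's MSM/MEM construction, and the toy maps are invented:

    ```python
    import numpy as np

    def fuse_maps(strain, envelope):
        """Rescale each map to [0, 1], then fuse multiplicatively so only
        pixels strong in both maps are kept as candidate bone responses."""
        def to_unit(m):
            m = m - m.min()
            peak = m.max()
            return m / peak if peak > 0 else m
        return to_unit(strain) * to_unit(envelope)

    # Toy maps: (0, 0) is a bright soft-tissue interface (envelope only),
    # (1, 1) is bone (bright in both strain and envelope).
    strain = np.array([[0.0, 0.2], [0.1, 1.0]])
    envelope = np.array([[1.0, 0.1], [0.2, 0.9]])
    print(fuse_maps(strain, envelope))
    ```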

  6. Robust Quantum-Based Interatomic Potentials for Multiscale Modeling in Transition Metals

    SciTech Connect

    Moriarty, J A; Benedict, L X; Glosli, J N; Hood, R Q; Orlikowski, D A; Patel, M V; Soderlind, P; Streitz, F H; Tang, M; Yang, L H

    2005-09-27

    First-principles generalized pseudopotential theory (GPT) provides a fundamental basis for transferable multi-ion interatomic potentials in transition metals and alloys within density-functional quantum mechanics. In the central bcc metals, where multi-ion angular forces are important to materials properties, simplified model GPT or MGPT potentials have been developed based on canonical d bands to allow analytic forms and large-scale atomistic simulations. Robust, advanced-generation MGPT potentials have now been obtained for Ta and Mo and successfully applied to a wide range of structural, thermodynamic, defect and mechanical properties at both ambient and extreme conditions. Selected applications to multiscale modeling discussed here include dislocation core structure and mobility, atomistically informed dislocation dynamics simulations of plasticity, and thermoelasticity and high-pressure strength modeling. Recent algorithm improvements have provided a more general matrix representation of MGPT beyond canonical bands, allowing improved accuracy and extension to f-electron actinide metals, an order of magnitude increase in computational speed for dynamic simulations, and the development of temperature-dependent potentials.

  7. Increasing the robustness of phenological models for Vitis vinifera cv. Chardonnay.

    PubMed

    Caffarra, Amelia; Eccel, Emanuele

    2010-05-01

    Phenological models are important tools for planning viticultural practices in the short term and for projecting the impact of climate change on grapevine (Vitis vinifera) in the long term. However, the difficulties in obtaining phenological models which provide accurate predictions on a regional scale prevent them from being exploited to their full potential. The aim of this work was to obtain a robust phenological model for V. vinifera cv. Chardonnay. During calibration of the sub-models for budburst, flowering and veraison we implemented a series of measures to prevent overfitting and to give greater physiological meaning to the models. Among these were the use of experimental information on the response of Chardonnay to forcing temperatures, restriction of parameter space into physiologically meaningful limits prior to calibration, and simplification of the previously selected sub-models. The resulting process-based model had good internal validity and a good level of accuracy in predicting phenological events from external datasets. Model performance was especially high for the prediction of flowering and veraison, and comparison with other models confirmed it as a better predictor of phenology, even in extremely warm years. The modelling study highlighted a different phenological behaviour at the only mountain station, Cembra. We hypothesised that phenotypical plasticity could lead to growth rates adapting to a lower mean temperature, a mechanism not usually accounted for by phenological models. PMID:19937456
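
    Process-based phenological sub-models of this kind typically accumulate thermal forcing until a threshold is crossed. The sketch below is the generic growing-degree-day form, not the paper's calibrated Chardonnay model; the base temperature and threshold are illustrative values of the sort that would be restricted to physiological ranges before calibration:

    ```python
    def degree_day_event(daily_mean_temps, base=5.0, threshold=120.0):
        """Predict the day-of-series on which accumulated forcing (degree days
        above `base`) first reaches `threshold`; None if never reached."""
        total = 0.0
        for day, t in enumerate(daily_mean_temps, start=1):
            total += max(t - base, 0.0)
            if total >= threshold:
                return day
        return None

    print(degree_day_event([10.0] * 30))   # 5 degree days per day -> day 24
    ```

    Overfitting enters when `base` and `threshold` are fitted freely per site; the paper's remedy is to constrain such parameters with experimental forcing-response data before calibration.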

  8. A Robust Deep Model for Improved Classification of AD/MCI Patients.

    PubMed

    Li, Feng; Tran, Loc; Thung, Kim-Han; Ji, Shuiwang; Shen, Dinggang; Li, Jiang

    2015-09-01

    Accurate classification of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI), plays a critical role in possibly preventing progression of memory impairment and improving quality of life for AD patients. Among many research tasks, it is of a particular interest to identify noninvasive imaging biomarkers for AD diagnosis. In this paper, we present a robust deep learning system to identify different progression stages of AD patients based on MRI and PET scans. We utilized the dropout technique to improve classical deep learning by preventing its weight coadaptation, which is a typical cause of overfitting in deep learning. In addition, we incorporated stability selection, an adaptive learning factor, and a multitask learning strategy into the deep learning framework. We applied the proposed method to the ADNI dataset, and conducted experiments for AD and MCI conversion diagnosis. Experimental results showed that the dropout technique is very effective in AD diagnosis, improving the classification accuracies by 5.9% on average as compared to the classical deep learning methods. PMID:25955998
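
    The dropout technique credited here with the 5.9% gain is simple to state: during training, randomly zero a fraction of activations and rescale the survivors ("inverted" dropout), discouraging weight co-adaptation while leaving the test-time forward pass unchanged. A minimal sketch (the rate and array shapes are illustrative):

    ```python
    import numpy as np

    def dropout_forward(a, p=0.5, rng=None, train=True):
        """Inverted dropout: zero each activation with probability p during
        training and scale survivors by 1/(1-p); identity at test time."""
        if not train:
            return a
        rng = rng or np.random.default_rng()
        mask = rng.random(a.shape) >= p
        return a * mask / (1.0 - p)

    a = np.ones(1000)
    out = dropout_forward(a, p=0.5, rng=np.random.default_rng(0))
    print(out.mean())   # close to 1.0: the expectation is preserved
    ```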

  9. A Robust Deep Model for Improved Classification of AD/MCI Patients

    PubMed Central

    Li, Feng; Tran, Loc; Thung, Kim-Han; Ji, Shuiwang; Shen, Dinggang; Li, Jiang

    2015-01-01

    Accurate classification of Alzheimer’s Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), plays a critical role in possibly preventing progression of memory impairment and improving quality of life for AD patients. Among many research tasks, it is of particular interest to identify noninvasive imaging biomarkers for AD diagnosis. In this paper, we present a robust deep learning system to identify different progression stages of AD patients based on MRI and PET scans. We utilized the dropout technique to improve classical deep learning by preventing its weight co-adaptation, which is a typical cause of over-fitting in deep learning. In addition, we incorporated stability selection, an adaptive learning factor, and a multi-task learning strategy into the deep learning framework. We applied the proposed method to the ADNI data set and conducted experiments for AD and MCI conversion diagnosis. Experimental results showed that the dropout technique is very effective in AD diagnosis, improving the classification accuracies by 5.9% on average as compared to the classical deep learning methods. PMID:25955998

  10. Quality--a radiology imperative: interpretation accuracy and pertinence.

    PubMed

    Lee, Joseph K T

    2007-03-01

    Physicians as a group have neither consistently defined nor systematically measured the quality of medical practice. To referring clinicians and patients, a good radiologist is one who is accessible, recommends appropriate imaging studies, and provides timely consultation and reports with high interpretation accuracy. For determining the interpretation accuracy of cases with pathologic or surgical proof, the author proposes tracking data on positive predictive value, disease detection rates, and abnormal interpretation rates for individual radiologists. For imaging studies with no pathologic proof or adequate clinical follow-up, the author proposes measuring the concordance and discordance of the interpretations within a peer group. The monitoring of interpretation accuracy can be achieved through periodic imaging, pathologic correlation, regular peer review of randomly selected cases, or subscription to the ACR's RADPEER system. Challenges facing the implementation of an effective peer-review system include physician time, subjectivity in assessing discordant interpretations, lengthy and equivocal interpretations, and the potential misassignment of false-positive interpretations.
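The tracking statistics the author proposes can be computed from simple case counts. A minimal sketch (the formulas and the counts below are illustrative; the article does not specify them):

```python
def interpretation_metrics(tp, fp, fn, tn):
    """Per-radiologist metrics from proven case counts (illustrative definitions).
    tp: abnormal reads confirmed abnormal; fp: abnormal reads proven normal;
    fn: normal reads proven abnormal; tn: normal reads proven normal."""
    total = tp + fp + fn + tn
    ppv = tp / (tp + fp)               # fraction of abnormal reads that were right
    detection_rate = tp / (tp + fn)    # fraction of true disease that was caught
    abnormal_rate = (tp + fp) / total  # how often the reader calls "abnormal"
    return ppv, detection_rate, abnormal_rate

ppv, det, abn = interpretation_metrics(tp=45, fp=5, fn=10, tn=940)
# ppv = 0.9, det ≈ 0.818, abn = 0.05
```

Tracked over time, drift in any of the three rates relative to a peer group would flag a reader for review.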

  11. Total Variation Diminishing (TVD) schemes of uniform accuracy

    NASA Technical Reports Server (NTRS)

Hartwich, Peter-M.; Hsu, Chung-Hao; Liu, C. H.

    1988-01-01

    Explicit second-order accurate finite-difference schemes for the approximation of hyperbolic conservation laws are presented. These schemes are nonlinear even for the constant coefficient case. They are based on first-order upwind schemes. Their accuracy is enhanced by locally replacing the first-order one-sided differences with either second-order one-sided differences or central differences or a blend thereof. The appropriate local difference stencils are selected such that they give TVD schemes of uniform second-order accuracy in the scalar, or linear systems, case. Like conventional TVD schemes, the new schemes avoid a Gibbs phenomenon at discontinuities of the solution, but they do not switch back to first-order accuracy, in the sense of truncation error, at extrema of the solution. The performance of the new schemes is demonstrated in several numerical tests.
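The mechanism described, first-order upwind differences locally corrected by limited second-order differences, can be sketched for linear advection with a minmod limiter (an illustrative MUSCL-type TVD scheme, not the authors' exact stencil-selection logic):

```python
import numpy as np

def minmod(a, b):
    """Pick the smaller slope when the signs agree, zero at extrema/jumps."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_step(u, c):
    """One step of a MUSCL-type TVD scheme for u_t + a u_x = 0 (a > 0),
    with CFL number c = a*dt/dx <= 1: first-order upwind plus a limited
    second-order correction, on a periodic grid."""
    du_left = u - np.roll(u, 1)          # backward differences
    du_right = np.roll(u, -1) - u        # forward differences
    slope = minmod(du_left, du_right)    # limited slope in each cell
    u_face = u + 0.5 * (1 - c) * slope   # reconstructed value at i+1/2 (upwind side)
    flux = c * u_face                    # numerical flux times dt/dx
    return u - (flux - np.roll(flux, 1))

# Advect a square pulse around a periodic domain: no new extrema appear.
u = np.where((np.arange(100) > 20) & (np.arange(100) < 40), 1.0, 0.0)
for _ in range(50):
    u = tvd_step(u, 0.5)
```

At a discontinuity the limiter returns zero slope, so the scheme falls back to monotone first-order upwinding there, which is exactly the Gibbs-suppression behavior the abstract describes.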

  12. On the Robustness Properties of M-MRAC

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram

    2012-01-01

The paper presents performance and robustness analysis of the modified reference model MRAC (model reference adaptive control), or M-MRAC for short, which differs from conventional MRAC systems by feeding back the tracking error to the reference model. The tracking error feedback gain, in concert with the adaptation rate, provides an additional capability to regulate not only the transient performance of the tracking error but also the transient performance of the control signal. This differs from conventional MRAC systems, in which the adaptation rate is the only tool for regulating the transient performance of the tracking error. It is shown that the selection of the feedback gain and the adaptation rate resolves the trade-off between robustness and performance, in the sense that increasing the feedback gain improves the behavior of the adaptive control signal, and hence the system's robustness to time delays (or unmodeled dynamics), while increasing the adaptation rate improves the tracking performance and the system's robustness to parametric uncertainties and external disturbances.
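The modification described, feeding the tracking error back into the reference model, can be sketched for a scalar plant (an illustrative Euler simulation with assumed gains; not the paper's formulation):

```python
# Minimal scalar M-MRAC sketch (gains and the plant below are assumptions):
#   plant:      x'  = a*x + u, with a unknown to the controller
#   ref. model: xm' = -am*xm + am*r + l*e,  e = x - xm  (error fed back, gain l)
#   control:    u   = -k*x + am*r, with k adapted toward the ideal a + am
dt, am, a_true, gamma, l, r = 0.001, 2.0, 1.0, 100.0, 10.0, 1.0
x = xm = k = 0.0
for _ in range(20000):               # 20 s of Euler integration
    e = x - xm
    u = -k * x + am * r              # certainty-equivalence control law
    x += dt * (a_true * x + u)       # true (unknown) plant
    xm += dt * (-am * xm + am * r + l * e)  # modified reference model
    k += dt * gamma * e * x          # gradient adaptation law
# x settles at the commanded value r = 1
```

With the Lyapunov function V = e²/2 + (k - a - am)²/(2γ) one gets V' = -(am + l)e² ≤ 0, so a larger l drains the error faster and damps the control transient, while γ sets how fast the parametric uncertainty is absorbed, mirroring the trade-off the abstract describes.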

  13. Advanced robust design optimization of FRP sandwich floor panels

    NASA Astrophysics Data System (ADS)

    Awad, Z. K.; Gonzalez, F.; Aravinthan, T.

    2010-06-01

FRP composite is now being used in the construction of main structural elements, such as FRP sandwich panels for flooring systems and bridges. The objective of this research is to use multi-objective optimization and robust design techniques to minimize the weight of the FRP sandwich floor panel while maximizing its natural frequency. An Australian manufacturer has invented a new FRP composite panel suitable for civil engineering construction. This research work aims to develop an optimal design of a structural fibre composite sandwich floor panel by coupling a finite element (FE) model with a robust design optimization method. The design variables are the skin ply thicknesses, with the core thickness treated as a robust variable. Results indicate that there is a trade-off between the objectives. The robust design technique is then used to select a set of candidate geometries with high natural frequency, low weight and low standard deviation. The design simulation was formulated according to the EUROCOMP standard design constraints.

  14. Mutational Robustness of Morphological Traits in the Ciliate Tetrahymena thermophila

    PubMed Central

    Long, Hongan; Zufall, Rebecca A.

    2014-01-01

Ciliate nuclear architecture, in particular the sequestration of a transcriptionally silent germline genome, allows for the accumulation of mutations that are "hidden" from selection during many rounds of asexual reproduction. After sexual conjugation, these mutations are expressed, potentially resulting in highly variable phenotypes. Morphological traits are widely used in ciliate taxonomy; however, the extent to which the values of these traits are robust to change in the face of mutation is largely unknown. In this study, we examine the effects of mutations accumulated in the germline genome to test the mutational robustness of four traits commonly used in ciliate morphological taxonomy (number of somatic kineties, number of post-oral kineties, macronuclear size, and cell size). We find that the number of post-oral kineties is robust to mutation, confirming that it should be preferentially used in taxonomy. By contrast, we find that, as in other unicellular and multicellular species, cell/macronucleus size changes in response to mutation. Thus, we argue that cell/macronucleus sizes, which are widely used in taxonomy, should be treated cautiously for species identification. Finally, we find evidence of correlations between cell and macronucleus sizes and fitness, suggesting possible mutational pleiotropy. This study demonstrates the importance of, and methods for, determining mutational robustness to guide morphological taxonomy in ciliates. PMID:25227613

  15. Closed-loop and robust control of quantum systems.

    PubMed

    Chen, Chunlin; Wang, Lin-Cheng; Wang, Yuanlong

    2013-01-01

    For most practical quantum control systems, it is important and difficult to attain robustness and reliability due to unavoidable uncertainties in the system dynamics or models. Three kinds of typical approaches (e.g., closed-loop learning control, feedback control, and robust control) have been proved to be effective to solve these problems. This work presents a self-contained survey on the closed-loop and robust control of quantum systems, as well as a brief introduction to a selection of basic theories and methods in this research area, to provide interested readers with a general idea for further studies. In the area of closed-loop learning control of quantum systems, we survey and introduce such learning control methods as gradient-based methods, genetic algorithms (GA), and reinforcement learning (RL) methods from a unified point of view of exploring the quantum control landscapes. For the feedback control approach, the paper surveys three control strategies including Lyapunov control, measurement-based control, and coherent-feedback control. Then such topics in the field of quantum robust control as H(∞) control, sliding mode control, quantum risk-sensitive control, and quantum ensemble control are reviewed. The paper concludes with a perspective of future research directions that are likely to attract more attention.

  16. Novel robust skylight compass method based on full-sky polarization imaging under harsh conditions.

    PubMed

    Tang, Jun; Zhang, Nan; Li, Dalin; Wang, Fei; Zhang, Binzhen; Wang, Chenguang; Shen, Chong; Ren, Jianbin; Xue, Chenyang; Liu, Jun

    2016-07-11

A novel method based on the Pulse Coupled Neural Network (PCNN) algorithm is proposed for highly accurate and robust compass information calculation from polarized skylight imaging, which showed good accuracy and reliability especially under cloudy weather, surrounding shielding, and moonlight. The degree of polarization (DOP) combined with the angle of polarization (AOP), calculated from the full-sky polarization image, were used for the compass information calculation. Because of its high sensitivity to the environment, DOP was used to judge the corruption of polarization information using the PCNN algorithm. Only areas with high AOP accuracy were kept after the DOP PCNN filtering, thereby greatly increasing the compass accuracy and robustness. Experimental results showed that the compass accuracy was 0.1805° under clear weather. The method was also proven applicable under shielding by clouds, trees and buildings, with a compass accuracy better than 1°. With weak polarization sources such as moonlight, the method was shown experimentally to have an accuracy of 0.878°. PMID:27410853
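The DOP and AOP quantities the method starts from can be computed from four polarizer-angle images via the linear Stokes parameters (a standard textbook computation; the PCNN filtering stage is not shown):

```python
import numpy as np

def dop_aop(i0, i45, i90, i135):
    """Degree and angle of linear polarization from four polarizer-angle
    images (0°, 45°, 90°, 135°) via the linear Stokes parameters."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)        # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dop = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)            # radians, in (-pi/2, pi/2]
    return dop, aop

# A pixel fully polarized at 45°: all light passes the 45° polarizer,
# half passes at 0° and 90°, none at 135° (Malus's law).
dop, aop = dop_aop(i0=np.array(0.5), i45=np.array(1.0),
                   i90=np.array(0.5), i135=np.array(0.0))
# dop = 1.0, aop = pi/4
```

Applied per pixel to a full-sky image, the DOP map is what a filter such as the paper's PCNN step would threshold to discard corrupted regions before the AOP map is used for heading.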

  18. Ground Truth Sampling and LANDSAT Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, J. W.; Gunther, F. J.; Campbell, W. J.

    1982-01-01

    It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy procedures developed for nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of that and the other procedures used. The purpose of the accuracy assessment was to allow the comparison of the cost and accuracy of various classification procedures as applied to various data types.

  19. On the Interplay between the Evolvability and Network Robustness in an Evolutionary Biological Network: A Systems Biology Approach

    PubMed Central

    Chen, Bor-Sen; Lin, Ying-Po

    2011-01-01

In the evolutionary process, the random transmission and mutation of genes provide biological diversities for natural selection. In order to preserve functional phenotypes between generations, gene networks need to evolve robustly under the influence of random perturbations. Therefore, the robustness of the phenotype, in the evolutionary process, exerts a selection force on gene networks to maintain network functions. However, gene networks need to adjust, by variations in genetic content, to generate phenotypes for new challenges in the network's evolution, i.e., the evolvability. Hence, there should be some interplay between the evolvability and network robustness in evolutionary gene networks. In this study, the interplay between the evolvability and network robustness of a gene network and a biochemical network is discussed from a nonlinear stochastic system point of view. It was found that if the genetic robustness plus environmental robustness is less than the network robustness, the phenotype of the biological network is robust in evolution. The tradeoff between the genetic robustness and environmental robustness in evolution is discussed from the stochastic stability robustness and sensitivity of the nonlinear stochastic biological network, which may be relevant to the statistical tradeoff between bias and variance, the so-called bias/variance dilemma. Further, the tradeoff could be considered as an antagonistic pleiotropic action of a gene network and discussed from the systems biology perspective. PMID:22084563

  20. Robust control with structured perturbations

    NASA Technical Reports Server (NTRS)

    Keel, Leehyun

    1988-01-01

Two important problems in the area of control systems design and analysis are discussed. The first is robust stability using the characteristic polynomial, which is treated first in characteristic polynomial coefficient space with respect to perturbations in the coefficients of the characteristic polynomial, and then for a control system containing perturbed parameters in the transfer function description of the plant. In coefficient space, a simple expression is first given for the l(sup 2) stability margin for both monic and non-monic cases. Following this, the method is extended to reveal a much larger stability region. This result has been extended to parameter space, so that one can determine the stability margin, in terms of ranges of parameter variations, of the closed-loop system when the nominal stabilizing controller is given. The stability margin can be enlarged by choosing a better stabilizing controller. The second problem is lower-order stabilization, motivated as follows. Even though a wide range of stabilizing controller design methodologies is available in both the state space and transfer function domains, all of these methods produce unnecessarily high-order controllers. In practice, stabilization is only one of many requirements to be satisfied, so if the order of a stabilizing controller is excessively high, one can normally expect an even higher-order controller upon completion of the design, after inclusion of dynamic response requirements, etc. It is therefore reasonable to obtain the lowest possible order stabilizing controller first and then adjust the controller to meet additional requirements. An algorithm for designing a lower-order stabilizing controller is given. The algorithm does not necessarily produce the minimum-order controller; however, it is theoretically logical, and simulation results show that it works in general.
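The coefficient-space stability margin idea can be illustrated with a crude Monte Carlo check (sampling is a stand-in for the closed-form l² expression, which the abstract does not give):

```python
import numpy as np

def is_hurwitz(coeffs):
    """True if all roots of the characteristic polynomial lie in the open
    left half-plane (continuous-time stability)."""
    return np.all(np.roots(coeffs).real < 0)

def sampled_margin(coeffs, radius, n=2000, seed=0):
    """Crude Monte Carlo check of an l2 coefficient-space margin: does every
    sampled perturbation of the non-leading coefficients with l2 norm `radius`
    keep the polynomial Hurwitz? (Illustrative only; not the paper's
    analytic expression.)"""
    rng = np.random.default_rng(seed)
    base = np.asarray(coeffs, dtype=float)
    for _ in range(n):
        d = rng.normal(size=base.size - 1)
        d *= radius / np.linalg.norm(d)   # perturbation on the l2 sphere
        trial = base.copy()
        trial[1:] += d                    # keep the polynomial monic
        if not is_hurwitz(trial):
            return False
    return True

# s^2 + 2s + 2 is Hurwitz; small coefficient perturbations keep it so,
# large ones can push a coefficient negative and destabilize it.
print(sampled_margin([1.0, 2.0, 2.0], radius=0.5))   # True
print(sampled_margin([1.0, 2.0, 2.0], radius=2.5))   # False
```

A bisection on `radius` between the last passing and first failing values would give a numerical estimate of the margin that the analytic expression computes exactly.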

  1. Noise and Robustness in Phyllotaxis

    PubMed Central

    Mirabet, Vincent; Besnard, Fabrice; Vernoux, Teva; Boudaoud, Arezki

    2012-01-01

    A striking feature of vascular plants is the regular arrangement of lateral organs on the stem, known as phyllotaxis. The most common phyllotactic patterns can be described using spirals, numbers from the Fibonacci sequence and the golden angle. This rich mathematical structure, along with the experimental reproduction of phyllotactic spirals in physical systems, has led to a view of phyllotaxis focusing on regularity. However all organisms are affected by natural stochastic variability, raising questions about the effect of this variability on phyllotaxis and the achievement of such regular patterns. Here we address these questions theoretically using a dynamical system of interacting sources of inhibitory field. Previous work has shown that phyllotaxis can emerge deterministically from the self-organization of such sources and that inhibition is primarily mediated by the depletion of the plant hormone auxin through polarized transport. We incorporated stochasticity in the model and found three main classes of defects in spiral phyllotaxis – the reversal of the handedness of spirals, the concomitant initiation of organs and the occurrence of distichous angles – and we investigated whether a secondary inhibitory field filters out defects. Our results are consistent with available experimental data and yield a prediction of the main source of stochasticity during organogenesis. Our model can be related to cellular parameters and thus provides a framework for the analysis of phyllotactic mutants at both cellular and tissular levels. We propose that secondary fields associated with organogenesis, such as other biochemical signals or mechanical forces, are important for the robustness of phyllotaxis. More generally, our work sheds light on how a target pattern can be achieved within a noisy background. PMID:22359496
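A toy inhibitory-field model conveys the flavor of such simulations (illustrative only; the paper's model is tied to auxin transport and adds a secondary field, and the kernel and parameters below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def next_organ_angle(angles, ages, decay=0.8, noise=0.0):
    """Place the next organ where the total inhibition from existing organs
    is minimal. Inhibition from each organ fades with its age (older organs
    have moved away from the tip) and with angular distance; additive noise
    perturbs the field, which is how placement defects arise in this sketch."""
    theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    field = np.zeros_like(theta)
    for a, age in zip(angles, ages):
        d = np.abs(np.angle(np.exp(1j * (theta - a))))  # circular distance
        field += decay**age / (d + 0.05)                # inhibition kernel
    field += noise * rng.normal(size=field.size)
    return theta[np.argmin(field)]

# Grow 30 organs and record the successive divergence angles.
angles, ages = [0.0], [0]
for _ in range(30):
    angles.append(next_organ_angle(angles, ages, noise=0.1))
    ages = [a + 1 for a in ages] + [0]

divergence = np.degrees(np.diff(angles)) % 360
```

Sweeping `noise` and tallying sign flips in the divergence sequence is one way to reproduce, in miniature, the defect statistics the paper analyzes.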

  2. Development of a robust, sensitive and selective liquid chromatography-tandem mass spectrometry assay for the quantification of the novel macrocyclic peptide kappa opioid receptor antagonist [D-Trp]CJ-15,208 in plasma and application to an initial pharmacokinetic study.

    PubMed

    Khaliq, Tanvir; Williams, Todd D; Senadheera, Sanjeewa N; Aldrich, Jane V

    2016-08-15

Selective kappa opioid receptor (KOR) antagonists may have therapeutic potential as treatments for substance abuse and mood disorders. Since [D-Trp]CJ-15,208 (cyclo[Phe-d-Pro-Phe-d-Trp]) is a novel potent KOR antagonist in vivo, it is imperative to evaluate its pharmacokinetic properties to assist the development of analogs as potential therapeutic agents, necessitating the development and validation of a quantitative method for determining its plasma levels. A method for quantifying [D-Trp]CJ-15,208 was developed employing high performance liquid chromatography-tandem mass spectrometry in mouse plasma. Sample preparation was accomplished through a simple one-step protein precipitation method with acetonitrile, and [D-Trp]CJ-15,208 analyzed following HPLC separation on a Hypersil BDS C8 column. Multiple reaction monitoring (MRM), based on the transitions m/z 578.1→217.1 and 245.0, was specific for [D-Trp]CJ-15,208, and MRM based on the transition m/z 566.2→232.9 was specific for the internal standard without interference from endogenous substances in blank mouse plasma. The assay was linear over the concentration range 0.5-500 ng/mL with a mean r(2)=0.9987. The mean inter-day accuracy and precision for all calibration standards were 93-118% and 8.9%, respectively. The absolute recoveries were 85±6% and 81±9% for [D-Trp]CJ-15,208 and the internal standard, respectively. The analytical method had excellent sensitivity with a lower limit of quantification of 0.5 ng/mL using a sample volume of 20 μL. The method was successfully applied to an initial pharmacokinetic study of [D-Trp]CJ-15,208 following intravenous administration to mice. PMID:27318293
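The calibration workflow behind figures like r(2)=0.9987 can be sketched with a weighted linear fit (the concentrations and responses below are invented for illustration, not the authors' data):

```python
import numpy as np

# Hypothetical calibration standards (concentration in ng/mL vs. peak-area
# ratio to the internal standard) spanning the assay's 0.5-500 ng/mL range.
conc = np.array([0.5, 1.0, 5.0, 10.0, 50.0, 100.0, 250.0, 500.0])
ratio = 0.021 * conc + 0.003 + np.array([0.001, -0.002, 0.004, -0.005,
                                         0.02, -0.03, 0.05, -0.04])

# Weight by 1/concentration so the low end of the curve is not swamped by
# the large responses at high concentration (a common LC-MS/MS choice).
slope, intercept = np.polyfit(conc, ratio, 1, w=1.0 / conc)

back_calc = (ratio - intercept) / slope   # back-calculated concentrations
accuracy_pct = 100 * back_calc / conc     # per-standard accuracy, %
```

Per-standard accuracy within roughly 85-115% (and within 80-120% at the lower limit of quantification) is the usual bioanalytical acceptance criterion that tables like the one summarized in the abstract report.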

  3. Design for robustness of unique, multi-component engineering systems

    NASA Astrophysics Data System (ADS)

    Shelton, Kenneth A.

    2007-12-01

design concept. These allele values are unitless themselves, but map to both configuration descriptions and attribute values. The Value Distance and Component Distance are metrics that measure the relative differences between two design concepts using the allele values, and all differences in a population of design concepts are calculated relative to a reference design, called the "base design". The base design is the top-ranked member of the population in weighted terms of robustness and performance. Robustness is determined based on the change in multi-objective performance as Value Distance and Component Distance (and thus differences in design) increase. It is assessed as acceptable if differences in design configurations up to specified tolerances result in performance changes that remain within a specified performance range. The design configuration difference tolerances and performance range together define the designer's risk management preferences for the final design concepts. Additionally, a complementary visualization capability was developed, called the "Design Solution Topography". This concept allows the visualization of a population of design concepts as a 3-axis plot where each point represents an entire design concept. The axes are the Value Distance, Component Distance and Performance Objective. The key benefit of the Design Solution Topography is that it allows the designer to visually identify and interpret the overall robustness of the current population of design concepts for a particular performance objective. In a multi-objective problem, each performance objective has its own Design Solution Topography view. These new concepts are implemented in an evolutionary computation-based conceptual designing method called the "Design for Robustness Method" that produces robust design concepts.
The design procedures associated with this method enable designers to evaluate and ensure robustness in selected designs that also perform within a desired

  4. Factors affecting the accuracy of chest compression depth estimation

    PubMed Central

    Kang, Jung Hee; Cha, Won Chul; Chae, Minjung Kathy; Park, Hang A; Hwang, Sung Yeon; Jin, Sang Chan; Lee, Tae Rim; Shin, Tae Gun; Sim, Min Seob; Jo, Ik Joon; Song, Keun Jeong; Rhee, Joong Eui; Jeong, Yeon Kwon

    2014-01-01

Objective We aimed to estimate the accuracy of visual estimation of chest compression depth and identify potential factors affecting accuracy. Methods This simulation study used a basic life support mannequin, the Ambu Man. We recorded chest compressions at 7 different depths from 1 to 7 cm. Each video clip was recorded for one cycle of compression. Three different viewpoints were used to record the video. After filming, 25 clips were randomly selected. Health care providers in an emergency department were asked to estimate the depth of compressions while watching the selected video clips. Examiner determinants such as experience and cardiopulmonary resuscitation training, and environment determinants such as the location of the camera (examiner), were collected and analyzed. An estimated depth was considered correct if it was consistent with the one recorded. A multivariate analysis predicting the accuracy of compression depth estimation was performed. Results Overall, 103 subjects were enrolled in the study; 42 (40.8%) were physicians, 56 (54.4%) nurses, and 5 (4.8%) emergency medical technicians. The mean accuracy was 0.89 (standard deviation, 0.76). Among examiner determinants, only subjects' occupation and clinical experience showed significant association with the outcome (P=0.03 and P=0.08, respectively). All environmental determinants showed significant association with the outcome (all P<0.001). Multivariate analysis showed that accuracy rate was significantly associated with occupation, camera position, and compression depth. Conclusions The accuracy rate of chest compression depth estimation was 0.89 and was significantly related to examiner's occupation, camera view position, and compression depth.

  5. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

LRO definitive and predictive accuracy requirements were easily met in the nominal mission orbit using the LP150Q lunar gravity model. Accuracy of the LP150Q model is poorer in the extended mission elliptical orbit. Later lunar gravity models, in particular GSFC-GRAIL-270, improve orbit determination accuracy in the extended mission. Implementation of a constrained plane when the orbit is within 45 degrees of the Earth-Moon line improves cross-track accuracy. Prediction accuracy is still challenged during full-Sun periods due to coarse spacecraft area modeling: implementation of a multi-plate area model with definitive attitude input can eliminate prediction violations, and the FDF is evaluating the use of analytic and predicted attitude modeling to improve full-Sun prediction accuracy. Comparison of the FDF ephemeris file to high-precision ephemeris files provides gross confirmation that overlap comparisons properly assess orbit accuracy.

  6. Accuracy of TCP performance models

    NASA Astrophysics Data System (ADS)

    Schwefel, Hans Peter; Jobmann, Manfred; Hoellisch, Daniel; Heyman, Daniel P.

    2001-07-01

Despite the fact that most of today's Internet traffic is transmitted via the TCP protocol, the performance behavior of networks with TCP traffic is still not well understood. Recent research activities have led to a number of performance models for TCP traffic, but the degree of accuracy of these models in realistic scenarios is still questionable. This paper provides a comparison of the results (in terms of average throughput per connection) of three different `analytic' TCP models: I. the throughput formula in [Padhye et al. 98], II. the modified Engset model of [Heyman et al. 97], and III. the analytic TCP queueing model of [Schwefel 01], which is a packet-based extension of (II). Results for all three models are computed for a scenario of N identical TCP sources that transmit data in individual TCP connections of stochastically varying size. The results for the average throughput per connection in the analytic models are compared with simulations of detailed TCP behavior. All of the analytic models are expected to show deficiencies in certain scenarios, since they neglect highly influential parameters of the actual simulation model: the approaches of models (I) and (II) only indirectly consider queueing in bottleneck routers, and in certain scenarios those models are not able to adequately describe the impact of buffer space, either qualitatively or quantitatively. Furthermore, (II) is insensitive to the actual distribution of the connection sizes. As a consequence, its prediction would also be insensitive to so-called long-range dependent (LRD) properties in the traffic that are caused by heavy-tailed connection size distributions. The simulation results show that such properties cannot be neglected for certain network topologies: LRD properties can even have a counter-intuitive impact on the average goodput, namely the goodput can be higher for small buffer sizes.
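Model I, the steady-state throughput formula of [Padhye et al. 98], is compact enough to state directly in code (a sketch of the published formula; the parameter names are conventional):

```python
import math

def tcp_throughput(mss, rtt, p, t0=1.0, b=2, wmax=None):
    """Approximate steady-state TCP throughput in bytes/s from the formula of
    Padhye et al. (1998): segment size mss (bytes), round-trip time rtt (s),
    loss event rate p, initial retransmission timeout t0 (s), and b packets
    acknowledged per ACK."""
    if p <= 0:
        raise ValueError("loss rate must be positive")
    denom = rtt * math.sqrt(2 * b * p / 3) \
        + t0 * min(1.0, 3 * math.sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p)
    rate = mss / denom                       # bytes per second
    if wmax is not None:
        rate = min(rate, wmax * mss / rtt)   # receiver-window cap
    return rate

# 1460-byte segments, 100 ms RTT, 1% loss: roughly 100 kB/s
r = tcp_throughput(mss=1460, rtt=0.1, p=0.01)
```

Note what the formula omits, and what the abstract criticizes: the bottleneck buffer size and the connection-size distribution never appear, so the model cannot capture the buffer-dependent and LRD effects the simulations expose.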

  7. Machine tool accuracy characterization workshops. Final report, May 5, 1992--November 5 1993

    SciTech Connect

    1995-01-06

The ability to assess the accuracy of machine tools is required by both tool builders and users. Builders must have this ability in order to predict the accuracy capability of a machine tool for different part geometries, to provide verifiable accuracy information for sales purposes, and to locate error sources for maintenance, troubleshooting, and design enhancement. Users require the same ability in order to make intelligent choices in selecting or procuring machine tools, to predict component manufacturing accuracy, and to perform maintenance and troubleshooting. In both instances, the ability to fully evaluate the accuracy capabilities of a machine tool and the source of its limitations is essential for using the tool to its maximum accuracy and productivity potential. This project was designed to transfer expertise in modern machine tool accuracy testing methods from LLNL to US industry, and to educate users on the use and application of emerging standards for machine tool performance testing.

  8. The Utility of Robust Means in Statistics

    ERIC Educational Resources Information Center

    Goodwyn, Fara

    2012-01-01

    Location estimates calculated from heuristic data were examined using traditional and robust statistical methods. The current paper demonstrates the impact outliers have on the sample mean and proposes robust methods to control for outliers in sample data. Traditional methods fail because they rely on the statistical assumptions of normality and…
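The point is easy to demonstrate numerically: a single outlier moves the sample mean far more than robust location estimates do (the data below are illustrative):

```python
import numpy as np

def trimmed_mean(x, prop=0.1):
    """Mean after dropping the lowest and highest `prop` fraction of values."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(prop * x.size)
    return x[k:x.size - k].mean()

# Nine well-behaved observations plus one gross outlier (98.0).
sample = np.array([4.1, 4.4, 4.6, 4.8, 5.0, 5.1, 5.3, 5.5, 5.6, 98.0])

m, med, tm = sample.mean(), np.median(sample), trimmed_mean(sample, 0.1)
# m = 14.24, med = 5.05, tm = 5.0375
```

The mean is dragged to nearly three times the typical value, while the median and the 10% trimmed mean stay with the bulk of the data, which is the behavior robust location estimators are designed to preserve.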

  9. Robust Hope and Teacher Education Policy

    ERIC Educational Resources Information Center

    Sawyer, Wayne; Singh, Michael; Woodrow, Christine; Downes, Toni; Johnston, Christine; Whitton, Diana

    2007-01-01

    The research question for this paper is: How can we mobilise robust hope in the analysis of teacher education policy? Specifically, this paper asks how a robust hope framework might speak to the "Top of the Class," a report into teacher education by the Australian House of Representatives Standing Committee on Education and Vocational Training.

  10. Hierarchical feature selection for erythema severity estimation

    NASA Astrophysics Data System (ADS)

    Wang, Li; Shi, Chenbo; Shu, Chang

    2014-10-01

At present, the PASI scoring system is used for evaluating erythema severity, which can help doctors to diagnose psoriasis [1-3]. The system relies on the subjective judgment of doctors, so its accuracy and stability cannot be guaranteed [4]. This paper proposes a stable and precise algorithm for erythema severity estimation. Our contributions are twofold. On one hand, in order to extract the multi-scale redness of erythema, we design hierarchical features. Different from traditional methods, we not only utilize color statistical features, but also divide the detection window into small windows and extract hierarchical features from them. Further, a feature re-ranking step is introduced, which guarantees that the extracted features are uncorrelated with each other. On the other hand, an adaptive boosting classifier is applied for further feature selection. During training, the classifier seeks out the most valuable features for evaluating erythema severity, owing to its strong learning ability. Experimental results demonstrate the high precision and robustness of our algorithm. The accuracy is 80.1% on a dataset comprising 116 patients' images with various kinds of erythema. Our system has been applied for erythema medical efficacy evaluation in Union Hosp, China.
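A toy version of the hierarchical features and the correlation-based re-ranking step might look like this (illustrative; the paper's exact features, window grid, and re-ranking criterion are not specified in the abstract):

```python
import numpy as np

def hierarchical_color_features(window):
    """Mean/std of the red channel over the full window and over a 2x2 grid
    of sub-windows: a toy stand-in for the paper's hierarchical features."""
    h, w = window.shape[:2]
    red = window[..., 0].astype(float)
    blocks = [red, red[:h // 2, :w // 2], red[:h // 2, w // 2:],
              red[h // 2:, :w // 2], red[h // 2:, w // 2:]]
    feats = []
    for b in blocks:
        feats += [b.mean(), b.std()]
    return np.array(feats)          # 10 features per window

def rerank(features, threshold=0.9):
    """Greedy re-ranking over a (samples x features) matrix: keep a feature
    column only if its correlation with every already-kept column stays
    below `threshold`, so the retained set is mutually decorrelated."""
    kept = [0]
    for j in range(1, features.shape[1]):
        corr = [abs(np.corrcoef(features[:, j], features[:, k])[0, 1])
                for k in kept]
        if max(corr) < threshold:
            kept.append(j)
    return kept
```

The decorrelated columns would then be handed to the boosting classifier, which performs the final, supervised round of feature selection.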

  11. Accuracy of GIPSY PPP from a denser network

    NASA Astrophysics Data System (ADS)

    Gokhan Hayal, Adem; Ugur Sanli, Dogan

    2015-04-01

Researchers need to know about the accuracy of GPS for the planning of their field surveys and hence to obtain reliable positions as well as deformation rates. Geophysical applications such as monitoring the development of a fault creep or of crustal motion for global sea level rise studies necessitate the use of continuous GPS, whereas applications such as determining co-seismic displacements where permanent GPS sites are sparsely scattered require the employment of episodic campaigns. Recently, real-time applications of GPS in relation to the early prediction of earthquakes and tsunamis are of concern. Studying the static positioning accuracy of GPS has been of interest to researchers for more than a decade now. Various software packages and modeling strategies have been tested so far, and relative positioning accuracy has been compared with PPP accuracy. For relative positioning, observing session duration and the network geometry of reference stations appear to be the dominant factors in GPS accuracy, whereas observing session duration seems to be the only factor influencing PPP accuracy. We believe that the latest developments concerning the accuracy of static GPS from well-established software will form a basis for the quality of the GPS field work mentioned above, especially for real-time applications, which are referred to more frequently nowadays. To assess GPS accuracy, conventionally some 10 to 30 regionally or globally scattered GPS stations are used. In this study, we enlarge the size of the GPS network up to 70 globally scattered IGS stations to observe the changes in our previous accuracy modeling, which employed only 13 stations. We use the latest version 6.3 of the GIPSY/OASIS II software and download the data from SOPAC archives. Noting the effect of the ionosphere on our previous accuracy modeling, here we selected GPS days on which the k-index values are lower than 4. This enabled us to extend the interval of observing session duration used for the

  12. RoPEUS: A New Robust Algorithm for Static Positioning in Ultrasonic Systems

    PubMed Central

    Prieto, José Carlos; Croux, Christophe; Jiménez, Antonio Ramón

    2009-01-01

    A well-known problem for precise positioning in real environments is the presence of outliers in the measurement sample. Its importance is even greater in ultrasound-based systems, since this technology needs a direct line of sight between emitters and receivers. Standard techniques for outlier detection in range-based systems do not usually employ robust algorithms, failing when multiple outliers are present. The direct application of standard robust regression algorithms fails in static positioning (where only the current measurement sample is considered) in real ultrasound-based systems, mainly due to the limited number of measurements and geometry effects. This paper presents a new robust algorithm, called RoPEUS, based on MM estimation, that follows a typical two-step strategy: 1) a high breakdown point algorithm to obtain a clean sample, and 2) a refinement algorithm to increase the accuracy of the solution. The main modifications proposed to the standard MM robust algorithm are a built-in check of partial solutions in the first step (rejecting bad geometries) and the off-line calculation of the scale of the measurements. The algorithm is tested with real samples obtained with the 3D-LOCUS ultrasound localization system in an ideal environment without obstacles. These measurements are corrupted with typical outlying patterns to numerically evaluate the algorithm performance with respect to the standard parity space algorithm. The algorithm proves to be robust under single or multiple outliers, providing similar accuracy figures in all cases. PMID:22408522
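
    The two-step strategy described above can be sketched in miniature. The following Python snippet is illustrative only, not the paper's RoPEUS implementation: the function names, the Huber weight choice, and the fixed scale value are all assumptions. It mimics the pattern of a high-breakdown first step (the median) yielding a clean starting estimate, which a reweighted second step then refines, with the measurement scale supplied off-line.

```python
# Hypothetical sketch of a two-step robust estimator: step 1 uses a
# high-breakdown estimator (the median) as a clean starting point;
# step 2 refines it with Huber-weighted iterations. The scale is
# supplied off-line, mirroring the off-line scale calculation above.

def huber_weight(r, k=1.345):
    """Huber weight: 1 for small residuals, k/|r| for large ones."""
    a = abs(r)
    return 1.0 if a <= k else k / a

def robust_location(samples, scale, iters=20):
    # Step 1: high-breakdown initial estimate (median).
    s = sorted(samples)
    n = len(s)
    est = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    # Step 2: refinement via iteratively reweighted least squares.
    for _ in range(iters):
        w = [huber_weight((x - est) / scale) for x in samples]
        est = sum(wi * xi for wi, xi in zip(w, samples)) / sum(w)
    return est

# Example: range measurements with two gross outliers.
ranges = [2.01, 1.98, 2.03, 2.00, 1.99, 5.7, 0.2]
print(round(robust_location(ranges, scale=0.05), 2))  # prints 2.0
```

    The outliers receive weights near zero in the second step, so the refined estimate stays close to the inlier cluster; a plain mean of the same sample would be pulled to about 2.27.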

  13. Efficient Computation of Info-Gap Robustness for Finite Element Models

    SciTech Connect

    Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.

    2012-07-05

    A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems, using the adjoint methodology are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.

  14. An accuracy measurement method for star trackers based on direct astronomic observation.

    PubMed

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-03-07

    The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such an accuracy remains a crucial but unsolved issue to date. The authenticity of the accuracy measurement method for a star tracker will eventually determine the satellite performance. A new and robust accuracy measurement method for a star tracker, based on direct astronomical observation, is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted taking account of the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to the in-orbit conditions, and it can satisfy the stringent requirements for high-accuracy star trackers.

  15. An accuracy measurement method for star trackers based on direct astronomic observation

    PubMed Central

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-01-01

    The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such an accuracy remains a crucial but unsolved issue to date. The authenticity of the accuracy measurement method for a star tracker will eventually determine the satellite performance. A new and robust accuracy measurement method for a star tracker, based on direct astronomical observation, is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted taking account of the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to the in-orbit conditions, and it can satisfy the stringent requirements for high-accuracy star trackers. PMID:26948412

  17. Environmental change makes robust ecological networks fragile

    PubMed Central

    Strona, Giovanni; Lafferty, Kevin D.

    2016-01-01

    Complex ecological networks appear robust to primary extinctions, possibly due to consumers' tendency to specialize on dependable (available and persistent) resources. However, modifications to the conditions under which the network has evolved might alter resource dependability. Here, we ask whether adaptation to historical conditions can increase community robustness, and whether such robustness can protect communities from collapse when conditions change. Using artificial life simulations, we first evolved digital consumer-resource networks that we subsequently subjected to rapid environmental change. We then investigated how empirical host–parasite networks would respond to historical, random and expected extinction sequences. In both cases, networks were far more robust to historical conditions than new ones, suggesting that new environmental challenges, as expected under global change, might collapse otherwise robust natural ecosystems. PMID:27511722

  18. Evaluating efficiency and robustness in cilia design

    NASA Astrophysics Data System (ADS)

    Guo, Hanliang; Kanso, Eva

    2016-03-01

    Motile cilia are used by many eukaryotic cells to transport flow. Cilia-driven flows are important to many physiological functions, yet a deep understanding of the interplay between the mechanical structure of cilia and their physiological functions in healthy and diseased conditions remains elusive. To develop such an understanding, one needs a quantitative framework to assess cilia performance and robustness when subject to perturbations in the cilia apparatus. Here we link cilia design (beating patterns) to function (flow transport) in the context of experimentally and theoretically derived cilia models. We particularly examine the optimality and robustness of cilia design. Optimality refers to efficiency of flow transport, while robustness is defined as low sensitivity to variations in the design parameters. We find that suboptimal designs can be more robust than optimal ones. That is, designing for the most efficient cilium does not guarantee robustness. These findings have significant implications for the understanding of cilia design in artificial and biological systems.

  20. Environmental change makes robust ecological networks fragile

    USGS Publications Warehouse

    Strona, Giovanni; Lafferty, Kevin D.

    2016-01-01

    Complex ecological networks appear robust to primary extinctions, possibly due to consumers’ tendency to specialize on dependable (available and persistent) resources. However, modifications to the conditions under which the network has evolved might alter resource dependability. Here, we ask whether adaptation to historical conditions can increase community robustness, and whether such robustness can protect communities from collapse when conditions change. Using artificial life simulations, we first evolved digital consumer-resource networks that we subsequently subjected to rapid environmental change. We then investigated how empirical host–parasite networks would respond to historical, random and expected extinction sequences. In both cases, networks were far more robust to historical conditions than new ones, suggesting that new environmental challenges, as expected under global change, might collapse otherwise robust natural ecosystems.

  1. Evaluating efficiency and robustness in cilia design.

    PubMed

    Guo, Hanliang; Kanso, Eva

    2016-03-01

    Motile cilia are used by many eukaryotic cells to transport flow. Cilia-driven flows are important to many physiological functions, yet a deep understanding of the interplay between the mechanical structure of cilia and their physiological functions in healthy and diseased conditions remains elusive. To develop such an understanding, one needs a quantitative framework to assess cilia performance and robustness when subject to perturbations in the cilia apparatus. Here we link cilia design (beating patterns) to function (flow transport) in the context of experimentally and theoretically derived cilia models. We particularly examine the optimality and robustness of cilia design. Optimality refers to efficiency of flow transport, while robustness is defined as low sensitivity to variations in the design parameters. We find that suboptimal designs can be more robust than optimal ones. That is, designing for the most efficient cilium does not guarantee robustness. These findings have significant implications for the understanding of cilia design in artificial and biological systems. PMID:27078459

  2. Robust whole-brain segmentation: application to traumatic brain injury.

    PubMed

    Ledig, Christian; Heckemann, Rolf A; Hammers, Alexander; Lopez, Juan Carlos; Newcombe, Virginia F J; Makropoulos, Antonios; Lötjönen, Jyrki; Menon, David K; Rueckert, Daniel

    2015-04-01

    We propose a framework for the robust and fully-automatic segmentation of magnetic resonance (MR) brain images called "Multi-Atlas Label Propagation with Expectation-Maximisation based refinement" (MALP-EM). The presented approach is based on a robust registration approach (MAPER), highly performant label fusion (joint label fusion) and intensity-based label refinement using EM. We further adapt this framework to be applicable for the segmentation of brain images with gross changes in anatomy. We propose to account for consistent registration errors by relaxing anatomical priors obtained by multi-atlas propagation and a weighting scheme to locally combine anatomical atlas priors and intensity-refined posterior probabilities. The method is evaluated on a benchmark dataset used in a recent MICCAI segmentation challenge. In this context we show that MALP-EM is competitive for the segmentation of MR brain scans of healthy adults when compared to state-of-the-art automatic labelling techniques. To demonstrate the versatility of the proposed approach, we employed MALP-EM to segment 125 MR brain images into 134 regions from subjects who had sustained traumatic brain injury (TBI). We employ a protocol to assess segmentation quality if no manual reference labels are available. Based on this protocol, three independent, blinded raters confirmed on 13 MR brain scans with pathology that MALP-EM is superior to established label fusion techniques. We visually confirm the robustness of our segmentation approach on the full cohort and investigate the potential of derived symmetry-based imaging biomarkers that correlate with and predict clinically relevant variables in TBI such as the Marshall Classification (MC) or Glasgow Outcome Score (GOS). Specifically, we show that we are able to stratify TBI patients with favourable outcomes from non-favourable outcomes with 64.7% accuracy using acute-phase MR images and 66.8% accuracy using follow-up MR images. 
Furthermore, we are able to

  3. Assessing the accuracy of Landsat Thematic Mapper classification using double sampling

    USGS Publications Warehouse

    Kalkhan, M.A.; Reich, R.M.; Stohlgren, T.J.

    1998-01-01

    Double sampling was used to provide a cost-efficient estimate of the accuracy of a Landsat Thematic Mapper (TM) classification map of a scene located in Rocky Mountain National Park, Colorado. In the first phase, 200 sample points were randomly selected to assess the agreement between Landsat TM data and aerial photography. The overall accuracy and Kappa statistic were 49.5% and 32.5%, respectively. In the second phase, 25 sample points identified in the first phase were selected using stratified random sampling and located in the field. This information was used to correct for misclassification errors associated with the first-phase samples. The overall accuracy and Kappa statistic increased to 59.6% and 45.6%, respectively.
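
    The two accuracy figures reported above, overall accuracy and the Kappa statistic, can be computed from any classification error matrix with a few lines of code. A minimal sketch follows; the matrix used in the example is invented for illustration and is not taken from the study.

```python
# Overall accuracy and Cohen's Kappa from a square error (confusion)
# matrix m, where m[i][j] counts reference class i mapped to class j.

def overall_accuracy(m):
    total = sum(sum(row) for row in m)
    correct = sum(m[i][i] for i in range(len(m)))
    return correct / total

def kappa(m):
    n = len(m)
    total = sum(sum(row) for row in m)
    po = overall_accuracy(m)          # observed agreement
    pe = sum(                         # agreement expected by chance
        (sum(m[i]) / total) * (sum(m[j][i] for j in range(n)) / total)
        for i in range(n)
    )
    return (po - pe) / (1 - pe)

m = [[35, 5], [10, 50]]               # hypothetical 2-class matrix
print(round(overall_accuracy(m), 2))  # 0.85
print(round(kappa(m), 3))             # 0.694
```

    Kappa discounts chance agreement, which is why it is consistently lower than the overall accuracy in the figures quoted above.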

  4. Accuracy in determining voice source parameters

    NASA Astrophysics Data System (ADS)

    Leonov, A. S.; Sorokin, V. N.

    2014-11-01

    The paper addresses the accuracy of an approximate solution to the inverse problem of retrieving the shape of a voice source from a speech signal for a known signal-to-noise ratio (SNR). It is shown that if the source is found as a function of time with the A.N. Tikhonov regularization method, the accuracy of the resulting approximation is an order of magnitude worse than the accuracy of the speech signal recording. In contrast, adequate parameterization of the source ensures approximate-solution accuracy comparable with the accuracy of the problem data. A corresponding algorithm is considered. On the basis of linear (in terms of data errors) estimates of the accuracy of the approximate parametric solution, the parametric models with the best accuracy can be chosen. This comparison has been carried out for the known voice source models, i.e., model [17] and the LF model [18]. The advantages of the latter are shown. Thus, for SNR = 40 dB, the relative accuracy of an approximate solution found with this algorithm is about 1% for the LF model and about 2% for model [17], as compared to an accuracy of 7-8% for the regularization method. The role of the accuracy estimates obtained in speaker identification problems is discussed.

  5. Planning for robust reserve networks using uncertainty analysis

    USGS Publications Warehouse

    Moilanen, A.; Runge, M.C.; Elith, J.; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.

    2006-01-01

    Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence–absence in sites, or on species-specific distributions of model-predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence–absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there is no uncertainty in the data or models. Having two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. Search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer-programming and stochastic global search.

  6. Effects of accuracy motivation and anchoring on metacomprehension judgment and accuracy.

    PubMed

    Zhao, Qin

    2012-01-01

    The current research investigates how accuracy motivation impacts anchoring and adjustment in metacomprehension judgment, and how accuracy motivation and anchoring affect metacomprehension accuracy. Participants were randomly assigned to one of six conditions produced by the between-subjects factorial design involving accuracy motivation (incentive or none) and peer performance anchor (95%, 55%, or none). Two studies showed that accuracy motivation did not impact anchoring bias, but the adjustment-from-anchor process occurred. The accuracy incentive increased the anchor-judgment gap for the 95% anchor but not for the 55% anchor, which induced less certainty about the direction of adjustment. The findings offer support to the integrative theory of anchoring. Additionally, the two studies revealed a "power struggle" between accuracy motivation and anchoring in influencing metacomprehension accuracy. Accuracy motivation can improve metacomprehension accuracy in spite of the anchoring effect, but if the anchoring effect is too strong, it can overpower the motivation effect. The implications of the findings are discussed.

  7. In silico predicted structural and functional robustness of piscine steroidogenesis.

    PubMed

    Hala, D; Huggett, D B

    2014-03-21

    Assessments of metabolic robustness or susceptibility are inherently dependent on quantitative descriptions of network structure and associated function. In this paper a stoichiometric model of piscine steroidogenesis was constructed and constrained with productions of selected steroid hormones. Structural and flux metrics of this in silico model were quantified by calculating extreme pathways and optimal flux distributions (using linear programming). Extreme pathway analysis showed progestin and corticosteroid synthesis reactions to be highly participant in extreme pathways. Furthermore, reaction participation in extreme pathways also fitted a power law distribution (degree exponent γ=2.3), which suggested that progestin and corticosteroid reactions act as 'hubs' capable of generating other functionally relevant pathways required to maintain steady-state functionality of the network. Analysis of cofactor usage (O2 and NADPH) showed progestin synthesis reactions to exhibit high robustness, whereas estrogen productions showed highest energetic demands with low associated robustness to maintain such demands. Linear programming calculated optimal flux distributions showed high heterogeneity of flux values with a near-random power law distribution (degree exponent γ≥2.7). Subsequently, network robustness was tested by assessing maintenance of metabolite flux-sum subject to targeted deletions of rank-ordered (low to high metric) extreme pathway participant and optimal flux reactions. Network robustness was susceptible to deletions of extreme pathway participant reactions, whereas minimal impact of high flux reaction deletion was observed. This analysis shows that the steroid network is susceptible to perturbation of structurally relevant (extreme pathway) reactions rather than those carrying high flux. PMID:24333207

  8. 40 CFR 92.127 - Emission measurement accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... procedure: (i) Span the full analyzer range using a top range calibration gas meeting the calibration gas... applicable requirements of §§ 92.118 through 92.122. (iii) Select a calibration gas (a span gas may be used... increments. This gas must be “named” to an accuracy of ±1.0 percent (±2.0 percent for CO2 span gas) of...

  9. 40 CFR 92.127 - Emission measurement accuracy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... procedure: (i) Span the full analyzer range using a top range calibration gas meeting the calibration gas... applicable requirements of §§ 92.118 through 92.122. (iii) Select a calibration gas (a span gas may be used... increments. This gas must be “named” to an accuracy of ±1.0 percent (±2.0 percent for CO2 span gas) of...

  10. 40 CFR 92.127 - Emission measurement accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... procedure: (i) Span the full analyzer range using a top range calibration gas meeting the calibration gas... applicable requirements of §§ 92.118 through 92.122. (iii) Select a calibration gas (a span gas may be used... increments. This gas must be “named” to an accuracy of ±1.0 percent (±2.0 percent for CO2 span gas) of...

  11. Towards Robust Discontinuous Galerkin Methods for General Relativistic Neutrino Radiation Transport

    NASA Astrophysics Data System (ADS)

    Endeve, E.; Hauck, C. D.; Xing, Y.; Mezzacappa, A.

    2015-10-01

    With an eye towards simulating neutrino transport in core-collapse supernovae, we have developed a conservative, robust, and high-order numerical method for solving the general relativistic phase space advection problem in stationary spacetimes. The method achieves high-order accuracy using Discontinuous Galerkin discretization and Runge-Kutta time integration. For robustness, care is taken to ensure that the physical bounds on the phase space distribution function are preserved; i.e., f ∈ [0,1]. We briefly describe the bound-preserving scheme, and present results from numerical experiments in spherical symmetry adopting the Schwarzschild metric, which demonstrate that the method preserves the bounds on the distribution function.
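
    The abstract does not spell out how the bounds f ∈ [0,1] are enforced, but bound-preserving DG schemes commonly use a linear scaling limiter in the style of Zhang and Shu: the polynomial in each cell is shrunk toward its cell average, which the scheme keeps inside the bounds, until every nodal value is admissible. The sketch below is an illustration of that general idea only; the function name and nodal representation are assumptions, not details from the paper.

```python
def limit_to_unit_interval(node_values):
    """Linear scaling limiter (illustrative sketch): shrink the DG
    polynomial's nodal values toward the cell average, assumed to lie
    in [0, 1], until every value satisfies 0 <= f <= 1."""
    avg = sum(node_values) / len(node_values)
    vmin, vmax = min(node_values), max(node_values)
    theta = 1.0
    if vmax > 1.0:
        theta = min(theta, (1.0 - avg) / (vmax - avg))
    if vmin < 0.0:
        theta = min(theta, (0.0 - avg) / (vmin - avg))
    # Only deviations from the mean are rescaled; the cell average is
    # unchanged, so the limited solution remains conservative.
    return [avg + theta * (v - avg) for v in node_values]

nodes = [-0.1, 0.5, 1.2]          # overshoots both bounds
print(limit_to_unit_interval(nodes))
```

    Because only the deviation from the mean is scaled, the limiter clips the overshoots without destroying the high-order accuracy of the underlying polynomial in smooth regions.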

  12. A robust search paradigm with Enhanced Vine Creeping Optimization

    NASA Astrophysics Data System (ADS)

    Young, C. N.; Le Brese, C.; Zou, J. J.; Leo, C. J.

    2013-02-01

    In order to overcome a worst-case scenario for a generalized evolutionary search, which is realized by assuming that conservation of information (COI) holds true, a robust search paradigm is explored, building on ideas from the Enhanced Vine Creeping Optimization (EVCO) algorithm. The proposed algorithm is a modular framework encompassing an archive, a global search and a local search module. The modular structure enables EVCO to serve not only as a stand-alone global optimization algorithm, but importantly as a framework that provides feedback metrics on the performance of a particular combination of search heuristics on different classes of problems. It is this feature of EVCO that provides the foundation of the proposed robust search paradigm. The new algorithm shows significantly better performance than its predecessor, VCO, and eight state-of-the-art evolutionary algorithms, placing first or equal first in 10 out of 14 benchmark tests, while naturally providing metric information to assist in tackling the algorithm selection problem.

  13. Building a robust vehicle detection and classification module

    NASA Astrophysics Data System (ADS)

    Grigoryev, Anton; Khanipov, Timur; Koptelov, Ivan; Bocharov, Dmitry; Postnikov, Vassily; Nikolaev, Dmitry

    2015-12-01

    The growing adoption of intelligent transportation systems (ITS) and autonomous driving requires robust real-time solutions for various event and object detection problems. Most real-world systems still cannot rely on computer vision algorithms alone and employ a wide range of costly additional hardware such as LIDARs. In this paper we explore engineering challenges encountered in building a highly robust visual vehicle detection and classification module that works under a broad range of environmental and road conditions. The resulting technology is competitive with traditional non-visual means of traffic monitoring. The main focus of the paper is on software and hardware architecture, algorithm selection and domain-specific heuristics that help the computer vision system avoid implausible answers.

  14. Robust vehicle detection for highway surveillance via rear-view monitoring

    NASA Astrophysics Data System (ADS)

    Yoneyama, Akio; Yeh, Chia-Hung; Kuo, C.-C. J.

    2003-11-01

    Vision-based highway monitoring systems play an important role in transportation management and services owing to their powerful ability to extract a variety of information. The detection accuracy of vision-based systems is, however, sensitive to environmental factors such as lighting, shadow and weather conditions, and it is still a challenging problem to maintain detection robustness at all times. In this research, we present a novel method to enhance detection and tracking accuracy at nighttime based on rear-view monitoring. In addition, a method is proposed to improve background detection and extraction, which usually serves as the first step in moving-object region detection. Finally, the effectiveness of the rear-view technique is analyzed. We compare the tracking accuracy between the front-view and the rear-view techniques, and show that the proposed system can achieve higher detection accuracy at nighttime.

  15. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    NASA Astrophysics Data System (ADS)

    McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.

    2015-04-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. The robustness of 16 skull-base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining some metrics used to define protocols aiding the plan assessment. Additionally, an example of how to clinically use the defined robustness database is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve the plan robustness was analysed. Using the ebDD, it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient who may have benefited from a more individualized treatment. A new beam arrangement proved preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For such cases, the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties.

  16. Random sampler M-estimator algorithm with sequential probability ratio test for robust function approximation via feed-forward neural networks.

    PubMed

    El-Melegy, Moumen T

    2013-07-01

    This paper addresses the problem of fitting a functional model to data corrupted with outliers using a multilayered feed-forward neural network. Although it is of high importance in practical applications, this problem has not received careful attention from the neural network research community. One recent approach to solving this problem is to use a neural network training algorithm based on the random sample consensus (RANSAC) framework. This paper proposes a new algorithm that offers two enhancements over the original RANSAC algorithm. The first one improves the algorithm accuracy and robustness by employing an M-estimator cost function to decide on the best estimated model from the randomly selected samples. The other one improves the time performance of the algorithm by utilizing a statistical pretest based on Wald's sequential probability ratio test. The proposed algorithm is successfully evaluated on synthetic and real data, contaminated with varying degrees of outliers, and compared with existing neural network training algorithms.
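The first enhancement described above, scoring each randomly sampled model with an M-estimator cost rather than a plain inlier count, can be sketched for a simple line-fitting case. This is a minimal illustration, not the paper's neural-network training algorithm; the Huber loss and the two-point line model are assumptions chosen for brevity.

```python
import random

def huber(r, delta=1.0):
    # Huber M-estimator cost: quadratic for small residuals, linear for
    # large ones, so gross outliers contribute only linearly.
    a = abs(r)
    return 0.5 * a * a if a <= delta else delta * (a - 0.5 * delta)

def ransac_m_estimator(points, iters=200, seed=0):
    """RANSAC-style line fit: score each random two-point model by the
    summed Huber cost over all points and keep the lowest-cost model."""
    rng = random.Random(seed)
    best_model, best_cost = None, float("inf")
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical pair; cannot define a slope
        slope = (y2 - y1) / (x2 - x1)
        intercept = y1 - slope * x1
        cost = sum(huber(y - (slope * x + intercept)) for x, y in points)
        if cost < best_cost:
            best_model, best_cost = (slope, intercept), cost
    return best_model

# Points on y = 2x + 1, contaminated with two gross outliers.
pts = [(x, 2.0 * x + 1.0) for x in range(10)] + [(3, 40.0), (7, -30.0)]
slope, intercept = ransac_m_estimator(pts)
```

Any sample drawn from the clean points recovers the true line exactly, and the bounded Huber cost keeps the two outliers from dragging the selection toward a contaminated model.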

  17. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.; Baker, D.

    1994-01-01

    This document presents a compilation of the attitude accuracy attained by a number of satellites that have been supported by the Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC). It starts with a general description of the factors that influence spacecraft attitude accuracy. After brief descriptions of the missions supported, it presents the attitude accuracy results for currently active and older missions, including both three-axis stabilized and spin-stabilized spacecraft. The attitude accuracy results are grouped by the sensor pair used to determine the attitudes. A supplementary section is also included, containing the results of theoretical computations of the effects of variation of sensor accuracy on overall attitude accuracy.

  18. Robust Optimization Model and Algorithm for Railway Freight Center Location Problem in Uncertain Environment

    PubMed Central

    He, Shi-wei; Song, Rui; Sun, Yang; Li, Hao-dong

    2014-01-01

Railway freight center location is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Because the expected-value model ignores the negative influence of disadvantageous scenarios, a robust optimization model was proposed. The robust optimization model takes the expected cost and the deviation value across scenarios as its objective. A cloud adaptive clonal selection algorithm (C-ACSA) was presented; it combines an adaptive clonal selection algorithm with the Cloud Model, which improves the convergence rate. The encoding design and workflow of the algorithm were described. Results of an example demonstrate that the model and algorithm are effective. Compared with the expected-value cases, the number of disadvantageous scenarios in the robust model falls from 163 to 21, which shows that the result of the robust model is more reliable. PMID:25435867
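The two-term objective the abstract describes, expected cost plus a deviation penalty over scenarios, can be sketched as follows. The one-sided deviation measure, the weighting parameter `lam`, and the scenario costs are illustrative assumptions; the paper's exact formulation may differ.

```python
def robust_objective(scenario_costs, probs, lam=1.0):
    """Expected cost plus a penalty on each scenario's upward deviation
    from the expectation. lam trades robustness against average cost
    (an assumed form of the deviation term)."""
    expected = sum(p * c for p, c in zip(probs, scenario_costs))
    deviation = sum(p * max(0.0, c - expected)
                    for p, c in zip(probs, scenario_costs))
    return expected + lam * deviation

# Two hypothetical candidate locations under three demand scenarios.
probs = [0.5, 0.3, 0.2]
a = robust_objective([100.0, 110.0, 105.0], probs)  # stable across scenarios
b = robust_objective([85.0, 95.0, 150.0], probs)    # cheaper on average, bad worst case
```

Candidate `b` has the lower expected cost (101 vs 104), yet the deviation penalty makes `a` the robust choice, which is exactly the behavior an expected-value model misses.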

  19. Robust Multiobjective Controllability of Complex Neuronal Networks.

    PubMed

    Tang, Yang; Gao, Huijun; Du, Wei; Lu, Jianquan; Vasilakos, Athanasios V; Kurths, Jurgen

    2016-01-01

This paper addresses robust multiobjective identification of driver nodes in the neuronal network of a cat's brain, in which uncertainties in the determination of driver nodes and control gains are considered. A framework for robust multiobjective controllability is proposed by introducing interval uncertainties and optimization algorithms. Based on appropriate definitions of robust multiobjective controllability, a robust nondominated sorting adaptive differential evolution (NSJaDE) is presented by combining the nondominated sorting mechanism with adaptive differential evolution (JaDE). Simulation results illustrate the satisfactory performance of NSJaDE for robust multiobjective controllability, in comparison with six statistical methods and two multiobjective evolutionary algorithms (MOEAs): nondominated sorting genetic algorithm II (NSGA-II) and nondominated sorting composite differential evolution. It is revealed that the existence of uncertainties in choosing driver nodes and designing control gains heavily affects the controllability of neuronal networks. We also find that driver nodes play a more decisive role than control gains in robust controllability. The developed NSJaDE and the results obtained will shed light on the understanding of robustness in controlling realistic complex networks such as transportation networks, power grids, and biological networks.
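The nondominated sorting step shared by NSGA-II-style algorithms such as the NSJaDE above can be sketched with a Pareto-dominance check. The objective pairs (number of driver nodes, control energy) are purely illustrative assumptions, not data from the paper.

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_front(solutions):
    """First front of nondominated sorting: solutions not dominated
    by any other candidate."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical (driver-node count, control energy) pairs.
sols = [(3, 5.0), (4, 2.0), (5, 1.5), (4, 6.0), (6, 1.4)]
front = nondominated_front(sols)
```

Here `(4, 6.0)` is dominated by `(3, 5.0)` and drops out; the surviving front exposes the trade-off between fewer driver nodes and lower control effort.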

  20. Accuracy in prescriptions compounded by pharmacy students.

    PubMed

    Shrewsbury, R P; Deloatch, K H

    1998-01-01

Most compounded prescriptions are not analyzed to determine the accuracy of the instruments and procedures employed. The assumption is that the compounded prescription will be within +/- 5% of the labeled claim. Two classes of School of Pharmacy students who received repeated instruction and supervision in proper compounding techniques and procedures were assessed to determine their accuracy in compounding a diphenhydramine hydrochloride prescription. After two attempts, only 62% to 68% of the students could compound the prescription to within +/- 5% of the labeled claim, but 84% to 96% could attain an accuracy of +/- 10%. The results suggest that an accuracy of +/- 10% of the labeled claim is the least variation a pharmacist can expect when extemporaneously compounding prescriptions.
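The +/- 5% and +/- 10% criteria above amount to a percent-error check against the labeled claim. The function and the milligram amounts below are illustrative assumptions, not assay data from the study.

```python
def within_tolerance(measured_mg, labeled_mg, tol_pct=5.0):
    """True if the assayed amount is within tol_pct percent
    of the labeled claim."""
    return abs(measured_mg - labeled_mg) / labeled_mg * 100.0 <= tol_pct

# Hypothetical assays against a 25 mg labeled claim.
ok_5 = within_tolerance(24.2, 25.0)          # 3.2% low: passes +/- 5%
ok_10 = within_tolerance(27.2, 25.0, 10.0)   # 8.8% high: passes +/- 10%
fail_5 = within_tolerance(27.2, 25.0)        # 8.8% high: fails +/- 5%
```

The second assay is the study's typical case: acceptable under the realistic +/- 10% criterion while failing the assumed +/- 5% standard.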