Sample records for error detection invariant

  1. A Stepwise Test Characteristic Curve Method to Detect Item Parameter Drift

    ERIC Educational Resources Information Center

    Guo, Rui; Zheng, Yi; Chang, Hua-Hua

    2015-01-01

    An important assumption of item response theory is item parameter invariance. Sometimes, however, item parameters are not invariant across different test administrations due to factors other than sampling error; this phenomenon is termed item parameter drift. Several methods have been developed to detect drifted items. However, most of the…
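As an aside on the mechanics, drift detection of this kind compares item or test characteristic curves across administrations. A minimal sketch (not the paper's stepwise procedure; the 2PL parameterization, the scaling constant 1.7, and the illustrative item parameters are assumptions):

```python
import math

def p_2pl(theta, a, b):
    """2PL item characteristic curve: P(correct | ability theta)."""
    return 1.0 / (1.0 + math.exp(-1.7 * a * (theta - b)))

def tcc(theta, items):
    """Test characteristic curve: expected raw score at ability theta."""
    return sum(p_2pl(theta, a, b) for a, b in items)

def max_tcc_gap(items_t1, items_t2, grid=None):
    """Largest absolute TCC difference between two calibrations over a theta grid."""
    grid = grid or [i / 10 for i in range(-40, 41)]
    return max(abs(tcc(t, items_t1) - tcc(t, items_t2)) for t in grid)

base = [(1.2, 0.0), (0.8, -1.0), (1.0, 1.5)]    # (discrimination a, difficulty b)
drift = [(1.2, 0.0), (0.8, -1.0), (1.0, 0.5)]   # third item's difficulty drifted
print(max_tcc_gap(base, base))        # 0.0
print(max_tcc_gap(base, drift) > 0.1) # True: drift opens a visible TCC gap
```

A stepwise method would then iteratively flag and remove the item contributing most to the gap and re-test; that loop is omitted here.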

  2. Invariance of the bit error rate in the ancilla-assisted homodyne detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshida, Yuhsuke; Takeoka, Masahiro; Sasaki, Masahide

    2010-11-15

    We investigate the minimum achievable bit error rate of the discrimination of binary coherent states with the help of arbitrary ancillary states. We adopt homodyne measurement with a common phase of the local oscillator and classical feedforward control. After one ancillary state is measured, its outcome is used in the preparation of the next ancillary state and the tuning of the next mixing with the signal. It is shown that the minimum bit error rate of the system is invariant under the following operations: feedforward control, deformations, and introduction of any ancillary state. We also discuss the possible generalization of the homodyne detection scheme.
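For context, the two benchmark error rates in this literature can be computed directly. A sketch under the usual conventions (binary phase-shift keyed coherent states |±α⟩ with real α; these closed forms are standard results, not taken from this abstract):

```python
import math

def homodyne_ber(alpha):
    """Ideal homodyne bit error rate for |+alpha> vs |-alpha>.
    Standard result in this convention: (1/2) * erfc(sqrt(2) * alpha)."""
    return 0.5 * math.erfc(math.sqrt(2.0) * alpha)

def helstrom_ber(alpha):
    """Quantum-optimal (Helstrom) bound, using overlap |<alpha|-alpha>|^2 = exp(-4 alpha^2)."""
    return 0.5 * (1.0 - math.sqrt(1.0 - math.exp(-4.0 * alpha * alpha)))

for a in (0.3, 0.5, 1.0):
    # The Helstrom bound sits strictly below the homodyne limit at every amplitude.
    print(a, homodyne_ber(a), helstrom_ber(a))
```

The paper's invariance result says, in effect, that ancilla assistance and feedforward cannot push the homodyne scheme below its own limit toward the Helstrom bound.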

  3. A representation for error detection and recovery in robot task plans

    NASA Technical Reports Server (NTRS)

    Lyons, D. M.; Vijaykumar, R.; Venkataraman, S. T.

    1990-01-01

    A general definition is given of the problem of error detection and recovery in robot assembly systems, and a general representation is developed for dealing with the problem. This invariant representation involves a monitoring process which is concurrent, with one monitor per task plan. A plan hierarchy is discussed, showing how diagnosis and recovery can be handled using the representation.

  4. Measurement of the length of pedestrian crossings and detection of traffic lights from image data

    NASA Astrophysics Data System (ADS)

    Shioyama, Tadayoshi; Wu, Haiyuan; Nakamura, Naoki; Kitawaki, Suguru

    2002-09-01

    This paper proposes a method for measurement of the length of a pedestrian crossing and for the detection of traffic lights from image data observed with a single camera. The length of a crossing is measured from image data of white lines painted on the road at a crossing by using projective geometry. Furthermore, the state of the traffic lights, green (go signal) or red (stop signal), is detected by extracting candidates for the traffic light region with colour similarity and selecting a true traffic light from them using affine moment invariants. From the experimental results, the length of a crossing is measured with an accuracy such that the maximum relative error of measured length is less than 5% and the rms error is 0.38 m. A traffic light is efficiently detected by selecting a true traffic light region with an affine moment invariant.
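The affine moment invariants used for traffic-light selection are built from central moments. A minimal sketch of the first such invariant (the Flusser–Suk I1; the example point set is an assumption, and for discrete point sets with unit weights the invariance shown holds for rotations and area-preserving affine maps):

```python
import math

def central_moment(points, p, q):
    """Central moment mu_pq of a 2D point set (unit weight per point)."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return sum((x - cx) ** p * (y - cy) ** q for x, y in points)

def affine_invariant_I1(points):
    """First affine moment invariant I1 = (mu20*mu02 - mu11^2) / mu00^4.
    With pixel-area region moments this is invariant to general affine maps;
    with mu00 = point count, as here, it is unchanged by rotations and
    unimodular (area-preserving) affine transforms."""
    mu00 = float(len(points))
    mu20 = central_moment(points, 2, 0)
    mu02 = central_moment(points, 0, 2)
    mu11 = central_moment(points, 1, 1)
    return (mu20 * mu02 - mu11 ** 2) / mu00 ** 4

region = [(x, y) for x in range(5) for y in range(3)]     # toy 5x3 blob
th = math.radians(30.0)
rotated = [(x * math.cos(th) - y * math.sin(th),
            x * math.sin(th) + y * math.cos(th)) for x, y in region]
sheared = [(x + 0.5 * y, y) for x, y in region]           # unimodular shear
print(affine_invariant_I1(region))   # same value for rotated and sheared copies
```

Candidate regions whose invariant matches a reference traffic-light template (within tolerance) would then be kept, which is the selection step the abstract describes.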

  5. Rotation and scale invariant shape context registration for remote sensing images with background variations

    NASA Astrophysics Data System (ADS)

    Jiang, Jie; Zhang, Shumei; Cao, Shixiang

    2015-01-01

    Multitemporal remote sensing images generally suffer from background variations, which significantly disrupt traditional region features and descriptors, especially between pre- and post-disaster scenes, making registration by local features unreliable. Because shapes hold relatively stable information, a rotation and scale invariant shape context based on multiscale edge features is proposed. A multiscale morphological operator is adapted to detect the edges of shapes, and an equivalent difference-of-Gaussian scale space is built to detect local scale-invariant feature points along the detected edges. Then, a rotation-invariant shape context with improved distance discrimination serves as the feature descriptor. For the distance shape context, a self-adaptive threshold (SAT) distance division coordinate system is proposed, which improves the discriminative power of the descriptor at mid-to-long pixel distances from the central point while maintaining it at shorter ones. To achieve rotation invariance, the magnitude of the one-dimensional Fourier transform is applied to calculate the angle shape context. Finally, the residual error is evaluated after obtaining the thin-plate spline transformation between the reference and sensed images. Experimental results demonstrate the robustness, efficiency, and accuracy of this automatic algorithm.

  6. Extended observability of linear time-invariant systems under recurrent loss of output data

    NASA Technical Reports Server (NTRS)

    Luck, Rogelio; Ray, Asok; Halevi, Yoram

    1989-01-01

    Recurrent loss of sensor data in integrated control systems of an advanced aircraft may occur under different operating conditions that include detected frame errors and queue saturation in computer networks, and bad data suppression in signal processing. This paper presents an extension of the concept of observability based on a set of randomly selected nonconsecutive outputs in finite-dimensional, linear, time-invariant systems. Conditions for testing extended observability have been established.
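The underlying check can be phrased as a rank condition: for x(k+1) = A x(k), y(k) = C x(k), the state is recoverable from a set of (possibly nonconsecutive) output times if the stacked rows C·A^k for those times have full rank. A minimal sketch of that test (the paper's specific conditions are not reproduced; the example system is an assumption):

```python
def mat_mul(A, B):
    """Dense matrix product of nested-list matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(A, p):
    """A raised to a nonnegative integer power."""
    n = len(A)
    R = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(p):
        R = mat_mul(R, A)
    return R

def rank(M, tol=1e-9):
    """Numerical rank via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        piv = max(range(r, rows), key=lambda i: abs(M[i][c]), default=None)
        if piv is None or abs(M[piv][c]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r:
                f = M[i][c] / M[r][c]
                M[i] = [M[i][j] - f * M[r][j] for j in range(cols)]
        r += 1
        if r == rows:
            break
    return r

def observable_from_samples(A, C, times):
    """Stack C * A^k for the surviving sample times k; full rank => observable."""
    O = []
    for k in times:
        O.extend(mat_mul(C, mat_pow(A, k)))
    return rank(O) == len(A)

A = [[0.0, 1.0], [-1.0, 0.0]]   # oscillator-like dynamics
C = [[1.0, 0.0]]                # position-only sensor
print(observable_from_samples(A, C, [0, 1]))  # True
print(observable_from_samples(A, C, [0, 2]))  # False: C*A^2 = -C, redundant
```

The example shows why *which* samples survive matters, not just how many: times {0, 1} suffice, while {0, 2} do not.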

  7. Analysis of the load selection on the error of source characteristics identification for an engine exhaust system

    NASA Astrophysics Data System (ADS)

    Zheng, Sifa; Liu, Haitao; Dan, Jiabi; Lian, Xiaomin

    2015-05-01

    The linear time-invariant assumption for the determination of acoustic source characteristics (the source strength and the source impedance in the frequency domain) has been proved reasonable in the design of exhaust systems. Different methods have been proposed for its identification, and the multi-load method is widely used for its convenience, varying the load number and impedance. Theoretical error analysis has rarely been addressed, although previous results have shown that an overdetermined set of open pipes can reduce the identification error. This paper contributes a theoretical error analysis for load selection. The relationships between the error in the identification of source characteristics and the load selection were analysed. A general linear time-invariant model was built based on the four-load method. To analyse the error of the source impedance, an error estimation function was proposed. The dispersion of the source pressure, obtained by inverse calculation, served as an indicator of the accuracy of the results. It was found that for a given load length, the load resistance at frequency points of odd multiples of the quarter wavelength produces peaks and the maximum error in source impedance identification. Therefore, load impedances in the frequency range near odd multiples of the quarter wavelength should not be used for source impedance identification. If the selected loads have resistance values of similar magnitude (i.e., the same order of magnitude), the identification error of the source impedance can be effectively reduced.

  8. Factor structure and longitudinal measurement invariance of the demand control support model: an evidence from the Swedish Longitudinal Occupational Survey of Health (SLOSH).

    PubMed

    Chungkham, Holendro Singh; Ingre, Michael; Karasek, Robert; Westerlund, Hugo; Theorell, Töres

    2013-01-01

    To examine the factor structure and to evaluate the longitudinal measurement invariance of the demand-control-support questionnaire (DCSQ), using the Swedish Longitudinal Occupational Survey of Health (SLOSH). Confirmatory factor analysis (CFA) and multi-group confirmatory factor analysis (MGCFA) models within the framework of structural equation modeling (SEM) were used to examine the factor structure and invariance across time. Four factors (psychological demand, skill discretion, decision authority and social support) were confirmed by CFA at baseline, with the best fit obtained by removing the skill-discretion item "repetitive work". A measurement error correlation (0.42) between "work fast" and "work intensively" for psychological demands was also detected. Acceptable composite reliability measures were obtained except for skill discretion (0.68). The invariance of the same factor structure was established, but caution in comparing mean levels of factors over time is warranted, as a lack of intercept invariance was evident; however, partial intercept invariance was established for "work intensively". Our findings indicate that skill discretion and decision authority represent two distinct constructs in the retained model. However, removing the item "repetitive work" along with either "work fast" or "work intensively" would improve model fit. Care should also be taken when making comparisons in the constructs across time. Further research should investigate invariance across occupations or socio-economic classes.

  9. Factor Structure and Longitudinal Measurement Invariance of the Demand Control Support Model: An Evidence from the Swedish Longitudinal Occupational Survey of Health (SLOSH)

    PubMed Central

    Chungkham, Holendro Singh; Ingre, Michael; Karasek, Robert; Westerlund, Hugo; Theorell, Töres

    2013-01-01

    Objectives: To examine the factor structure and to evaluate the longitudinal measurement invariance of the demand-control-support questionnaire (DCSQ), using the Swedish Longitudinal Occupational Survey of Health (SLOSH). Methods: Confirmatory factor analysis (CFA) and multi-group confirmatory factor analysis (MGCFA) models within the framework of structural equation modeling (SEM) were used to examine the factor structure and invariance across time. Results: Four factors (psychological demand, skill discretion, decision authority and social support) were confirmed by CFA at baseline, with the best fit obtained by removing the skill-discretion item "repetitive work". A measurement error correlation (0.42) between "work fast" and "work intensively" for psychological demands was also detected. Acceptable composite reliability measures were obtained except for skill discretion (0.68). The invariance of the same factor structure was established, but caution in comparing mean levels of factors over time is warranted, as a lack of intercept invariance was evident; however, partial intercept invariance was established for "work intensively". Conclusion: Our findings indicate that skill discretion and decision authority represent two distinct constructs in the retained model. However, removing the item "repetitive work" along with either "work fast" or "work intensively" would improve model fit. Care should also be taken when making comparisons in the constructs across time. Further research should investigate invariance across occupations or socio-economic classes. PMID: 23950957

  10. A Weighted Closed-Form Solution for RGB-D Data Registration

    NASA Astrophysics Data System (ADS)

    Vestena, K. M.; Dos Santos, D. R.; Oliveira, E. M., Jr.; Pavan, N. L.; Khoshelham, K.

    2016-06-01

    Existing 3D indoor mapping methods for RGB-D data are predominantly point-based or feature-based. In most cases the iterative closest point (ICP) algorithm and its variants are used for pairwise registration. Considering that the ICP algorithm requires a relatively accurate initial transformation and high overlap, a weighted closed-form solution for RGB-D data registration is proposed. In this solution, the 3D points are weighted and normalized based on their theoretical random errors, and dual-number quaternions are used to represent the 3D rigid-body motion. Dual-number quaternions provide a closed-form solution by minimizing a cost function. The most important advantage of the closed-form solution is that it provides the optimal transformation in one step: it does not need a good initial estimate, and it substantially decreases the demand for computing resources in contrast to iterative methods. First, the method exploits RGB information. The scale-invariant feature transform (SIFT) is employed for detecting, describing, and matching features; it detects and describes local features that are invariant to scaling and rotation. To detect and filter outliers, the random sample consensus (RANSAC) algorithm is used jointly with a statistical dispersion measure, the interquartile range (IQR). Next, a new RGB-D loop-closure solution is implemented based on the volumetric information between pairs of point clouds and the dispersion of the random errors; loop closure consists of recognizing when the sensor revisits a region. Finally, a globally consistent map is created by minimizing the registration errors via graph-based optimization. The effectiveness of the proposed method is demonstrated with a Kinect dataset. The experimental results show that the proposed method can properly map an indoor environment, with an absolute accuracy of around 1.5% of the travelled trajectory.
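The IQR screen mentioned alongside RANSAC is simple to illustrate on its own. A minimal sketch of the IQR step only (the fence factor k=1.5 is the conventional Tukey choice, not taken from the paper, and the residual values are invented):

```python
import statistics

def iqr_filter(values, k=1.5):
    """Keep values inside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences)."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

# Hypothetical per-match reprojection residuals (pixels); one gross mismatch.
residuals = [0.8, 1.1, 0.9, 1.0, 1.2, 0.95, 9.7, 1.05]
print(iqr_filter(residuals))   # the 9.7 px mismatch is dropped
```

In the pipeline described by the abstract, RANSAC would remove geometrically inconsistent matches first, and a dispersion screen like this would prune residual stragglers.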

  11. The performance of projective standardization for digital subtraction radiography.

    PubMed

    Mol, André; Dunn, Stanley M

    2003-09-01

    We sought to test the performance and robustness of projective standardization in preserving invariant properties of subtraction images in the presence of irreversible projection errors. Study design: Twenty bone chips (1-10 mg each) were placed on dentate dry mandibles. Follow-up images were obtained without the bone chips, and irreversible projection errors of up to 6 degrees were introduced. Digitized image intensities were normalized, and follow-up images were geometrically reconstructed by 2 operators using anatomical and fiduciary landmarks. Subtraction images were analyzed by 3 observers. Results: Regression analysis revealed a linear relationship between radiographic estimates of mineral loss and actual mineral loss (R(2) = 0.99; P < .05). The effect of projection error was not significant (general linear model [GLM]: P > .05). There was no difference between the radiographic estimates from images standardized with anatomical landmarks and those standardized with fiduciary landmarks (Wilcoxon signed rank test: P > .05). Operator variability was low for image analysis alone (R(2) = 0.99; P < .05), as well as for the entire procedure (R(2) = 0.98; P < .05). The predicted detection limit was smaller than 1 mg. Conclusions: Subtraction images registered by projective standardization yield estimates of osseous change that are invariant to irreversible projection errors of up to 6 degrees. Within these limits, operator precision is high and anatomical landmarks can be used to establish correspondence.

  12. Three invariant Hi-C interaction patterns: Applications to genome assembly.

    PubMed

    Oddes, Sivan; Zelig, Aviv; Kaplan, Noam

    2018-06-01

    Assembly of reference-quality genomes from next-generation sequencing data is a key challenge in genomics. Recently, we and others have shown that Hi-C data can be used to address several outstanding challenges in the field of genome assembly. This principle has since been developed in academia and industry, and has been used in the assembly of several major genomes. In this paper, we explore the central principles underlying Hi-C-based assembly approaches, by quantitatively defining and characterizing three invariant Hi-C interaction patterns on which these approaches can build: Intrachromosomal interaction enrichment, distance-dependent interaction decay and local interaction smoothness. Specifically, we evaluate to what degree each invariant pattern holds on a single locus level in different species, cell types and Hi-C map resolutions. We find that these patterns are generally consistent across species and cell types but are affected by sequencing depth, and that matrix balancing improves consistency of loci with all three invariant patterns. Finally, we overview current Hi-C-based assembly approaches in light of these invariant patterns and demonstrate how local interaction smoothness can be used to easily detect scaffolding errors in extremely sparse Hi-C maps. We suggest that simultaneously considering all three invariant patterns may lead to better Hi-C-based genome assembly methods. Copyright © 2018 Elsevier Inc. All rights reserved.
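The local-smoothness idea for catching scaffolding errors can be caricatured in a few lines: along a correctly assembled scaffold, near-diagonal Hi-C contact counts change gradually, while a misjoin produces an abrupt drop. A toy stand-in for the paper's smoothness statistic (the score, threshold, and counts are all illustrative assumptions):

```python
def breakpoint_scores(diag_counts):
    """Score each junction between adjacent bins by the relative drop in
    near-diagonal contact counts; smooth maps change gradually, misjoins
    between unlinked sequence drop sharply toward zero."""
    scores = []
    for i in range(len(diag_counts) - 1):
        a, b = diag_counts[i], diag_counts[i + 1]
        scores.append(abs(a - b) / max(a, b, 1))
    return scores

def flag_misjoins(diag_counts, thresh=0.8):
    """Return indices of bins whose junction score exceeds the threshold."""
    s = breakpoint_scores(diag_counts)
    return [i + 1 for i, v in enumerate(s) if v > thresh]

# Hypothetical near-diagonal counts along one scaffold; near-zero contact
# across a junction is the signature of a misjoin.
counts = [95, 102, 99, 104, 3, 101, 98]
print(flag_misjoins(counts))  # [4, 5]
```

The paper's point is that this signal survives even in extremely sparse maps, because it relies only on the invariant smoothness pattern rather than on deep coverage.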

  13. Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability

    NASA Astrophysics Data System (ADS)

    Kar, Soummya; Moura, José M. F.

    2011-04-01

    The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering in networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: (1) the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and (2) the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semidefinite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying the associated switched (random) Riccati equation as a random dynamical system, the switching being dictated by a non-stationary Markov chain on the network graph.

  14. Using the Kernel Method of Test Equating for Estimating the Standard Errors of Population Invariance Measures. Research Report. ETS RR-06-20

    ERIC Educational Resources Information Center

    Moses, Tim

    2006-01-01

    Population invariance is an important requirement of test equating. An equating function is said to be population invariant when the choice of (sub)population used to compute the equating function does not matter. In recent studies, the extent to which equating functions are population invariant is typically addressed in terms of practical…

  15. Seeing the Errors You Feel Enhances Locomotor Performance but Not Learning.

    PubMed

    Roemmich, Ryan T; Long, Andrew W; Bastian, Amy J

    2016-10-24

    In human motor learning, it is thought that the more information we have about our errors, the faster we learn. Here, we show that additional error information can lead to improved motor performance without any concomitant improvement in learning. We studied split-belt treadmill walking that drives people to learn a new gait pattern using sensory prediction errors detected by proprioceptive feedback. When we also provided visual error feedback, participants acquired the new walking pattern far more rapidly and showed accelerated restoration of the normal walking pattern during washout. However, when the visual error feedback was removed during either learning or washout, errors reappeared with performance immediately returning to the level expected based on proprioceptive learning alone. These findings support a model with two mechanisms: a dual-rate adaptation process that learns invariantly from sensory prediction error detected by proprioception and a visual-feedback-dependent process that monitors learning and corrects residual errors but shows no learning itself. We show that our voluntary correction model accurately predicted behavior in multiple situations where visual feedback was used to change acquisition of new walking patterns while the underlying learning was unaffected. The computational and behavioral framework proposed here suggests that parallel learning and error correction systems allow us to rapidly satisfy task demands without necessarily committing to learning, as the relative permanence of learning may be inappropriate or inefficient when facing environments that are liable to change. Copyright © 2016 Elsevier Ltd. All rights reserved.
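The two-mechanism account in this abstract maps cleanly onto a small simulation: a dual-rate learner driven only by proprioceptive error, plus a visual correction that cancels residual error in performance without feeding the learner. A sketch (the retention/learning-rate constants are conventional dual-rate values from the adaptation literature, not taken from this paper):

```python
def simulate(trials, perturb, visual=False,
             a_f=0.59, b_f=0.21, a_s=0.992, b_s=0.02):
    """Two-state (fast/slow) adaptation driven by proprioceptive error only.
    With visual=True, a voluntary correction zeroes the *performance* error
    on every trial, but the internal states still learn from the same
    sensory prediction error."""
    xf = xs = 0.0
    performance = []
    for _ in range(trials):
        x = xf + xs                       # adapted motor output
        err = perturb - x                 # sensory prediction error
        performance.append(0.0 if visual else err)
        xf = a_f * xf + b_f * err         # fast state: learns and forgets quickly
        xs = a_s * xs + b_s * err         # slow state: learns slowly, retains
    return performance, xf + xs

no_vis, state1 = simulate(100, 1.0, visual=False)
vis, state2 = simulate(100, 1.0, visual=True)
print(state1 == state2)        # internal learning is identical in both conditions
print(no_vis[0], vis[0])       # but trial-by-trial performance differs
```

Removing the visual feedback mid-experiment in this model makes errors reappear at exactly the level the proprioceptive learner has reached, which is the behavioral signature the study reports.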

  16. New evidence of factor structure and measurement invariance of the SDQ across five European nations.

    PubMed

    Ortuño-Sierra, Javier; Fonseca-Pedrero, Eduardo; Aritio-Solana, Rebeca; Velasco, Alvaro Moreno; de Luis, Edurne Chocarro; Schumann, Gunter; Cattrell, Anna; Flor, Herta; Nees, Frauke; Banaschewski, Tobias; Bokde, Arun; Whelan, Rob; Buechel, Christian; Bromberg, Uli; Conrod, Patricia; Frouin, Vincent; Papadopoulos, Dimitri; Gallinat, Juergen; Garavan, Hugh; Heinz, Andreas; Walter, Henrik; Struve, Maren; Gowland, Penny; Paus, Tomáš; Poustka, Luise; Martinot, Jean-Luc; Paillère-Martinot, Marie-Laure; Vetter, Nora C; Smolka, Michael N; Lawrence, Claire

    2015-12-01

    The main purpose of the present study was to analyse the internal structure and to test the measurement invariance of the Strengths and Difficulties Questionnaire (SDQ), self-reported version, in five European countries. The sample consisted of 3012 adolescents aged between 12 and 17 years (M = 14.20; SD = 0.83). The five-factor model (with correlated errors added), and the five-factor model (with correlated errors added) in which the reverse-worded items were allowed to cross-load on the Prosocial subscale, displayed adequate goodness-of-fit indices. Multi-group confirmatory factor analysis showed that the five-factor model (with correlated errors added) had partial strong measurement invariance across countries. A total of 11 of the 25 items were non-invariant across samples. The internal consistency of the Total Difficulties score was 0.84, ranging between 0.69 and 0.78 for the SDQ subscales. The findings indicate that the SDQ subscales need to be modified in various ways for screening emotional and behavioural problems in the five European countries analysed.

  17. Using the Kernel Method of Test Equating for Estimating the Standard Errors of Population Invariance Measures

    ERIC Educational Resources Information Center

    Moses, Tim

    2008-01-01

    Equating functions are supposed to be population invariant, meaning that the choice of subpopulation used to compute the equating function should not matter. The extent to which equating functions are population invariant is typically assessed in terms of practical difference criteria that do not account for equating functions' sampling…

  18. Longitudinal Invariance of the Wechsler Intelligence Scale for Children--Fourth Edition in a Referral Sample

    ERIC Educational Resources Information Center

    Richerson, Lindsay P.; Watkins, Marley W.; Beaujean, A. Alexander

    2014-01-01

    Measurement invariance of the Wechsler Intelligence Scale for Children--Fourth Edition (WISC-IV) was investigated with a group of 352 students eligible for psychoeducational evaluations tested, on average, 2.8 years apart. Configural, metric, and scalar invariance were found. However, the error variance of the Coding subtest was not constant…

  19. Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN

    NASA Astrophysics Data System (ADS)

    Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.

    2016-12-01

    In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 over 69 pixels in the Urmia Lake basin in northwest Iran. Different analytical approaches and indexes are used to examine PERSIANN's precision in the detection and estimation of rainfall rate. The residuals are decomposed into Hit, Miss, and False Alarm (FA) estimation biases, and the systematic and random error components are analyzed seasonally and categorically. A new interpretation of estimation accuracy, termed "reliability of PERSIANN estimations", is introduced, and the behaviour of the existing categorical/statistical measures and error components is analyzed seasonally over different rainfall rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin, a semi-arid region in the Middle East, including the following:
    - The contingency table indexes indicate better detection precision during spring and fall.
    - A relatively constant level of error is generally observed across categories. The range of precipitation estimates at different rainfall rate categories is nearly invariant, a sign of systematic error.
    - Low reliability is observed for PERSIANN estimations at different categories, mostly associated with a high level of FA error. However, as the rate of precipitation increases, the ability and precision of PERSIANN in rainfall detection also increase.
    - The decomposition into systematic and random errors in this area shows that PERSIANN has more difficulty modeling the system and pattern of rainfall than bias due to rainfall uncertainties. The level of systematic error also increases considerably in heavier rainfalls.
    It is also important to note that PERSIANN's error characteristics vary by season with the conditions and rainfall patterns of that season, which shows the need for a seasonally differentiated approach to the calibration of this product. Overall, we believe that the error component analyses performed in this study can substantially help further local studies on post-calibration and bias reduction of PERSIANN estimations.
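One common way to split residuals into systematic and random parts, in the spirit of the decomposition discussed above, is Willmott's regression-based decomposition. A minimal sketch (this is a standard textbook decomposition, not necessarily the exact one the study used; the example data are invented):

```python
def willmott_decomposition(est, obs):
    """Split MSE into systematic and random parts (Willmott, 1981):
    regress estimates on observations, est_hat = a + b*obs, then
    MSE_sys = mean((est_hat - obs)^2) captures the fitted bias pattern and
    MSE_rand = mean((est - est_hat)^2) the scatter about it; they sum to MSE."""
    n = len(obs)
    mo = sum(obs) / n
    me = sum(est) / n
    b = (sum((o - mo) * (e - me) for o, e in zip(obs, est))
         / sum((o - mo) ** 2 for o in obs))
    a = me - b * mo
    fit = [a + b * o for o in obs]
    mse_sys = sum((f - o) ** 2 for f, o in zip(fit, obs)) / n
    mse_rand = sum((e - f) ** 2 for e, f in zip(est, fit)) / n
    return mse_sys, mse_rand

obs = [1.0, 2.0, 3.0, 4.0, 5.0]           # hypothetical gauge rainfall
est = [0.5 * o for o in obs]               # purely systematic underestimate
print(willmott_decomposition(est, obs))    # random part is zero here
```

A satellite product whose error is mostly systematic, as reported here for heavy rainfall, is in principle correctable by post-calibration, which is why the distinction matters.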

  20. Cartan invariants and event horizon detection

    NASA Astrophysics Data System (ADS)

    Brooks, D.; Chavy-Waddy, P. C.; Coley, A. A.; Forget, A.; Gregoris, D.; MacCallum, M. A. H.; McNutt, D. D.

    2018-04-01

    We show that it is possible to locate the event horizon of a black hole (in arbitrary dimensions) by the zeros of certain Cartan invariants. This approach accounts for the recent results on the detection of stationary horizons using scalar polynomial curvature invariants, and improves upon them since the proposed method is computationally less expensive. As an application, we produce Cartan invariants that locate the event horizons for various exact four-dimensional and five-dimensional stationary, asymptotically flat (or (anti) de Sitter), black hole solutions and compare the Cartan invariants with the corresponding scalar curvature invariants that detect the event horizon.

  1. A Temporal Model of Level-Invariant, Tone-in-Noise Detection

    ERIC Educational Resources Information Center

    Berg, Bruce G.

    2004-01-01

    Level-invariant detection refers to findings that thresholds in tone-in-noise detection are unaffected by roving-level procedures that degrade energy cues. Such data are inconsistent with ideas that detection is based on the energy passed by an auditory filter. A hypothesis that detection is based on a level-invariant temporal cue is advanced.…

  2. A scale-invariant change detection method for land use/cover change research

    NASA Astrophysics Data System (ADS)

    Xing, Jin; Sieber, Renee; Caelli, Terrence

    2018-07-01

    Land Use/Cover Change (LUCC) detection relies increasingly on comparing remote sensing images with different spatial and spectral scales. Based on scale-invariant image analysis algorithms from computer vision, we propose a scale-invariant LUCC detection method to identify changes from scale-heterogeneous images. The method is composed of an entropy-based spatial decomposition; two scale-invariant feature extraction methods, the Maximally Stable Extremal Region (MSER) and Scale-Invariant Feature Transform (SIFT) algorithms; a spatial regression voting method to integrate the MSER and SIFT results; a Markov random field-based smoothing method; and a support vector machine classification method to assign LUCC labels. We test the scale invariance of the new method in a LUCC case study in Montreal, Canada, 2005-2012. We found that the scale-invariant LUCC detection method provides accuracy similar to the resampling-based approach while avoiding the LUCC distortion incurred by resampling.

  3. Design principles in telescope development: invariance, innocence, and the costs

    NASA Astrophysics Data System (ADS)

    Steinbach, Manfred

    1997-03-01

    Instrument design is, for the most part, a battle against errors and costs. Passive methods of error damping are in many cases effective and inexpensive. This paper shows examples of error minimization in our design of telescopes, instrumentation and evaluation instruments.

  4. Efficient RPG detection in noisy 3D image data

    NASA Astrophysics Data System (ADS)

    Pipitone, Frank

    2011-06-01

    We address the automatic detection of ambush weapons such as rocket-propelled grenades (RPGs) from range data, which might be derived from multiple-camera stereo with textured illumination or by other means. We describe our initial work in a new project involving the efficient acquisition of 3D scene data, as well as discrete-point invariant techniques, to perform real-time search for threats to a convoy. The shapes of the jump boundaries in the scene are exploited in this paper, rather than on-surface points, due to the large error typical of depth measurement at long range and the relatively high resolution obtainable in the transverse direction. We describe examples of the generation of a novel range-scaled chain code for detecting and matching jump boundaries.

  5. Quantification of organ motion based on an adaptive image-based scale invariant feature method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paganelli, Chiara; Peroni, Marta; Baroni, Guido

    2013-11-15

    Purpose: The availability of corresponding landmarks in IGRT image series allows quantifying the inter- and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast. Methods: An adaptive scale invariant feature transform (SIFT) was developed from the integration of a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained with adaptive SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale to the exhale phase. The residual distances between the warped manual landmarks and their reference positions in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets. Results: The phantom study confirmed the method's invariance and robustness to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adaptive SIFT algorithm on the 4D CT dataset provided automated and accurate detection of peak-to-peak breathing motion. The proposed method resulted in reduced residual errors with respect to standard SIFT, providing a motion description comparable to expert manual identification, as confirmed by DIR. Conclusions: The application of the method to a 4D lung CT patient dataset demonstrated the potential of adaptive SIFT as an automatic tool to detect landmarks for DIR regularization and internal motion quantification. Future work should include optimization of the computational cost and application of the method to other anatomical sites and image modalities.

  6. Detection and correction of patient movement in prostate brachytherapy seed reconstruction

    NASA Astrophysics Data System (ADS)

    Lam, Steve T.; Cho, Paul S.; Marks, Robert J., II; Narayanan, Sreeram

    2005-05-01

    Intraoperative dosimetry of prostate brachytherapy can help optimize the dose distribution and potentially improve clinical outcome. Evaluation of dose distribution during the seed implant procedure requires the knowledge of 3D seed coordinates. Fluoroscopy-based seed localization is a viable option. From three x-ray projections obtained at different gantry angles, 3D seed positions can be determined. However, when local anaesthesia is used for prostate brachytherapy, the patient movement during fluoroscopy image capture becomes a practical problem. If uncorrected, the errors introduced by patient motion between image captures would cause seed mismatches. Subsequently, the seed reconstruction algorithm would either fail to reconstruct or yield erroneous results. We have developed an algorithm that permits detection and correction of patient movement that may occur between fluoroscopy image captures. The patient movement is decomposed into translational shifts along the tabletop and rotation about an axis perpendicular to the tabletop. The property of spatial invariance of the co-planar imaging geometry is used for lateral movement correction. Cranio-caudal movement is corrected by analysing the perspective invariance along the x-ray axis. Rotation is estimated by an iterative method. The method can detect and correct for the range of patient movement commonly seen in the clinical environment. The algorithm has been implemented for routine clinical use as the preprocessing step for seed reconstruction.
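
    The reconstruction of 3D seed positions from multiple projections can be illustrated with a generic least-squares (direct linear transform) triangulation; this is a sketch under assumed pinhole projection matrices, not the paper's reconstruction algorithm:

```python
import numpy as np

def triangulate(projections, points2d):
    """Least-squares 3D point from two or more views via the direct
    linear transform. projections: list of 3x4 camera matrices;
    points2d: the matching (u, v) image coordinates."""
    rows = []
    for P, (u, v) in zip(projections, points2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.array(rows))
    X = vt[-1]                      # null vector of the stacked system
    return X[:3] / X[3]

def proj(P, X):
    x = P @ X
    return x[:2] / x[2]

# Synthetic check: project a known seed from two gantry angles, recover it.
X_true = np.array([1.0, 2.0, 3.0, 1.0])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
R = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
P2 = np.hstack([R, np.array([[0.0], [0.0], [6.0]])])
X_hat = triangulate([P1, P2], [proj(P1, X_true), proj(P2, X_true)])
```

    Patient motion between image captures perturbs the (u, v) inputs inconsistently across views, which is why the paper's correction step is needed before reconstruction.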

  7. A Test for Cluster Bias: Detecting Violations of Measurement Invariance across Clusters in Multilevel Data

    ERIC Educational Resources Information Center

    Jak, Suzanne; Oort, Frans J.; Dolan, Conor V.

    2013-01-01

    We present a test for cluster bias, which can be used to detect violations of measurement invariance across clusters in 2-level data. We show how measurement invariance assumptions across clusters imply measurement invariance across levels in a 2-level factor model. Cluster bias is investigated by testing whether the within-level factor loadings…

  8. Measurement invariance across educational levels and gender in 12-item Zarit Burden Interview (ZBI) on caregivers of people with dementia.

    PubMed

    Lin, Chung-Ying; Ku, Li-Jung Elizabeth; Pakpour, Amir H

    2017-11-01

    The Zarit Burden Interview (ZBI) is a commonly used self-report to assess caregiver burden. A 12-item short form of the ZBI has been developed; however, its measurement invariance has not been examined across some different demographics. It is unclear whether different genders and educational levels of a population interpret the ZBI items similarly. Therefore, this study aimed to examine the measurement invariance of the 12-item ZBI across gender and educational levels in a Taiwanese sample. Caregivers who had a family member with dementia (n = 270) completed the ZBI through telephone interviews. Three confirmatory factor analysis (CFA) models were conducted: Model 1 was the configural model; Model 2 constrained all factor loadings; Model 3 constrained all factor loadings and item intercepts. Multiple group CFAs and the differential item functioning (DIF) contrast under Rasch analyses were used to detect measurement invariance across males (n = 100) and females (n = 170) and across educational levels of junior high school and below (n = 86) and senior high school and above (n = 183). The fit index differences between models supported measurement invariance across gender and across educational levels (∆ comparative fit index (CFI) = -0.010 and 0.003; ∆ root mean square error of approximation (RMSEA) = -0.006 to 0.004). No substantial DIF contrast was found across gender or educational levels (value = -0.36 to 0.29). The ZBI is appropriate for combined use and for comparisons in caregivers across gender and different educational levels in Taiwan.
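
    The ΔCFI/ΔRMSEA comparison between nested models can be sketched as a simple decision rule; the cutoffs below (|ΔCFI| ≤ .01, |ΔRMSEA| ≤ .015) are conventional values from the invariance-testing literature, not taken from this study:

```python
def invariance_supported(delta_cfi, delta_rmsea,
                         cfi_cut=0.01, rmsea_cut=0.015):
    """Flag measurement invariance between nested CFA models from
    fit-index changes, using conventional cutoffs (hedged: the exact
    thresholds are a judgment call, not part of this study's report)."""
    return abs(delta_cfi) <= cfi_cut and abs(delta_rmsea) <= rmsea_cut

# Values in the range the abstract reports lead to "supported":
ok = invariance_supported(-0.010, 0.004)
```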

  9. Deep-HiTS: Rotation Invariant Convolutional Neural Network for Transient Detection

    NASA Astrophysics Data System (ADS)

    Cabrera-Vives, Guillermo; Reyes, Ignacio; Förster, Francisco; Estévez, Pablo A.; Maureira, Juan-Carlos

    2017-02-01

    We introduce Deep-HiTS, a rotation-invariant convolutional neural network (CNN) model for classifying images of transient candidates into artifacts or real sources for the High cadence Transient Survey (HiTS). CNNs have the advantage of learning the features automatically from the data while achieving high performance. We compare our CNN model against a feature engineering approach using random forests (RFs). We show that our CNN significantly outperforms the RF model, reducing the error by almost half. Furthermore, for a fixed number of approximately 2000 allowed false transient candidates per night, we are able to reduce the misclassified real transients by approximately one-fifth. To the best of our knowledge, this is the first time CNNs have been used to detect astronomical transient events. Our approach will be very useful when processing images from next generation instruments such as the Large Synoptic Survey Telescope. We have made all our code and data available to the community for the sake of allowing further developments and comparisons at https://github.com/guille-c/Deep-HiTS. Deep-HiTS is licensed under the terms of the GNU General Public License v3.0.
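
    One simple way to obtain exact invariance to 90-degree rotations, averaging a score over the four rotated copies of the input, can be sketched as follows (an illustration of the general idea; Deep-HiTS itself feeds rotated copies through the CNN rather than averaging an external score):

```python
import numpy as np

def rotation_averaged_score(image, score_fn):
    """Average a classifier score over the four 90-degree rotations of
    the input; the result is invariant to 90-degree rotations of the
    image regardless of what score_fn does internally."""
    return np.mean([score_fn(np.rot90(image, k)) for k in range(4)])

rng = np.random.default_rng(0)
img = rng.random((8, 8))
score = lambda x: float(x[0, 0])      # stand-in "network", not invariant
s1 = rotation_averaged_score(img, score)
s2 = rotation_averaged_score(np.rot90(img), score)
```

    Averaging over the whole rotation group is what makes the combined score invariant even though the stand-in score function is not.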

  10. The scale invariant generator technique for quantifying anisotropic scale invariance

    NASA Astrophysics Data System (ADS)

    Lewis, G. M.; Lovejoy, S.; Schertzer, D.; Pecknold, S.

    1999-11-01

    Scale invariance is rapidly becoming a new paradigm for geophysics. However, little attention has been paid to the anisotropy that is invariably present in geophysical fields in the form of differential stratification and rotation, texture and morphology. In order to account for scaling anisotropy, the formalism of generalized scale invariance (GSI) was developed. Until now there has existed only a single fairly ad hoc GSI analysis technique valid for studying differential rotation. In this paper, we use a two-dimensional representation of the linear approximation to generalized scale invariance, to obtain a much improved technique for quantifying anisotropic scale invariance called the scale invariant generator technique (SIG). The accuracy of the technique is tested using anisotropic multifractal simulations and error estimates are provided for the geophysically relevant range of parameters. It is found that the technique yields reasonable estimates for simulations with a diversity of anisotropic and statistical characteristics. The scale invariant generator technique can profitably be applied to the scale invariant study of vertical/horizontal and space/time cross-sections of geophysical fields as well as to the study of the texture/morphology of fields.

  11. Revisiting the Scale-Invariant, Two-Dimensional Linear Regression Method

    ERIC Educational Resources Information Center

    Patzer, A. Beate C.; Bauer, Hans; Chang, Christian; Bolte, Jan; Sülzle, Detlev

    2018-01-01

    The scale-invariant way to analyze two-dimensional experimental and theoretical data with statistical errors in both the independent and dependent variables is revisited by using what we call the triangular linear regression method. This is compared to the standard least-squares fit approach by applying it to typical simple sets of example data…

  12. Invariance of parent ratings of the ADHD symptoms in Australian and Malaysian, and north European Australian and Malay Malaysia children: a mean and covariance structures analysis approach.

    PubMed

    Gomez, Rapson

    2009-03-01

    This study used the mean and covariance structures analysis approach to examine the equality or invariance of ratings of the 18 ADHD symptoms. 783 Australian and 928 Malaysian parents provided ratings for an ADHD rating scale. Invariance was tested across these groups (Comparison 1), and North European Australian (n = 623) and Malay Malaysian (n = 571, Comparison 2) groups. Results indicate support for form and item factor loading invariance; more than half the total number of symptoms showed item intercept invariance, and 14 symptoms showed invariance for error variances. There was invariance for both the factor variances and the covariance, and the latent mean scores for hyperactivity/impulsivity. For inattention latent scores, the Malaysian (Comparison 1) and Malay Malaysian (Comparison 2) groups had higher scores. These results indicate fairly good support for invariance for parent ratings of the ADHD symptoms across the groups compared.

  13. SimCheck: An Expressive Type System for Simulink

    NASA Technical Reports Server (NTRS)

    Roy, Pritam; Shankar, Natarajan

    2010-01-01

    MATLAB Simulink is a member of a class of visual languages that are used for modeling and simulating physical and cyber-physical systems. A Simulink model consists of blocks with input and output ports connected using links that carry signals. We extend the type system of Simulink with annotations and dimensions/units associated with ports and links. These types can capture invariants on signals as well as relations between signals. We define a type checker that checks the well-formedness of Simulink blocks with respect to these type annotations. The type checker generates proof obligations that are solved by SRI's Yices solver for satisfiability modulo theories (SMT). This translation can be used to detect type errors, demonstrate counterexamples, generate test cases, or prove the absence of type errors. Our work is an initial step toward the symbolic analysis of MATLAB Simulink models.
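
    The units-as-types idea can be illustrated with a toy dimensional algebra that flags mismatched signals; this sketch is unrelated to SimCheck's actual SMT encoding:

```python
from collections import Counter

def mul(u, v):
    """Multiply two unit signatures, e.g. {'m': 1} * {'s': -1} -> m/s.
    Units are dicts from base dimension to integer exponent."""
    out = Counter(u)
    out.update(v)
    return {d: e for d, e in out.items() if e != 0}

def check_link(expected, actual):
    """Raise a 'type error' when a signal's units don't match its port."""
    if expected != actual:
        raise TypeError(f"unit mismatch: expected {expected}, got {actual}")

velocity = mul({'m': 1}, {'s': -1})   # m * 1/s -> m/s
distance = mul(velocity, {'s': 1})    # (m/s) * s -> m
check_link({'m': 1}, distance)        # well-formed: no exception
```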

  14. All optical logic for optical pattern recognition and networking applications

    NASA Astrophysics Data System (ADS)

    Khoury, Jed

    2017-05-01

    In this paper, we propose architectures for the implementation of 16 Boolean optical gates from two inputs using an externally pumped phase-conjugate Michelson interferometer. Depending on the gate to be implemented, some require a single-stage interferometer and others require a two-stage interferometer. The proposed optical gates can be used in several applications in optical networks including, but not limited to, all-optical packet-router switching and all-optical error detection. The optical logic gates can also be used in the recognition of noiseless rotation- and scale-invariant objects such as fingerprints for homeland security applications.
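
    The 16 two-input Boolean gates are exactly the 16 possible 4-bit truth tables; a plain-software enumeration (illustrative only, independent of the optical implementation):

```python
def boolean_gate(index, a, b):
    """Evaluate one of the 16 two-input Boolean functions, indexed by
    its 4-bit truth table over input rows (a, b) in {0, 1}."""
    row = (a << 1) | b          # which truth-table row (0..3) applies
    return (index >> row) & 1

# A few familiar gates by their truth-table index (rows 00,01,10,11
# map to bits 0..3):
AND, OR, XOR = 0b1000, 0b1110, 0b0110
```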

  15. Temperature-dependent errors in nuclear lattice simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Dean; Thomson, Richard

    2007-06-15

    We study the temperature dependence of discretization errors in nuclear lattice simulations. We find that for systems with strong attractive interactions the predominant error arises from the breaking of Galilean invariance. We propose a local 'well-tempered' lattice action which eliminates much of this error. The well-tempered action can be readily implemented in lattice simulations for nuclear systems as well as cold atomic Fermi systems.

  16. Galilean-invariant preconditioned central-moment lattice Boltzmann method without cubic velocity errors for efficient steady flow simulations

    NASA Astrophysics Data System (ADS)

    Hajabdollahi, Farzaneh; Premnath, Kannan N.

    2018-05-01

    Lattice Boltzmann (LB) models used for the computation of fluid flows represented by the Navier-Stokes (NS) equations on standard lattices can lead to non-Galilean-invariant (GI) viscous stress involving cubic velocity errors. This arises from the dependence of their third-order diagonal moments on the first-order moments for standard lattices, and strategies have recently been introduced to restore Galilean invariance without such errors using a modified collision operator involving corrections to either the relaxation times or the moment equilibria. Convergence acceleration in the simulation of steady flows can be achieved by solving the preconditioned NS equations, which contain a preconditioning parameter that can be used to tune the effective sound speed and thereby alleviate the numerical stiffness. In the present paper, we present a GI formulation of the preconditioned cascaded central-moment LB method used to solve the preconditioned NS equations, which is free of cubic velocity errors on a standard lattice, for steady flows. A Chapman-Enskog analysis reveals the structure of the spurious non-GI defect terms and it is demonstrated that the anisotropy of the resulting viscous stress is dependent on the preconditioning parameter, in addition to the fluid velocity. It is shown that partial correction to eliminate the cubic velocity defects is achieved by scaling the cubic velocity terms in the off-diagonal third-order moment equilibria with the square of the preconditioning parameter. Furthermore, we develop additional corrections based on the extended moment equilibria involving gradient terms with coefficients dependent locally on the fluid velocity and the preconditioning parameter. Such parameter-dependent corrections eliminate the remaining truncation errors arising from the degeneracy of the diagonal third-order moments and fully restore Galilean invariance without cubic defects for the preconditioned LB scheme on a standard lattice.
Several conclusions are drawn from the analysis of the structure of the non-GI errors and the associated corrections, with particular emphasis on their dependence on the preconditioning parameter. The GI preconditioned central-moment LB method is validated for a number of complex flow benchmark problems and its effectiveness to achieve convergence acceleration and improvement in accuracy is demonstrated.

  17. A Unified Methodology for Computing Accurate Quaternion Color Moments and Moment Invariants.

    PubMed

    Karakasis, Evangelos G; Papakostas, George A; Koulouriotis, Dimitrios E; Tourassis, Vassilios D

    2014-02-01

    In this paper, a general framework for computing accurate quaternion color moments and their corresponding invariants is proposed. The proposed unified scheme arose from studying the characteristics of different orthogonal polynomials. These polynomials are used as kernels in order to form moments, the invariants of which can easily be derived. The resulting scheme permits the usage of any polynomial-like kernel in a unified and consistent way. The resulting moments and moment invariants demonstrate robustness to noisy conditions and high discriminative power. Additionally, in the case of continuous moments, accurate computations take place to avoid approximation errors. Based on this general methodology, the quaternion Tchebichef, Krawtchouk, Dual Hahn, Legendre, orthogonal Fourier-Mellin, pseudo-Zernike, and Zernike color moments, and their corresponding invariants are introduced. A selected paradigm presents the reconstruction capability of each moment family, whereas proper classification scenarios evaluate the performance of the color moment invariants.

  18. Invariance and variability in interaction error-related potentials and their consequences for classification

    NASA Astrophysics Data System (ADS)

    Abu-Alqumsan, Mohammad; Kapeller, Christoph; Hintermüller, Christoph; Guger, Christoph; Peer, Angelika

    2017-12-01

    Objective. This paper discusses the invariance and variability in interaction error-related potentials (ErrPs), where a special focus is laid upon the factors of (1) the human mental processing required to assess interface actions (2) time (3) subjects. Approach. Three different experiments were designed as to vary primarily with respect to the mental processes that are necessary to assess whether an interface error has occurred or not. The three experiments were carried out with 11 subjects in a repeated-measures experimental design. To study the effect of time, a subset of the recruited subjects additionally performed the same experiments on different days. Main results. The ErrP variability across the different experiments for the same subjects was found largely attributable to the different mental processing required to assess interface actions. Nonetheless, we found that interaction ErrPs are empirically invariant over time (for the same subject and same interface) and to a lesser extent across subjects (for the same interface). Significance. The obtained results may be used to explain across-study variability of ErrPs, as well as to define guidelines for approaches to the ErrP classifier transferability problem.

  19. Issues associated with Galilean invariance on a moving solid boundary in the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Peng, Cheng; Geneva, Nicholas; Guo, Zhaoli; Wang, Lian-Ping

    2017-01-01

    In lattice Boltzmann simulations involving moving solid boundaries, the momentum exchange between the solid and fluid phases was recently found to be not fully consistent with the principle of local Galilean invariance (GI) when the bounce-back schemes (BBS) and the momentum exchange method (MEM) are used. In the past, this inconsistency was resolved by introducing modified MEM schemes so that the overall moving-boundary algorithm could be more consistent with GI. However, in this paper we argue that the true origin of this violation of Galilean invariance (VGI) in the presence of a moving solid-fluid interface is due to the BBS itself, as the VGI error not only exists in the hydrodynamic force acting on the solid phase, but also in the boundary force exerted on the fluid phase, according to Newton's Third Law. The latter, however, has so far gone unnoticed in previously proposed modified MEM schemes. Based on this argument, we conclude that the previous modifications to the momentum exchange method are incomplete solutions to the VGI error in the lattice Boltzmann method (LBM). An implicit remedy to the VGI error in the LBM and its limitation is then revealed. To address the VGI error for a case when this implicit remedy does not exist, a bounce-back scheme based on coordinate transformation is proposed. Numerical tests in both laminar and turbulent flows show that the proposed scheme can effectively eliminate the errors associated with the usual bounce-back implementations on a no-slip solid boundary, and it can maintain an accurate momentum exchange calculation with minimal computational overhead.

  20. A geometrical defect detection method for non-silicon MEMS part based on HU moment invariants of skeleton image

    NASA Astrophysics Data System (ADS)

    Cheng, Xu; Jin, Xin; Zhang, Zhijing; Lu, Jun

    2014-01-01

    In order to improve the accuracy of geometrical defect detection, this paper presents a method based on HU moment invariants of skeleton images. This method has four steps: first, grayscale images of non-silicon MEMS parts are collected and converted into binary images; second, skeletons of the binary images are extracted using a medial-axis-transform method; then, HU moment invariants of the skeleton images are calculated; finally, differences of HU moment invariants between measured parts and qualified parts are obtained to determine whether there are geometrical defects. To demonstrate the availability of this method, experiments were carried out on skeleton images and grayscale images, and the results show that when defects of non-silicon MEMS parts are the same, HU moment invariants of skeleton images are more sensitive than those of grayscale images, and detection accuracy is higher. Therefore, this method can more accurately determine whether non-silicon MEMS parts are qualified or not, and can be applied to a non-silicon MEMS part detection system.
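
    The first Hu moment invariant can be computed directly from normalized central moments; a minimal NumPy sketch (not the paper's implementation) that also checks its invariance to translation and 90-degree rotation:

```python
import numpy as np

def hu1(img):
    """First Hu moment invariant, eta20 + eta02, of a binary image.
    Central moments give translation invariance; dividing by m00**2
    normalizes scale for the second-order moments."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    mu20 = ((x - xc) ** 2 * img).sum()
    mu02 = ((y - yc) ** 2 * img).sum()
    return (mu20 + mu02) / m00 ** 2

# A small asymmetric "skeleton": hu1 is unchanged by shifting the shape
# and by rotating the image 90 degrees.
shape = np.zeros((9, 9))
shape[2:7, 4] = 1
shape[2, 4:7] = 1
v0 = hu1(shape)
v_shift = hu1(np.roll(shape, (1, 2), axis=(0, 1)))
v_rot = hu1(np.rot90(shape))
```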

  1. Hedonic price models with omitted variables and measurement errors: a constrained autoregression-structural equation modeling approach with application to urban Indonesia

    NASA Astrophysics Data System (ADS)

    Suparman, Yusep; Folmer, Henk; Oud, Johan H. L.

    2014-01-01

    Omitted variables and measurement errors in explanatory variables frequently occur in hedonic price models. Ignoring these problems leads to biased estimators. In this paper, we develop a constrained autoregression-structural equation model (ASEM) to handle both types of problems. Standard panel data models to handle omitted variables bias are based on the assumption that the omitted variables are time-invariant. ASEM allows handling of both time-varying and time-invariant omitted variables by constrained autoregression. In the case of measurement error, standard approaches require additional external information which is usually difficult to obtain. ASEM exploits the fact that panel data are repeatedly measured which allows decomposing the variance of a variable into the true variance and the variance due to measurement error. We apply ASEM to estimate a hedonic housing model for urban Indonesia. To get insight into the consequences of measurement error and omitted variables, we compare the ASEM estimates with the outcomes of (1) a standard SEM, which does not account for omitted variables, (2) a constrained autoregression model, which does not account for measurement error, and (3) a fixed effects hedonic model, which ignores measurement error and time-varying omitted variables. The differences between the ASEM estimates and the outcomes of the three alternative approaches are substantial.

  2. Bounding the errors for convex dynamics on one or more polytopes.

    PubMed

    Tresser, Charles

    2007-09-01

    We discuss the greedy algorithm for approximating a sequence of inputs in a family of polytopes lying in affine spaces by an output sequence made of vertices of the respective polytopes. More precisely, we consider here the case when the greed of the algorithm is dictated by the Euclidean norms of the successive cumulative errors. This algorithm can be interpreted as a time-dependent dynamical system in the vector space, where the errors live, or as a time-dependent dynamical system in an affine space containing copies of all the original polytopes. This affine space contains the inputs, as well as the inputs modified by adding the respective former errors; it is the evolution of these modified inputs that the dynamical system in affine space describes. Scheduling problems with many polytopes arise naturally, for instance, when the inputs are from a single polytope P, but one imposes the constraint that whenever the input belongs to a codimension n face, the output has to be in the same codimension n face (as when scheduling drivers among participants of a carpool). It has been previously shown that the error is bounded in the case of a single polytope by proving the existence of an arbitrary large convex invariant region for the dynamics in affine space: A region that is simultaneously invariant for several polytopes, each considered separately, was also constructed. It was then shown that there cannot be an invariant region in affine space in the general case of a family of polytopes. Here we prove the existence of an arbitrary large convex invariant set for the dynamics in the vector space in the case when the sizes of the polytopes in the family are bounded and the set of all the outgoing normals to all the faces of all the polytopes is finite. 
It was also previously known that starting from zero as the initial error set, the error set could not be saturated in finitely many steps in some cases with several polytopes: Contradicting a former conjecture, we show that the same happens for some single quadrilaterals and for a single pentagon with an axial symmetry. The disproof of that conjecture is the new piece of information that leads us to expect, and then to verify, as we recount here, that the proof that the errors are bounded in the general case could be a small step beyond the proof of the same statement for the single polytope case.
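
    The greedy algorithm the abstract describes can be sketched in one dimension: at each step, emit the polytope vertex closest to the input plus the accumulated error (illustrative code, not from the paper):

```python
import numpy as np

def greedy_round(inputs, vertices):
    """Greedy convex-dynamics rounding: at each step emit the vertex
    minimizing the Euclidean norm of the new cumulative error, i.e. the
    vertex closest to input + accumulated error."""
    err = np.zeros_like(np.asarray(vertices[0], dtype=float))
    outputs = []
    for p in inputs:
        target = np.asarray(p, dtype=float) + err
        v = min(vertices, key=lambda v: np.linalg.norm(target - np.asarray(v)))
        outputs.append(v)
        err = target - np.asarray(v)
    return outputs, err

# Segment [0, 1] with vertices {0, 1}; a constant input of 0.5 makes the
# output alternate while the cumulative error stays bounded.
outs, final_err = greedy_round([[0.5]] * 6, [[0.0], [1.0]])
```

    Boundedness of `err` for a single polytope is exactly the invariant-region result the abstract cites; the interesting question is what survives with several polytopes.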

  4. Seasonal variation in size-dependent survival of juvenile Atlantic salmon (Salmo salar): Performance of multistate capture-mark-recapture models

    USGS Publications Warehouse

    Letcher, B.H.; Horton, G.E.

    2008-01-01

    We estimated the magnitude and shape of size-dependent survival (SDS) across multiple sampling intervals for two cohorts of stream-dwelling Atlantic salmon (Salmo salar) juveniles using multistate capture-mark-recapture (CMR) models. Simulations designed to test the effectiveness of multistate models for detecting SDS in our system indicated that error in SDS estimates was low and that both time-invariant and time-varying SDS could be detected with sample sizes of >250, average survival of >0.6, and average probability of capture of >0.6, except for cases of very strong SDS. In the field (N ≈ 750, survival 0.6-0.8 among sampling intervals, probability of capture 0.6-0.8 among sampling occasions), about one-third of the sampling intervals showed evidence of SDS, with poorer survival of larger fish during the age-2+ autumn and quadratic survival (opposite direction between cohorts) during age-1+ spring. The varying magnitude and shape of SDS among sampling intervals suggest a potential mechanism for the maintenance of the very wide observed size distributions. Estimating SDS using multistate CMR models appears complementary to established approaches, can provide estimates with low error, and can be used to detect intermittent SDS. © 2008 NRC Canada.

  5. Tumor Burden Analysis on Computed Tomography by Automated Liver and Tumor Segmentation

    PubMed Central

    Linguraru, Marius George; Richbourg, William J.; Liu, Jianfei; Watt, Jeremy M.; Pamulapati, Vivek; Wang, Shijun; Summers, Ronald M.

    2013-01-01

    The paper presents the automated computation of hepatic tumor burden from abdominal CT images of diseased populations with inconsistent image enhancement. The automated segmentation of livers is addressed first. A novel three-dimensional (3D) affine invariant shape parameterization is employed to compare local shape across organs. By generating a regular sampling of the organ's surface, this parameterization can be effectively used to compare features of a set of closed 3D surfaces point-to-point, while avoiding common problems with the parameterization of concave surfaces. From an initial segmentation of the livers, the areas of atypical local shape are determined using training sets. A geodesic active contour locally corrects the segmentations of the livers in abnormal images. Graph cuts segment the hepatic tumors using shape and enhancement constraints. Liver segmentation errors are reduced significantly and all tumors are detected. Finally, support vector machines and feature selection are employed to reduce the number of false tumor detections. A tumor detection true-positive fraction of 100% is achieved at 2.3 false positives/case, and the tumor burden is estimated with 0.9% error. Results from the test data demonstrate the method's robustness in analyzing livers from difficult clinical cases, allowing the temporal monitoring of patients with hepatic cancer. PMID:22893379

  6. An estimation of distribution method for infrared target detection based on Copulas

    NASA Astrophysics Data System (ADS)

    Wang, Shuo; Zhang, Yiqun

    2015-10-01

    Track-before-detect (TBD) based target detection involves a hypothesis test of merit functions which measure each track as a possible target track. Its accuracy depends on the precision of the distribution of merit functions, which determines the threshold for a test. Generally, merit functions are regarded as Gaussian, and on this basis the distribution is estimated, which is true for most methods such as multiple hypothesis tracking (MHT). However, merit functions for some other methods, such as the dynamic programming algorithm (DPA), are non-Gaussian and cross-correlated. Since existing methods cannot reasonably measure the correlation, the exact distribution can hardly be estimated. If merit functions are assumed Gaussian and independent, the error between an actual distribution and its approximation may occasionally exceed 30 percent, and diverges as it propagates. Hence, in this paper, we propose a novel distribution estimation method based on Copulas, by which the distribution can be estimated precisely, with an error of less than 1 percent and no propagation. Moreover, the estimation depends only on the form of the merit functions and the structure of the tracking algorithm, and is invariant to measurements. Thus, the distribution can be estimated in advance, greatly reducing the demand for real-time calculation of distribution functions.
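
    Sampling from a Gaussian copula, one standard way to model cross-correlated quantities with non-Gaussian margins, can be sketched as follows (a generic illustration, not the paper's estimator):

```python
import math
import numpy as np

def gaussian_copula_uniforms(corr, n, seed=0):
    """Draw correlated Uniform(0,1) pairs through a Gaussian copula:
    sample correlated standard normals, then push each margin through
    the normal CDF. Any target margins can then be imposed by applying
    their inverse CDFs to the uniforms."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, corr], [corr, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    phi = np.vectorize(lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2))))
    return phi(z)

u = gaussian_copula_uniforms(0.8, 5000)
r = np.corrcoef(u[:, 0], u[:, 1])[0, 1]   # strongly positive
```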

  7. Reducing Sun Exposure for Prevention of Skin Cancers: Factorial Invariance and Reliability of the Self-Efficacy Scale for Sun Protection

    PubMed Central

    Babbin, Steven F.; Yin, Hui-Qing; Rossi, Joseph S.; Redding, Colleen A.; Paiva, Andrea L.; Velicer, Wayne F.

    2015-01-01

    The Self-Efficacy Scale for Sun Protection consists of two correlated factors with three items each for Sunscreen Use and Avoidance. This study evaluated two crucial psychometric assumptions, factorial invariance and scale reliability, with a sample of adults (N = 1356) participating in a computer-tailored, population-based intervention study. A measure has factorial invariance when the model is the same across subgroups. Three levels of invariance were tested, from least to most restrictive: (1) Configural Invariance (nonzero factor loadings unconstrained); (2) Pattern Identity Invariance (equal factor loadings); and (3) Strong Factorial Invariance (equal factor loadings and measurement errors). Strong Factorial Invariance was a good fit for the model across seven grouping variables: age, education, ethnicity, gender, race, skin tone, and Stage of Change for Sun Protection. Internal consistency coefficient Alpha and factor rho scale reliability, respectively, were .84 and .86 for Sunscreen Use, .68 and .70 for Avoidance, and .78 and .78 for the global (total) scale. The psychometric evidence demonstrates strong empirical support that the scale is consistent, has internal validity, and can be used to assess population-based adult samples. PMID:26457203

  8. Estimating Prediction Uncertainty from Geographical Information System Raster Processing: A User's Manual for the Raster Error Propagation Tool (REPTool)

    USGS Publications Warehouse

    Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.

    2009-01-01

    The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
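
Conceptually, the REPTool workflow can be sketched in a few lines: draw a Latin Hypercube design over the input errors, push each realization through the geospatial model, and summarize the per-cell spread. The rasters, coefficients, and error magnitudes below are hypothetical and this is not REPTool's API:

```python
import numpy as np
from scipy.stats import qmc, norm

# Hypothetical 2x2 input rasters and a simple linear geospatial model.
elev = np.array([[100.0, 110.0], [120.0, 130.0]])
slope = np.array([[2.0, 3.0], [4.0, 5.0]])
model = lambda e, s: 0.5 * e + 10.0 * s      # hypothetical coefficients

# Latin Hypercube Sampling of spatially invariant input errors,
# one standard deviation per input raster.
sampler = qmc.LatinHypercube(d=2, seed=2)
u = sampler.random(n=1000)                   # uniform (0, 1) design
err = norm.ppf(u) * np.array([5.0, 0.5])     # N(0, sigma) per raster

# Propagate each sampled error realization through the model.
runs = np.stack([model(elev + e0, slope + e1) for e0, e1 in err])
cell_std = runs.std(axis=0)                  # per-cell output uncertainty
```

For this linear model the sampled per-cell standard deviation should approach the analytic value sqrt((0.5*5)^2 + (10*0.5)^2) ≈ 5.59, which is a useful sanity check on the propagation.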

  9. A Reduced Dimension Static, Linearized Kalman Filter and Smoother

    NASA Technical Reports Server (NTRS)

    Fukumori, I.

    1995-01-01

    An approximate Kalman filter and smoother, based on approximations of the state estimation error covariance matrix, is described. Approximations include a reduction of the effective state dimension, use of a static asymptotic error limit, and a time-invariant linearization of the dynamic model for error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. Examples of use come from TOPEX/POSEIDON.
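
The static, time-invariant approximation can be illustrated by replacing per-step covariance propagation with the steady-state solution of the discrete Riccati equation; the system below is a hypothetical toy, not the TOPEX/POSEIDON model:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical 2-state system observed in one channel.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])               # observation operator
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance

# Static (asymptotic) error covariance from the discrete algebraic
# Riccati equation, using the control/filtering duality (A -> A',
# H -> B'). This replaces per-step covariance propagation.
P = solve_discrete_are(A.T, H.T, Q, R)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # time-invariant gain

assert np.allclose(P, P.T) and np.all(np.linalg.eigvalsh(P) > 0)
```

Once `K` is frozen, each update costs only a matrix-vector product, which is the source of the dramatic savings the abstract describes.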

  10. Examining Power and Type 1 Error for Step and Item Level Tests of Invariance: Investigating the Effect of the Number of Item Score Levels

    ERIC Educational Resources Information Center

    Ayodele, Alicia Nicole

    2017-01-01

    Within polytomous items, differential item functioning (DIF) can take on various forms due to the number of response categories. The lack of invariance at this level is referred to as differential step functioning (DSF). The most common DSF methods in the literature are the adjacent category log odds ratio (AC-LOR) estimator and cumulative…

  11. Detection of Structural Abnormalities Using Neural Nets

    NASA Technical Reports Server (NTRS)

    Zak, M.; Maccalla, A.; Daggumati, V.; Gulati, S.; Toomarian, N.

    1996-01-01

    This paper describes a feed-forward neural net approach for detection of abnormal system behavior based upon sensor data analyses. A new dynamical invariant representing structural parameters of the system is introduced in such a way that any structural abnormalities in the system behavior are detected from the corresponding changes to the invariant.

  12. Robust photometric invariant features from the color tensor.

    PubMed

    van de Weijer, Joost; Gevers, Theo; Smeulders, Arnold W M

    2006-01-01

    Luminance-based features are widely used as low-level input for computer vision applications, even when color data is available. The extension of feature detection to the color domain prevents information loss due to isoluminance and allows us to exploit the photometric information. To fully exploit the extra information in the color data, the vector nature of color data has to be taken into account and a sound framework is needed to combine feature and photometric invariance theory. In this paper, we focus on the structure tensor, or color tensor, which adequately handles the vector nature of color images. Further, we combine the features based on the color tensor with photometric invariant derivatives to arrive at photometric invariant features. We circumvent the drawback of unstable photometric invariants by deriving an uncertainty measure to accompany the photometric invariant derivatives. The uncertainty is incorporated in the color tensor, hereby allowing the computation of robust photometric invariant features. The combination of the photometric invariance theory and tensor-based features allows for detection of a variety of features such as photometric invariant edges, corners, optical flow, and curvature. The proposed features are tested for noise characteristics and robustness to photometric changes. Experiments show that the proposed features are robust to scene incidental events and that the proposed uncertainty measure improves the applicability of full invariants.
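
A minimal sketch of a di Zenzo-style color structure tensor shows how it responds to an isoluminant edge that a pure luminance gradient misses entirely; the image is a synthetic toy:

```python
import numpy as np

def color_structure_tensor(img):
    """Sum of per-channel gradient outer products (color tensor)."""
    gy = np.gradient(img, axis=0)   # (H, W, 3) vertical derivatives
    gx = np.gradient(img, axis=1)   # (H, W, 3) horizontal derivatives
    # Tensor components, summed over the color channels.
    gxx = (gx * gx).sum(axis=2)
    gyy = (gy * gy).sum(axis=2)
    gxy = (gx * gy).sum(axis=2)
    return gxx, gyy, gxy

# Hypothetical isoluminant edge: red-to-green step, constant luminance.
img = np.zeros((8, 8, 3))
img[:, :4, 0] = 1.0   # left half pure red
img[:, 4:, 1] = 1.0   # right half pure green

gxx, gyy, gxy = color_structure_tensor(img)
# The luminance image is flat, yet the color tensor fires on the edge.
```

Eigen-analysis of the 2x2 tensor [[gxx, gxy], [gxy, gyy]] at each pixel then yields the edge, corner, and curvature responses discussed in the paper.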

  13. Lorentz-invariant formulation of Cherenkov radiation by tachyons

    NASA Technical Reports Server (NTRS)

    Jones, F. C.

    1972-01-01

    Previous treatments of Cherenkov radiation, electromagnetic and gravitational, by tachyons were in error because the prescription employed to cut off the divergent integral over frequency is not a Lorentz-invariant procedure. The resulting equation of motion for the tachyon is therefore not covariant. The proper procedure requires an extended, deformable distribution of charge or mass and yields a particularly simple form for the tachyon's world line, one that could be deduced from simple invariance considerations. It is shown that Cherenkov radiation by tachyons implies their ultimate annihilation with an antitachyon and demonstrates a disturbing property of tachyons, namely the impossibility of specifying arbitrary Cauchy data even in a purely classical theory.

  14. Expression-invariant representations of faces.

    PubMed

    Bronstein, Alexander M; Bronstein, Michael M; Kimmel, Ron

    2007-01-01

    Addressed here is the problem of constructing and analyzing expression-invariant representations of human faces. We demonstrate and justify experimentally a simple geometric model that allows us to describe facial expressions as isometric deformations of the facial surface. The main step in the construction of an expression-invariant representation of a face involves embedding the facial intrinsic geometric structure into some low-dimensional space. We study the influence of the embedding space's geometry and dimensionality on the representation accuracy and argue that, compared to its Euclidean counterpart, spherical embedding leads to notably smaller metric distortions. We support our claim experimentally by showing that a smaller embedding error leads to better recognition.

  15. A linear shift-invariant image preprocessing technique for multispectral scanner systems

    NASA Technical Reports Server (NTRS)

    Mcgillem, C. D.; Riemer, T. E.

    1973-01-01

    A linear shift-invariant image preprocessing technique is examined which requires no specific knowledge of any parameter of the original image and which is sufficiently general to allow the effective radius of the composite imaging system to be arbitrarily shaped and reduced, subject primarily to the noise power constraint. In addition, the size of the point-spread function of the preprocessing filter can be arbitrarily controlled, thus minimizing truncation errors.

  16. Framework for Evaluating Loop Invariant Detection Games in Relation to Automated Dynamic Invariant Detectors

    DTIC Science & Technology

    2015-09-01

    The work checks each of its assertions for detectability by Daikon; the checker is an Excel Visual Basic for Applications (VBA) script.

  17. Invariance Detection within an Interactive System: A Perceptual Gateway to Language Development

    ERIC Educational Resources Information Center

    Gogate, Lakshmi J.; Hollich, George

    2010-01-01

    In this article, we hypothesize that "invariance detection," a general perceptual phenomenon whereby organisms attend to relatively stable patterns or regularities, is an important means by which infants tune in to various aspects of spoken language. In so doing, we synthesize a substantial body of research on detection of regularities across the…

  18. The Errors of Our Ways

    ERIC Educational Resources Information Center

    Kane, Michael

    2011-01-01

    Errors don't exist in our data, but they serve a vital function. Reality is complicated, but our models need to be simple in order to be manageable. We assume that attributes are invariant over some conditions of observation, and once we do that we need some way of accounting for the variability in observed scores over these conditions of…

  19. Solving the measurement invariance anchor item problem in item response theory.

    PubMed

    Meade, Adam W; Wright, Natalie A

    2012-09-01

    The efficacy of tests of differential item functioning (measurement invariance) has been well established. It is clear that when properly implemented, these tests can successfully identify differentially functioning (DF) items when they exist. However, an assumption of these analyses is that the metric for different groups is linked using anchor items that are invariant. In practice, however, it is impossible to be certain which items are DF and which are invariant. This problem of anchor items, or referent indicators, has long plagued invariance research, and a multitude of suggested approaches have been put forth. Unfortunately, the relative efficacy of these approaches has not been tested. This study compares 11 variations on 5 qualitatively different approaches from recent literature for selecting optimal anchor items. A large-scale simulation study indicates that for nearly all conditions, an easily implemented 2-stage procedure recently put forth by Lopez Rivas, Stark, and Chernyshenko (2009) provided optimal power while maintaining nominal Type I error. With this approach, appropriate anchor items can be easily and quickly located, resulting in more efficacious invariance tests. Recommendations for invariance testing are illustrated using a pedagogical example of employee responses to an organizational culture measure.

  20. Dimensionality and measurement invariance in the Satisfaction with Life Scale in Norway.

    PubMed

    Clench-Aas, Jocelyne; Nes, Ragnhild Bang; Dalgard, Odd Steffen; Aarø, Leif Edvard

    2011-10-01

    Results from previous studies examining the dimensionality and factorial invariance of the Satisfaction with Life Scale (SWLS) are inconsistent and often based on small samples. This study examines the factorial structure and factorial invariance of the SWLS in a Norwegian sample. Confirmatory factor analysis (AMOS) was conducted to explore dimensionality and test for measurement invariance in factor structure, factor loadings, intercepts, and residual variance across gender and four age groups in a large (N = 4,984), nationally representative sample of Norwegian men and women (15-79 years). The data supported a modified unidimensional structure. Factor loadings could be constrained to equality between the sexes, indicating metric invariance between genders. Further testing indicated invariance also at the strong and strict levels, thus allowing analyses involving group means. The SWLS was shown to be sensitive to age, however, at the strong and strict levels of invariance testing. In conclusion, the results of this Norwegian study seem to confirm that a unidimensional structure is acceptable, but that a modified single-factor model with correlations between the error terms of items 4 and 5 is preferred. Additionally, comparisons may be made between the genders. Caution must be exercised when comparing age groups.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    The systems resilience research community has developed methods to manually insert additional source-program-level assertions to trap errors, and has also devised tools to conduct fault injection studies for scalar program codes. In this work, we contribute the first vector-oriented LLVM-level fault injector, VULFI, to help study the effects of faults in vector architectures that are of growing importance, especially for vectorizing loops. Using VULFI, we conduct a resiliency study of nine real-world vector benchmarks using Intel's AVX and SSE extensions as the target vector instruction sets, and offer the first reported understanding of how faults affect vector instruction sets. We take this work further toward automating the insertion of resilience assertions during compilation. This is based on our observation that during intermediate (e.g., LLVM-level) code generation to handle full and partial vectorization, modern compilers exploit (and explicate in their code documentation) critical invariants. These invariants are turned into error-checking code. We confirm the efficacy of these automatically inserted low-overhead error detectors for vectorized for-loops.
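
A toy single-bit fault injector, in the spirit of (but unrelated to) VULFI, illustrates how an invariant check can trap an injected fault in a vectorized computation; the prefix-sum invariant is an illustrative choice, not one of the paper's detectors:

```python
import struct
import numpy as np

def flip_bit(x, bit):
    """Flip one bit of a float64 value (a toy stand-in for injection)."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (out,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return out

# Vectorized computation under test: prefix sums of positive inputs.
data = np.arange(1.0, 9.0)
clean = np.cumsum(data)

# Inject a sign-bit flip into one vector lane.
faulty = data.copy()
faulty[3] = flip_bit(faulty[3], 63)   # 4.0 becomes -4.0
result = np.cumsum(faulty)

# Invariant check in the style of compiler-emitted detectors:
# prefix sums of positive inputs must be strictly increasing and finite.
detects = lambda s: bool(np.any(np.diff(s) <= 0) or np.any(~np.isfinite(s)))
```

Here `detects(result)` fires while `detects(clean)` stays silent, which is the low-overhead detection pattern the abstract describes.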

  2. Blackout detection as a multiobjective optimization problem.

    PubMed

    Chaudhary, A M; Trachtenberg, E A

    1991-01-01

    We study new fast computational procedures for pilot blackout (total loss of vision) detection in real time. Their validity is demonstrated by data acquired during experiments with volunteer pilots on a human centrifuge. A new systematic class of very fast suboptimal group filters is employed. The utilization of various inherent group invariances of the signals involved allows us to solve the detection problem via estimation with respect to many performance criteria. The complexity of the procedures, in terms of the number of computer operations required for their implementation, is investigated. Various classes of such prediction procedures are investigated and analyzed, and trade-offs are established. We also investigated the validity of suboptimal filtering using different group filters for different performance criteria, namely: the number of false detections, the number of missed detections, the accuracy of detection, and the closeness of all procedures to a certain benchmark technique in terms of dispersion squared (mean square error). The results are compared to recent studies of detection of evoked potentials using estimation. The group filters compare favorably with conventional techniques in many cases with respect to the above-mentioned criteria. Their main advantage is fast computational processing.

  3. Permutation invariant polynomial neural network approach to fitting potential energy surfaces. II. Four-atom systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jun; Jiang, Bin; Guo, Hua, E-mail: hguo@unm.edu

    2013-11-28

    A rigorous, general, and simple method to fit global and permutation invariant potential energy surfaces (PESs) using neural networks (NNs) is discussed. This so-called permutation invariant polynomial neural network (PIP-NN) method imposes permutation symmetry by using in its input a set of symmetry functions based on PIPs. For systems with more than three atoms, it is shown that the number of symmetry functions in the input vector needs to be larger than the number of internal coordinates in order to include both the primary and secondary invariant polynomials. This PIP-NN method is successfully demonstrated in three atom-triatomic reactive systems, resulting in full-dimensional global PESs with average errors on the order of meV. These PESs are used in full-dimensional quantum dynamical calculations.
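
The permutation-symmetry idea behind the PIP-NN inputs can be sketched for a triatomic with two identical atoms: the network is fed symmetrized combinations of Morse-like pair variables rather than raw distances, so exchanging the identical atoms leaves the input vector unchanged. The geometry and range parameter below are hypothetical:

```python
import numpy as np

def pip_inputs(coords, a=1.0):
    """Permutation-invariant inputs for an A2B triatomic
    (identical atoms at rows 0 and 1), from Morse-like pair variables."""
    r01 = np.linalg.norm(coords[0] - coords[1])
    r02 = np.linalg.norm(coords[0] - coords[2])
    r12 = np.linalg.norm(coords[1] - coords[2])
    p01, p02, p12 = np.exp(-r01 / a), np.exp(-r02 / a), np.exp(-r12 / a)
    # Symmetrize over the 0 <-> 1 permutation: the sum and product of
    # the two equivalent pair variables, plus the invariant pair itself.
    return np.array([p02 + p12, p02 * p12, p01])

# Hypothetical, deliberately asymmetric geometry (rows = atoms, xyz).
geom = np.array([[0.0, 0.9, 0.5],
                 [0.0, -0.7, 0.6],
                 [0.0, 0.0, 0.0]])
swapped = geom[[1, 0, 2]]   # exchange the two identical atoms

assert np.allclose(pip_inputs(geom), pip_inputs(swapped))
```

The raw distances r02 and r12 differ for this geometry, yet the symmetrized inputs are identical under the swap, which is exactly the invariance the NN inherits.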

  4. [Confirmatory factor analysis of the short French version of the Center for Epidemiological Studies of Depression Scale (CES-D10) in adolescents].

    PubMed

    Cartierre, N; Coulon, N; Demerval, R

    2011-09-01

    Screening for depressivity among adolescents is a key public health priority. To measure the severity of depressive symptomatology, a four-dimensional, 20-item scale called the "Center for Epidemiological Studies-Depression Scale" (CES-D) was developed. A shorter 10-item version was later developed and validated (Andresen et al.). For this brief version, several authors supported a two-factor structure - Negative and Positive affect - but the relationship between the two reverse-worded items of the Positive affect factor could be better accounted for by correlated errors. The aim of this study is threefold: first, to test a French version of the CES-D10 among adolescents; second, to test the relevance of a one-dimensional structure by allowing error correlation for the Positive affect items; finally, to examine the extent to which this structural model is invariant across gender. The sample was composed of 269 French middle school adolescents (139 girls and 130 boys; mean age: 13.8, SD = 0.65). Confirmatory factor analyses (CFA) using LISREL 8.52 were conducted to assess the fit to the data of three factor models: a one-factor model, a two-factor model (Positive and Negative affect), and a one-factor model with correlated errors between the two reverse-worded items. A multigroup analysis was then conducted to test the scale's invariance for girls and boys. Internal consistency of the CES-D10 was satisfactory for the adolescent sample (α = 0.75). The best-fitting model is the one-factor model with correlated errors between the two items of the former Positive affect factor (χ²/df = 2.50; GFI = 0.939; CFI = 0.894; RMSEA = 0.076). This model presented a better statistical fit to the data than the one-factor model without error correlation: χ²diff(1) = 22.14, p < 0.001. The one-factor model with correlated errors was then analyzed across separate samples of girls and boys; it explains the data somewhat better for boys than for girls. The model's overall χ²(68) without equality constraints from the multigroup analysis was 107.98, and the χ²(89) statistic for the model with equality-constrained factor loadings was 121.31. The change in the overall χ² is not statistically significant, implying that the model is invariant across gender. Mean scores were higher for girls than boys: 9.69 versus 7.19; t(267) = 4.13, p < 0.001. To conclude, and pending further research using the French version of the CES-D10 for adolescents, this short scale appears generally acceptable and can be a useful tool for both research and practice. Scale invariance across gender has been demonstrated, but invariance across age must also be tested. Copyright © 2011 L'Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.
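
The reported invariance decision can be reproduced from the numbers above: the chi-square-difference statistic is 121.31 minus 107.98 = 13.33 on 89 minus 68 = 21 degrees of freedom, far from significance at the .05 level:

```python
from scipy.stats import chi2

# Multigroup invariance test reported in the abstract.
chi2_free, df_free = 107.98, 68   # factor loadings free across gender
chi2_eq, df_eq = 121.31, 89       # factor loadings constrained equal

diff = chi2_eq - chi2_free        # 13.33
df_diff = df_eq - df_free         # 21
p = chi2.sf(diff, df_diff)        # upper-tail probability

# p is far above .05, so the equality constraints are retained:
# the one-factor model is invariant across gender.
assert p > 0.05
```

Since the statistic (13.33) is well below its degrees of freedom (21), the p-value is large and the constrained model is not rejected.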

  5. Enhancing the sensitivity to new physics in the tt¯ invariant mass distribution

    NASA Astrophysics Data System (ADS)

    Álvarez, Ezequiel

    2012-08-01

    We propose selection cuts on the LHC tt¯ production sample which should enhance the sensitivity to new physics signals in the study of the tt¯ invariant mass distribution. We show that selecting events in which the tt¯ object has little transverse and large longitudinal momentum enlarges the quark-fusion fraction of the sample and therefore increases its sensitivity to new physics which couples to quarks and not to gluons. We find that systematic error bars play a fundamental role and assume a simple model for them. We check how a non-visible new particle would become visible after the selection cuts enhance its resonance bump. A final realistic analysis should be done by the experimental groups with a correct evaluation of the systematic error bars.

  6. Invariant target detection by a correlation radiometer

    NASA Astrophysics Data System (ADS)

    Murza, L. P.

    1986-12-01

    The paper is concerned with the problem of the optimal detection of a heat-emitting target by a two-channel radiometer with an unstable amplification circuit. An expression is obtained for an asymptotically sufficient detection statistic which is invariant to changes in the amplification coefficients of the channels. The algorithm proposed here can be implemented numerically using a relatively simple program.

  7. A New Strategy to Reduce Influenza Escape: Detecting Therapeutic Targets Constituted of Invariance Groups

    PubMed Central

    Lao, Julie; Vanet, Anne

    2017-01-01

    The pathogenicity of the different flu species is a real public health problem worldwide. To combat this scourge, we established a method to detect drug targets, reducing the possibility of escape. Besides being able to attach a drug candidate, these targets should have the main characteristic of being part of an essential viral function. The invariance groups that are sets of residues bearing an essential function can be detected genetically. They consist of invariant and synthetic lethal residues (interdependent residues not varying or slightly varying when together). We analyzed an alignment of more than 10,000 hemagglutinin sequences of influenza to detect six invariance groups, close in space, and on the protein surface. In parallel we identified five potential pockets on the surface of hemagglutinin. By combining these results, three potential binding sites were determined that are composed of invariance groups located respectively in the vestigial esterase domain, in the bottom of the stem and in the fusion area. The latter target is constituted of residues involved in the spring-loaded mechanism, an essential step in the fusion process. We propose a model describing how this potential target could block the reorganization of the hemagglutinin HA2 secondary structure and prevent viral entry into the host cell. PMID:28257108

  8. Flavor and topological current correlators in parity-invariant three-dimensional QED

    NASA Astrophysics Data System (ADS)

    Karthik, Nikhil; Narayanan, Rajamani

    2017-09-01

    We use lattice regularization to study the flow of the flavor-triplet fermion current central charge CJf from its free field value in the ultraviolet limit to its conformal value in the infrared limit of the parity-invariant three-dimensional QED with two flavors of two-component fermions. The dependence of CJf on the scale is weak with a tendency to be below the free field value at intermediate distances. Our numerical data suggest that the flavor-triplet fermion current and the topological current correlators become degenerate within numerical errors in the infrared limit, thereby supporting an enhanced O(4) symmetry predicted by strong self-duality. Further, we demonstrate that fermion dynamics is necessary for the scale-invariant behavior of parity-invariant three-dimensional QED by showing that the pure gauge theory with noncompact gauge action has a nonzero bilinear condensate.

  9. Time-scale invariance as an emergent property in a perceptron with realistic, noisy neurons

    PubMed Central

    Buhusi, Catalin V.; Oprisan, Sorinel A.

    2013-01-01

    In most species, interval timing is time-scale invariant: errors in time estimation scale up linearly with the estimated duration. In mammals, time-scale invariance is ubiquitous over behavioral, lesion, and pharmacological manipulations. For example, dopaminergic drugs induce an immediate, whereas cholinergic drugs induce a gradual, scalar change in timing. Behavioral theories posit that time-scale invariance derives from particular computations, rules, or coding schemes. In contrast, we discuss a simple neural circuit, the perceptron, whose output neurons fire in a clock-like fashion (interval timing) based on the pattern of coincidental activation of its input neurons. We show numerically that time-scale invariance emerges spontaneously in a perceptron with realistic neurons, in the presence of noise. Under the assumption that dopaminergic drugs modulate the firing of input neurons, and that cholinergic drugs modulate the memory representation of the criterion time, we show that a perceptron with realistic neurons reproduces the pharmacological clock and memory patterns, and their time-scale invariance, in the presence of noise. These results suggest that rather than being a signature of higher-order cognitive processes or specific computations related to timing, time-scale invariance may spontaneously emerge in a massively connected brain from the intrinsic noise of neurons and circuits, thus providing the simplest explanation for the ubiquity of scale invariance of interval timing. PMID:23518297
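
The scalar property itself can be demonstrated in a few lines: if estimation noise has a standard deviation proportional to the timed duration, the coefficient of variation stays constant across durations. This simulates only the statistical signature, not the perceptron model; all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar (time-scale invariant) timing: noise standard deviation grows
# linearly with the timed duration, so the coefficient of variation
# (CV = std / mean) is the same at every duration.
durations = np.array([2.0, 8.0, 32.0])   # hypothetical intervals, seconds
cv_true = 0.15
estimates = [d + rng.normal(0.0, cv_true * d, size=20_000)
             for d in durations]
cvs = np.array([e.std() / e.mean() for e in estimates])
```

Rescaling each response distribution by its mean superimposes the three curves, which is the classic empirical test of time-scale invariance.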

  10. A scale-invariant keypoint detector in log-polar space

    NASA Astrophysics Data System (ADS)

    Tao, Tao; Zhang, Yun

    2017-02-01

    The scale-invariant feature transform (SIFT) algorithm detects keypoints via difference-of-Gaussian (DoG) images. However, the DoG data lack high-frequency information, which can lead to a performance drop of the algorithm. To address this issue, this paper proposes a novel log-polar feature detector (LPFD) to detect scale-invariant blobs (keypoints) in log-polar space, which, in contrast, retains all the image information. The algorithm consists of three components, viz. keypoint detection, descriptor extraction, and descriptor matching. The algorithm is evaluated on keypoint detection using the INRIA dataset, in comparison with the SIFT algorithm and one of its fast versions, the speeded-up robust features (SURF) algorithm, in terms of correspondences, repeatability, correct matches, and matching score.

  11. Measurement invariance via multigroup SEM: Issues and solutions with chi-square-difference tests.

    PubMed

    Yuan, Ke-Hai; Chan, Wai

    2016-09-01

    Multigroup structural equation modeling (SEM) plays a key role in studying measurement invariance and in group comparison. When population covariance matrices are deemed not equal across groups, the next step in substantiating measurement invariance is to see whether the sample covariance matrices in all the groups can be adequately fitted by the same factor model, called configural invariance. After configural invariance is established, cross-group equalities of factor loadings, error variances, and factor variances-covariances are examined in sequence. With mean structures, cross-group equalities of intercepts and factor means are also examined. The established rule is that if the statistic for the current model is not significant at the level of .05, one moves on to testing the next, more restricted model using a chi-square-difference statistic. This article argues that such an established rule is unable to control either Type I or Type II errors. Analysis, an example, and Monte Carlo results show why and how chi-square-difference tests are easily misused. The fundamental issue is that chi-square-difference tests are developed under the assumption that the base model is sufficiently close to the population, and a nonsignificant chi-square statistic tells little about how good the model is. To overcome this issue, this article further proposes that null hypothesis testing in multigroup SEM be replaced by equivalence testing, which allows researchers to effectively control the size of misspecification before moving on to testing a more restricted model. R code is also provided to facilitate the applications of equivalence testing for multigroup SEM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  12. Understanding seasonal variability of uncertainty in hydrological prediction

    NASA Astrophysics Data System (ADS)

    Li, M.; Wang, Q. J.

    2012-04-01

    Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is combined with error models within a Bayesian joint probability framework to investigate the seasonal dependency of the prediction error structure. A seasonal-invariant error model, analogous to traditional time series analysis, uses constant parameters for the model error and accounts for no seasonal variation. In contrast, a seasonal-variant error model uses a different set of parameters for bias, variance, and autocorrelation for each individual calendar month. Potential connections among model parameters from similar months are not considered within the seasonal-variant model, which could result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on the model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model conforms better to reliability than the seasonal-invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonal-variant error model are very sensitive to each cross-validation, while the hierarchical error model produces much more robust and reliable model parameters. Furthermore, the results of the hierarchical error model show that most model parameters are not seasonally variant, except for the error bias. The seasonal-variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. This flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow predictions.
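
The CRPS used above to score probabilistic skill has a simple sample-based estimator, E|X - y| - 0.5 E|X - X'|; a minimal sketch on hypothetical ensemble forecasts:

```python
import numpy as np

def crps_ensemble(samples, obs):
    """Sample-based CRPS: E|X - y| - 0.5 * E|X - X'| (lower is better)."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.abs(samples - obs).mean()
    term2 = 0.5 * np.abs(samples[:, None] - samples[None, :]).mean()
    return term1 - term2

rng = np.random.default_rng(4)
obs = 10.0                                   # hypothetical observed flow
sharp = rng.normal(10.0, 1.0, size=2000)     # calibrated, sharp forecast
wide = rng.normal(10.0, 5.0, size=2000)      # same mean, far less sharp

assert crps_ensemble(sharp, obs) < crps_ensemble(wide, obs)
```

CRPS rewards forecasts that are both calibrated and sharp, which is why the two ensembles with identical means score so differently here.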

  13. A geometric approach to failure detection and identification in linear systems

    NASA Technical Reports Server (NTRS)

    Massoumnia, M. A.

    1986-01-01

    Using the concepts of (C,A)-invariant and unobservability (complementary observability) subspaces, a geometric formulation of the failure detection and identification filter problem is stated. Using these geometric concepts, it is shown that it is possible to design a causal linear time-invariant processor that can be used to detect and uniquely identify a component failure in a linear time-invariant system, assuming (1) that the components can fail simultaneously, and (2) that the components can fail only one at a time. In addition, a geometric formulation of Beard's failure detection filter problem is stated. This new formulation completely clarifies the concepts of output separability and mutual detectability introduced by Beard and also exploits the dual relationship between a restricted version of the failure detection and identification problem and the control decoupling problem. Moreover, the frequency-domain interpretation of the results is used to relate the concept of failure-sensitive observers to the generalized parity relations introduced by Chow. This interpretation unifies the various failure detection and identification concepts and design procedures.

  14. Scale invariant feature transform in adaptive radiation therapy: a tool for deformable image registration assessment and re-planning indication

    NASA Astrophysics Data System (ADS)

    Paganelli, Chiara; Peroni, Marta; Riboldi, Marco; Sharp, Gregory C.; Ciardo, Delia; Alterio, Daniela; Orecchia, Roberto; Baroni, Guido

    2013-01-01

    Adaptive radiation therapy (ART) aims at compensating for anatomical and pathological changes to improve delivery along a treatment fraction sequence. Current ART protocols require time-consuming manual updating of all volumes of interest on the images acquired during treatment. Deformable image registration (DIR) and contour propagation stand as the state-of-the-art methods to automate the process, but the lack of DIR quality control methods hinders their introduction into clinical practice. We investigated the scale invariant feature transform (SIFT) method as a quantitative automated tool (1) for DIR evaluation and (2) for re-planning decision-making in the framework of ART treatments. As a preliminary test, SIFT invariance properties under shape-preserving and deformable transformations were studied on a computational phantom, yielding residual matching errors below the voxel dimension. A clinical dataset composed of 19 head and neck ART patients was then used to quantify the performance in ART treatments. For goal (1), the results demonstrated SIFT's potential as an operator-independent DIR quality assessment metric: we measured DIR group systematic residual errors up to 0.66 mm, against 1.35 mm provided by rigid registration. The group systematic errors of both bony and all other structures were also analyzed, attesting to the presence of anatomical deformations. The correct automated identification of 18 patients who might benefit from ART out of the total 22 cases using SIFT demonstrated its capabilities toward achieving goal (2).

  15. A Game Theoretic Fault Detection Filter

    NASA Technical Reports Server (NTRS)

    Chung, Walter H.; Speyer, Jason L.

    1995-01-01

    The fault detection process is modelled as a disturbance attenuation problem. The solution to this problem is found via differential game theory, leading to an H(sub infinity) filter which bounds the transmission of all exogenous signals save the fault to be detected. For a general class of linear systems which includes some time-varying systems, it is shown that this transmission bound can be taken to zero by simultaneously bringing the sensor noise weighting to zero. Thus, in the limit, a complete transmission block can be achieved, making the game filter into a fault detection filter. When we specialize this result to time-invariant systems, it is found that the detection filter attained in the limit is identical to the well-known Beard-Jones fault detection filter. That is, all fault inputs other than the one to be detected (the "nuisance faults") are restricted to an invariant subspace which is unobservable to a projection on the output. For time-invariant systems, it is also shown that in the limit, the order of the state-space and the game filter can be reduced by factoring out the invariant subspace. The result is a lower dimensional filter which can observe only the fault to be detected. A reduced-order filter can also be generated for time-varying systems, though the computational overhead may be intensive. An example given at the end of the paper demonstrates the effectiveness of the filter as a tool for fault detection and identification.

  16. Research on Aircraft Target Detection Algorithm Based on Improved Radial Gradient Transformation

    NASA Astrophysics Data System (ADS)

    Zhao, Z. M.; Gao, X. M.; Jiang, D. N.; Zhang, Y. Q.

    2018-04-01

    Aiming at the problem that targets may appear at different orientations in unmanned aerial vehicle (UAV) images, target detection based on rotation-invariant features is studied, and this paper proposes a RIFF (Rotation-Invariant Fast Features) method based on lookup tables and polar-coordinate acceleration for aircraft target detection. Experiments show that the detection performance of this method is essentially equal to that of the original RIFF, while its computational efficiency is greatly improved.

  17. Optical diffraction for measurements of nano-mechanical bending

    NASA Astrophysics Data System (ADS)

    Hermans, Rodolfo I.; Dueck, Benjamin; Ndieyira, Joseph Wafula; McKendry, Rachel A.; Aeppli, Gabriel

    2016-06-01

    We explore and exploit diffraction effects that have been previously neglected when modelling optical measurement techniques for the bending of micro-mechanical transducers such as cantilevers for atomic force microscopy. The illumination of a cantilever edge causes an asymmetric diffraction pattern at the photo-detector, affecting the calibration of the measured signal in the popular optical beam deflection technique (OBDT). The conditions that avoid such detection artefacts conflict with the use of smaller cantilevers. Embracing diffraction patterns as data yields a potent detection technique that decouples tilt and curvature and simultaneously relaxes the requirements on the illumination alignment and detector position, through a measurable that is invariant to translation and rotation. We show analytical results, numerical simulations and physiologically relevant experimental data demonstrating the utility of the diffraction patterns. We offer experimental design guidelines and quantify possible sources of systematic error in OBDT. We demonstrate a new nanometre-resolution detection method that can replace OBDT, in which diffraction effects from finite-sized or patterned cantilevers are exploited. Such effects are readily generalized to cantilever arrays, and allow transmission detection of mechanical curvature, enabling instrumentation with simpler geometry. We highlight the comparative advantages over OBDT by detecting the molecular activity of the antibiotic vancomycin.

  18. Assessing Rotation-Invariant Feature Classification for Automated Wildebeest Population Counts.

    PubMed

    Torney, Colin J; Dobson, Andrew P; Borner, Felix; Lloyd-Jones, David J; Moyer, David; Maliti, Honori T; Mwita, Machoke; Fredrick, Howard; Borner, Markus; Hopcraft, J Grant C

    2016-01-01

    Accurate and on-demand animal population counts are the holy grail for wildlife conservation organizations throughout the world because they enable fast and responsive adaptive management policies. While the collection of image data from camera traps, satellites, and manned or unmanned aircraft has advanced significantly, the detection and identification of animals within images remains a major bottleneck, since counting is primarily conducted by dedicated enumerators or citizen scientists. Recent developments in the field of computer vision suggest a potential resolution to this issue through the use of rotation-invariant object descriptors combined with machine learning algorithms. Here we implement an algorithm to detect and count wildebeest from aerial images collected in the Serengeti National Park in 2009 as part of the biennial wildebeest count. We find that the per-image error rates are greater than, but comparable to, those of two separate human counts. For the total count, the algorithm is more accurate than both manual counts, suggesting that human counters have a tendency to systematically over- or under-count images. While the accuracy of the algorithm is not yet at an acceptable level for fully automatic counts, our results show this method is a promising avenue for further research, and we highlight specific areas where future research should focus in order to develop fast and accurate enumeration of aerial count data. If combined with a bespoke image collection protocol, this approach may yield a fully automated wildebeest count in the near future.

  19. Out with the old, in with the new: Assessing change in screen time when measurement changes over time.

    PubMed

    Gunnell, Katie E; Brunet, Jennifer; Bélanger, Mathieu

    2018-03-01

    We examined if screen time can be assessed over time when the measurement protocol has changed to reflect advances in technology. Beginning in 2011, 929 youth (9-12 years at time one) living in New Brunswick (Canada) self-reported the amount of time spent watching television (cycles 1-13), using computers (cycles 1-13), and playing video games (cycles 3-13). Using longitudinal invariance to test a shifting indicators model of screen time, we found that the relationships between the latent variable reflecting overall screen time and the indicators used to assess screen time were invariant across cycles (weak invariance). We also found that 31 out of 37 indicator intercepts were invariant, meaning that most indicators were answered similarly (i.e., on the same metric) across cycles (partial strong invariance), and that 28 out of 37 indicator residuals were invariant, indicating that similar sources of error were present over time (partial strict invariance). Overall, across all survey cycles, 76% of indicators were fully invariant. Whereas issues were noted when new examples of screen-based technology (e.g., iPads) were added, having established partial invariance, we suggest it is still possible to assess change in screen time despite having changing indicators over time. Although it is not possible to draw definitive conclusions concerning other self-report measures of screen time, our findings may assist other researchers considering modifying self-report measures in longitudinal studies to reflect technological advancements and increase the precision of their results.

  20. The Impact of Partial Measurement Invariance on Testing Moderation for Single and Multi-Level Data

    PubMed Central

    Hsiao, Yu-Yu; Lai, Mark H. C.

    2018-01-01

    Moderation effect is a commonly used concept in the field of social and behavioral science. Several studies regarding the implication of moderation effects have been done; however, little is known about how partial measurement invariance influences the properties of tests for moderation effects when categorical moderators are used. Additionally, whether the impact is the same across single and multilevel data is still unknown. Hence, the purpose of the present study is twofold: (a) to investigate the performance of the moderation test in single-level studies when measurement invariance does not hold; (b) to examine whether unique features of multilevel data, such as intraclass correlation (ICC) and number of clusters, influence the effect of measurement non-invariance on the performance of tests for moderation. Simulation results indicated that falsely assuming measurement invariance led to biased estimates, inflated Type I error rates, and more gain or more loss in power (depending on simulation conditions) for the test of moderation effects. Such patterns were more salient as sample size and the number of non-invariant items increased, for both single- and multi-level data. With multilevel data, cluster size seemed to have a larger impact than the number of clusters when measurement invariance was falsely assumed in the moderation estimation. ICC was trivially related to the moderation estimates. Overall, when testing moderation effects with categorical moderators, employing a model that accounts for the measurement (non)invariance structure of the predictor and/or the outcome is recommended. PMID:29867692

  2. Contributions of Invariants, Heuristics, and Exemplars to the Visual Perception of Relative Mass

    ERIC Educational Resources Information Center

    Cohen, Andrew L.

    2006-01-01

    Some potential contributions of invariants, heuristics, and exemplars to the perception of dynamic properties in the colliding balls task were explored. On each trial, an observer is asked to determine the heavier of 2 colliding balls. The invariant approach assumes that people can learn to detect complex visual patterns that reliably specify…

  3. Psychometric assessment of the processes of change scale for sun protection.

    PubMed

    Sillice, Marie A; Babbin, Steven F; Redding, Colleen A; Rossi, Joseph S; Paiva, Andrea L; Velicer, Wayne F

    2018-01-01

    The fourteen-factor Processes of Change Scale for Sun Protection assesses behavioral and experiential strategies that underlie the process of sun protection acquisition and maintenance. Variations of this measure have been used effectively in several randomized sun protection trials, both for evaluation and as a basis for intervention. However, there are no published studies, to date, that evaluate the psychometric properties of the scale. The present study evaluated factorial invariance and scale reliability in a national sample (N = 1360) of adults involved in a Transtheoretical model tailored intervention for exercise and sun protection, at baseline. Invariance testing ranged from least to most restrictive: Configural Invariance (constrains only the factor structure and zero loadings); Pattern Identity Invariance (equal factor loadings across target groups); and Strong Factorial Invariance (equal factor loadings and measurement errors). Multi-sample structural equation modeling tested the invariance of the measurement model across seven subgroups: age, education, ethnicity, gender, race, skin tone, and Stage of Change for Sun Protection. Strong factorial invariance was found across all subgroups. Internal consistency coefficient alpha and factor rho reliability were, respectively, .83 and .80 for the behavioral processes, .91 and .89 for the experiential processes, and .93 and .91 for the global scale. These results provide strong empirical evidence that the scale is consistent, has internal validity, and can be used in research interventions with population-based adult samples.

  4. Stimulus background influences phase invariant coding by correlated neural activity

    PubMed Central

    Metzen, Michael G; Chacron, Maurice J

    2017-01-01

    Previously we reported that correlations between the activities of peripheral afferents mediate a phase invariant representation of natural communication stimuli that is refined across successive processing stages thereby leading to perception and behavior in the weakly electric fish Apteronotus leptorhynchus (Metzen et al., 2016). Here, we explore how phase invariant coding and perception of natural communication stimuli are affected by changes in the sinusoidal background over which they occur. We found that increasing background frequency led to phase locking, which decreased both detectability and phase invariant coding. Correlated afferent activity was a much better predictor of behavior as assessed from both invariance and detectability than single neuron activity. Thus, our results provide not only further evidence that correlated activity likely determines perception of natural communication signals, but also a novel explanation as to why these preferentially occur on top of low frequency as well as low-intensity sinusoidal backgrounds. DOI: http://dx.doi.org/10.7554/eLife.24482.001 PMID:28315519

  5. A-Posteriori Error Estimation for Hyperbolic Conservation Laws with Constraint

    NASA Technical Reports Server (NTRS)

    Barth, Timothy

    2004-01-01

    This lecture considers a-posteriori error estimates for the numerical solution of conservation laws with time-invariant constraints, such as those arising in magnetohydrodynamics (MHD) and gravitational physics. Using standard duality arguments, a-posteriori error estimates for the discontinuous Galerkin finite element method are presented for MHD with the solenoidal constraint. From these estimates, a procedure for adaptive discretization is outlined. A taxonomy of Green's functions for the linearized MHD operator is given, which characterizes the domain of dependence for pointwise errors. The extension to other constrained systems, such as the Einstein equations of gravitational physics, is then considered. Finally, future directions and open problems are discussed.

  6. An algorithm for control system design via parameter optimization. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sinha, P. K.

    1972-01-01

    An algorithm for design via parameter optimization has been developed for linear time-invariant control systems based on the model reference adaptive control concept. A cost functional is defined to evaluate the system response relative to a nominal response; in general it involves the error between the system and nominal responses, its derivatives, and the control signals. A program for the practical implementation of this algorithm has been developed, with the computational scheme for the evaluation of the performance index based on Lyapunov's theorem for the stability of linear time-invariant systems.

  7. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Le; Zhang, Dengrong; Holden, Eun-Jung

    2008-07-01

    Automatic registration of multi-source remote-sensing images is a difficult task, as it must deal with the varying illuminations and resolutions of the images, different perspectives and the local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting their matching points using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points that are detected by the Harris corner detector. The registration process first finds, in succession, tie-point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for fast searching. Tie-point pairs with large errors are pruned by an error-checking step. The input image is then rectified using triangulated irregular networks (TINs) to deal with irregular local deformations caused by the fluctuation of the terrain. For each triangular facet of the TIN, an affine transformation is estimated and applied for rectification. Experiments with QuickBird, SPOT5, SPOT4 and TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and the accuracy of the proposed technique for multi-source remote-sensing image registration.
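The affine model used in the coarse SIFT-based pre-registration step can be fit to matched point pairs by ordinary least squares; a small numpy sketch (the point data are hypothetical, and this is not the authors' implementation):

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine transform mapping src -> dst.
    src, dst: (N, 2) arrays of matched point coordinates, N >= 3."""
    n = src.shape[0]
    # Design matrix for x' = a*x + b*y + tx and y' = c*x + d*y + ty.
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)  # [[a, b, tx], [c, d, ty]]
```

With at least three non-collinear correspondences the fit is determined; in practice an error-checking step such as the one described above would prune bad matches before or after the fit.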

  8. Circular blurred shape model for multiclass symbol recognition.

    PubMed

    Escalera, Sergio; Fornés, Alicia; Pujol, Oriol; Lladós, Josep; Radeva, Petia

    2011-04-01

    In this paper, we propose a circular blurred shape model descriptor to deal with the problem of symbol detection and classification as a particular case of object recognition. The feature extraction is performed by capturing the spatial arrangement of significant object characteristics in a correlogram structure. The shape information from objects is shared among correlogram regions, where a prior blurring degree defines the level of distortion allowed in the symbol, making the descriptor tolerant to irregular deformations. Moreover, the descriptor is rotation invariant by definition. We validate the effectiveness of the proposed descriptor in both the multiclass symbol recognition and symbol detection domains. In order to perform the symbol detection, the descriptors are learned using a cascade of classifiers. In the case of multiclass categorization, the new feature space is learned using a set of binary classifiers which are embedded in an error-correcting output code design. The results over four symbol data sets show the significant improvements of the proposed descriptor compared to the state-of-the-art descriptors. In particular, the results are even more significant in those cases where the symbols suffer from elastic deformations.

  9. Reflection symmetry detection using locally affine invariant edge correspondence.

    PubMed

    Wang, Zhaozhong; Tang, Zesheng; Zhang, Xiao

    2015-04-01

    Reflection symmetry detection has received increasing attention in recent years. State-of-the-art algorithms mainly use the matching of intensity-based features (such as SIFT) within a single image to find symmetry axes. This paper proposes a novel approach that establishes correspondences between locally affine invariant edge-based features, which are superior to intensity-based features in that they are insensitive to illumination variations and applicable to textureless objects. The local affine invariance is achieved by simple linear algebra for efficient and robust computation, making the algorithm suitable for detection under object distortions like perspective projection. Commonly used edge detectors and a voting process are used, respectively, before and after the edge description and matching steps to form a complete reflection detection pipeline. Experiments are performed using synthetic and real-world images with both multiple and single reflection symmetry axes. The test results are compared with existing algorithms to validate the proposed method.

  10. A calibration method immune to the projector errors in fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Zhang, Ruihua; Guo, Hongwei

    2017-08-01

    In the fringe projection technique, system calibration is a tedious task to establish the mapping relationship between object depths and fringe phases. In particular, it is not easy to accurately determine the parameters of the projector in this system, which may induce errors in the measurement results. To solve this problem, this paper proposes a new calibration method that uses the cross-ratio invariance in the system geometry to determine the phase-to-depth relations. In it, we analyze the epipolar geometry of the fringe projection system. On each epipolar plane, a depth variation along an incident ray induces a pixel movement along the epipolar line on the image plane of the camera. These depth variations and pixel movements are connected by projective transformations, under which the cross-ratio of each of them remains invariant. Based on this fact, we suggest measuring the depth map by use of this cross-ratio invariance. First, we shift the reference board in its perpendicular direction to three positions with known depths and measure their phase maps as the reference phase maps; second, when measuring an object, we calculate the object depth at each pixel by equating the cross-ratio of the depths to that of the corresponding pixels having the same phase on the image plane of the camera. This method is immune to errors sourced from the projector, including distortions both in the geometric shapes and in the intensity profiles of the projected fringe patterns. The experimental results demonstrate that the proposed method is feasible and valid.
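The cross-ratio invariance this calibration relies on is easy to verify numerically: any 1D projective (fractional linear) map preserves the cross-ratio of four collinear points. A small sketch with arbitrary illustrative coefficients:

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points given as scalars."""
    return ((a - c) * (b - d)) / ((b - c) * (a - d))

def projective(x, p=2.0, q=1.0, r=0.5, s=3.0):
    """A 1D projective (fractional linear) map x -> (p*x + q) / (r*x + s)."""
    return (p * x + q) / (r * x + s)
```

In the calibration described above, the four "points" on one side are the three known reference depths plus the unknown object depth, and on the other side the four corresponding same-phase pixel positions; equating the two cross-ratios solves for the depth.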

  11. SHARP ENTRYWISE PERTURBATION BOUNDS FOR MARKOV CHAINS.

    PubMed

    Thiede, Erik; VAN Koten, Brian; Weare, Jonathan

    For many Markov chains of practical interest, the invariant distribution is extremely sensitive to perturbations of some entries of the transition matrix, but insensitive to others; we give an example of such a chain, motivated by a problem in computational statistical physics. We have derived perturbation bounds on the relative error of the invariant distribution that reveal these variations in sensitivity. Our bounds are sharp, we do not impose any structural assumptions on the transition matrix or on the perturbation, and computing the bounds has the same complexity as computing the invariant distribution or computing other bounds in the literature. Moreover, our bounds have a simple interpretation in terms of hitting times, which can be used to draw intuitive but rigorous conclusions about the sensitivity of a chain to various types of perturbations.
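The quantity being bounded here, the entrywise relative error of the invariant distribution under a perturbation, can be illustrated on a toy two-state chain; a numpy sketch with illustrative rates (not the authors' statistical-physics example):

```python
import numpy as np

def stationary(P):
    """Invariant distribution of a row-stochastic matrix P: the
    eigenvector of P.T for eigenvalue 1, normalized to sum to one."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

# A nearly reducible two-state chain: both off-diagonal entries are tiny,
# so the invariant distribution hinges on their ratio.
P = np.array([[1 - 1e-4, 1e-4],
              [1e-3, 1 - 1e-3]])
pi = stationary(P)

# Perturb one tiny entry by 10 percent and recompute.
Q = P.copy()
Q[0, 0], Q[0, 1] = 1 - 1.1e-4, 1.1e-4
pi_q = stationary(Q)
rel_err = np.abs(pi_q - pi) / pi  # entrywise relative error
```

Here a 10% change in one entry of order 1e-4 moves one component of the invariant distribution by roughly 9%, while the same relative perturbation applied elsewhere can have a very different effect; this variation in sensitivity across entries is exactly what the paper's sharp bounds quantify.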

  12. Convolution neural networks for real-time needle detection and localization in 2D ultrasound.

    PubMed

    Mwikirize, Cosmas; Nosher, John L; Hacihaliloglu, Ilker

    2018-05-01

    We propose a framework for automatic and accurate detection of steeply inserted needles in 2D ultrasound data using convolution neural networks. We demonstrate its application in needle trajectory estimation and tip localization. Our approach consists of a unified network, comprising a fully convolutional network (FCN) and a fast region-based convolutional neural network (R-CNN). The FCN proposes candidate regions, which are then fed to a fast R-CNN for finer needle detection. We leverage a transfer learning paradigm, where the network weights are initialized by training with non-medical images, and fine-tuned with ex vivo ultrasound scans collected during insertion of a 17G epidural needle into freshly excised porcine and bovine tissue at depth settings up to 9 cm and [Formula: see text]-[Formula: see text] insertion angles. Needle detection results are used to accurately estimate needle trajectory from intensity invariant needle features and perform needle tip localization from an intensity search along the needle trajectory. Our needle detection model was trained and validated on 2500 ex vivo ultrasound scans. The detection system has a frame rate of 25 fps on a GPU and achieves 99.6% precision, 99.78% recall rate and an [Formula: see text] score of 0.99. Validation for needle localization was performed on 400 scans collected using a different imaging platform, over a bovine/porcine lumbosacral spine phantom. Shaft localization error of [Formula: see text], tip localization error of [Formula: see text] mm, and a total processing time of 0.58 s were achieved. The proposed method is fully automatic and provides robust needle localization results in challenging scanning conditions. The accurate and robust results coupled with real-time detection and sub-second total processing make the proposed method promising in applications for needle detection and localization during challenging minimally invasive ultrasound-guided procedures.

  13. An error criterion for determining sampling rates in closed-loop control systems

    NASA Technical Reports Server (NTRS)

    Brecher, S. M.

    1972-01-01

    The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.

  14. Detection of ferromagnetic target based on mobile magnetic gradient tensor system

    NASA Astrophysics Data System (ADS)

    Gang, Y. I. N.; Yingtang, Zhang; Zhining, Li; Hongbo, Fan; Guoquan, Ren

    2016-03-01

    Attitude changes of a mobile magnetic gradient tensor system critically affect the precision of gradient measurements, thereby increasing ambiguity in target detection. This paper presents a rotational-invariant-based method for locating and identifying ferromagnetic targets. First, the unit magnetic moment vector was derived from the geometrical invariant that the intermediate eigenvector of the magnetic gradient tensor is perpendicular to both the magnetic moment vector and the source-sensor displacement vector. Second, the unit source-sensor displacement vector was derived from the fact that the angle between the magnetic moment vector and the source-sensor displacement vector is a rotational invariant. By introducing a displacement vector between two measurement points, the magnetic moment vector and the source-sensor displacement vector were theoretically derived. To address the measurement noise present in realistic detection applications, linear equations were formulated using invariants corresponding to several distinct measurement points, and least-squares solutions for the magnetic moment vector and the source-sensor displacement vector were obtained. Results of a simulation and a principle verification experiment showed the correctness of the analytical method, along with the practicability of the least-squares method.
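The geometrical invariant invoked in the first step, that the dipole's gradient tensor has an eigenvector normal to the plane spanned by the moment and displacement vectors, can be checked numerically; a finite-difference sketch with illustrative values (the constant mu0/4pi factor is dropped):

```python
import numpy as np

def dipole_field(r, m):
    """Field of a point magnetic dipole (constant mu0/4pi factor dropped)."""
    rn = np.linalg.norm(r)
    return 3 * r * np.dot(m, r) / rn**5 - m / rn**3

def gradient_tensor(r, m, h=1e-6):
    """Central-difference magnetic gradient tensor G[i, j] = dB_i / dx_j."""
    G = np.zeros((3, 3))
    for j in range(3):
        dr = np.zeros(3)
        dr[j] = h
        G[:, j] = (dipole_field(r + dr, m) - dipole_field(r - dr, m)) / (2 * h)
    return G

m = np.array([0.0, 0.0, 1.0])   # dipole moment direction
r = np.array([2.0, 0.0, 1.0])   # source-sensor displacement
G = gradient_tensor(r, m)
n = np.cross(m, r)
n /= np.linalg.norm(n)          # unit normal to the (m, r) plane
```

Here G @ n stays parallel to n, i.e., the normal to the (m, r) plane is indeed an eigenvector of the (symmetric, traceless) tensor, which is the property the localization method exploits.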

  15. On NUFFT-based gridding for non-Cartesian MRI

    NASA Astrophysics Data System (ADS)

    Fessler, Jeffrey A.

    2007-10-01

    For MRI with non-Cartesian sampling, the conventional approach to reconstructing images is to use the gridding method with a Kaiser-Bessel (KB) interpolation kernel. Recently, Sha et al. [L. Sha, H. Guo, A.W. Song, An improved gridding method for spiral MRI using nonuniform fast Fourier transform, J. Magn. Reson. 162(2) (2003) 250-258] proposed an alternative method based on a nonuniform FFT (NUFFT) with least-squares (LS) design of the interpolation coefficients. They described this LS_NUFFT method as shift variant and reported that it yielded smaller reconstruction approximation errors than the conventional shift-invariant KB approach. This paper analyzes the LS_NUFFT approach in detail. We show that when one accounts for a certain linear phase factor, the core of the LS_NUFFT interpolator is in fact real and shift invariant. Furthermore, we find that the KB approach yields smaller errors than the original LS_NUFFT approach. We show that optimizing certain scaling factors can lead to a somewhat improved LS_NUFFT approach, but the high computation cost seems to outweigh the modest reduction in reconstruction error. We conclude that the standard KB approach, with appropriate parameters as described in the literature, remains the practical method of choice for gridding reconstruction in MRI.
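The Kaiser-Bessel kernel at the heart of the comparison above is straightforward to evaluate with numpy's built-in modified Bessel function; a sketch with generic width and beta choices (the tuned parameters from the gridding literature differ):

```python
import numpy as np

def kaiser_bessel(u, width=4.0, beta=9.0):
    """Kaiser-Bessel gridding kernel, nonzero only for |u| <= width / 2.
    np.i0 is the modified Bessel function of the first kind, order zero."""
    u = np.asarray(u, dtype=float)
    k = np.zeros_like(u)
    inside = np.abs(u) <= width / 2
    arg = 1.0 - (2.0 * u[inside] / width) ** 2
    k[inside] = np.i0(beta * np.sqrt(arg)) / np.i0(beta)
    return k
```

In gridding reconstruction, each non-Cartesian sample is spread onto the nearest grid points with these weights, and the final image is divided by the kernel's Fourier transform to undo the apodization.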

  16. Topological Quantum Phase Transition in Synthetic Non-Abelian Gauge Potential: Gauge Invariance and Experimental Detections

    PubMed Central

    Sun, Fadi; Yu, Xiao-Lu; Ye, Jinwu; Fan, Heng; Liu, Wu-Ming

    2013-01-01

    The method of synthetic gauge potentials opens up a new avenue for our understanding and discovering novel quantum states of matter. We investigate the topological quantum phase transition of Fermi gases trapped in a honeycomb lattice in the presence of a synthetic non-Abelian gauge potential. We develop a systematic fermionic effective field theory to describe a topological quantum phase transition tuned by the non-Abelian gauge potential and explore its various important experimental consequences. Numerical calculations on lattice scales are performed to compare with the results achieved by the fermionic effective field theory. Several possible experimental detection methods of topological quantum phase transition are proposed. In contrast to condensed matter experiments where only gauge invariant quantities can be measured, both gauge invariant and non-gauge invariant quantities can be measured by experimentally generating various non-Abelian gauges corresponding to the same set of Wilson loops. PMID:23846153

  17. Symmetry as Bias: Rediscovering Special Relativity

    NASA Technical Reports Server (NTRS)

    Lowry, Michael R.

    1992-01-01

    This paper describes a rational reconstruction of Einstein's discovery of special relativity, validated through an implementation: the Erlanger program. Einstein's discovery of special relativity revolutionized both the content of physics and the research strategy used by theoretical physicists. This research strategy entails a mutual bootstrapping process between a hypothesis space for biases, defined through different postulated symmetries of the universe, and a hypothesis space for physical theories. The invariance principle mutually constrains these two spaces. The invariance principle enables detecting when an evolving physical theory becomes inconsistent with its bias, and also when the biases for theories describing different phenomena are inconsistent. Structural properties of the invariance principle facilitate generating a new bias when an inconsistency is detected. After a new bias is generated, this principle facilitates reformulating the old, inconsistent theory by treating the latter as a limiting approximation. The structural properties of the invariance principle can be suitably generalized to other types of biases to enable primal-dual learning.

  18. Multiview human activity recognition system based on spatiotemporal template for video surveillance system

    NASA Astrophysics Data System (ADS)

    Kushwaha, Alok Kumar Singh; Srivastava, Rajeev

    2015-09-01

    An efficient view invariant framework for the recognition of human activities from an input video sequence is presented. The proposed framework is composed of three consecutive modules: (i) people are detected and located by background subtraction, (ii) view invariant spatiotemporal templates are created for different activities, and (iii) template matching is performed for view invariant activity recognition. The foreground objects present in a scene are extracted using change detection and background modeling. The view invariant templates are constructed using the motion history images and object shape information for different human activities in a video sequence. For matching the spatiotemporal templates for various activities, the moment invariants and Mahalanobis distance are used. The proposed approach is tested successfully on our own viewpoint dataset, the KTH action recognition dataset, the i3DPost multiview dataset, the MSR viewpoint action dataset, the VideoWeb multiview dataset, and the WVU multiview human action recognition dataset. From the experimental results and analysis over the chosen datasets, it is observed that the proposed framework is robust, flexible, and efficient with respect to multiple-view activity recognition and to scale and phase variations.
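
    Template matching by Mahalanobis distance, as in step (iii), can be sketched compactly. The feature clusters below are hypothetical stand-ins for moment-invariant vectors of two activities, not data from the paper.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance of a feature vector x from a class distribution."""
    d = np.asarray(x, float) - mean
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(42)
dim, n = 4, 500
# Hypothetical moment-invariant feature clusters for two activities
feats_a = rng.normal(0.0, 1.0, size=(n, dim))   # e.g. "walking" templates
feats_b = rng.normal(6.0, 1.0, size=(n, dim))   # e.g. "waving" templates

stats = []
for feats in (feats_a, feats_b):
    mean = feats.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(feats, rowvar=False))
    stats.append((mean, cov_inv))

query = np.zeros(dim)  # a feature vector near activity A's template
dists = [mahalanobis(query, mu, ci) for mu, ci in stats]
print(int(np.argmin(dists)))  # matches template 0
```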

  19. Rotation and scale change invariant point pattern relaxation matching by the Hopfield neural network

    NASA Astrophysics Data System (ADS)

    Sang, Nong; Zhang, Tianxu

    1997-12-01

    Relaxation matching is one of the most relevant methods for image matching. The original relaxation matching technique using point patterns is sensitive to rotations and scale changes. We improve the original point pattern relaxation matching technique to be invariant to rotations and scale changes. A method that makes the Hopfield neural network perform this matching process is discussed. An advantage of this is that the relaxation matching process can be performed in real time with the neural network's massively parallel capability to process information. Experimental results with large simulated images demonstrate the effectiveness and feasibility of the method for point pattern relaxation matching invariant to rotations and scale changes, and of performing this matching with the Hopfield neural network. In addition, we show that the method presented is tolerant to small random errors.

  20. Constraints on the invariant functions of axisymmetric turbulence

    NASA Technical Reports Server (NTRS)

    Kerschen, E. J.

    1983-01-01

    Constraints are derived for the two invariant functions Q1 and Q2 that occur in Chandrasekhar's (1950) development of the axisymmetric turbulence theory. These constraints must be satisfied for the correlation tensor derived from Q1 and Q2 to be that of a stationary random process, i.e., for the turbulence to be realizable. The equivalent results in spectrum space are also developed. Applications of the constraints in aerodynamic noise modeling are discussed. It is shown that significant errors in prediction can be introduced by the use of turbulence models which violate the constraints.

  1. Breast cancer detection via Hu moment invariant and feedforward neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaowei; Yang, Jiquan; Nguyen, Elijah

    2018-04-01

    One in eight women develops breast cancer during her lifetime. This study used Hu moment invariants and a feedforward neural network to diagnose breast cancer. With the help of K-fold cross validation, we can test the out-of-sample accuracy of our method. Finally, we found that our method can improve the accuracy of detecting breast cancer and reduce the difficulty of diagnosis.
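
    The first few Hu moment invariants can be computed directly from image moments. Below is a minimal NumPy sketch (not the authors' code) of the first four invariants, with a check that they survive a 90-degree rotation of a test image.

```python
import numpy as np

def hu_moments(img):
    """First four Hu moment invariants of a grayscale image (pure NumPy)."""
    img = np.asarray(img, float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def mu(p, q):                       # central moments
        return ((x - xc) ** p * (y - yc) ** q * img).sum()

    def eta(p, q):                      # scale-normalized central moments
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03 = eta(3, 0), eta(0, 3)
    n21, n12 = eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    return np.array([h1, h2, h3, h4])

# An asymmetric test blob: the invariants survive a 90-degree rotation
img = np.zeros((32, 32))
img[5:20, 8:15] = 1.0
img[10:14, 15:25] = 2.0
print(np.allclose(hu_moments(img), hu_moments(np.rot90(img))))  # True
```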

  2. Robust Frequency Invariant Beamforming with Low Sidelobe for Speech Enhancement

    NASA Astrophysics Data System (ADS)

    Zhu, Yiting; Pan, Xiang

    2018-01-01

    Frequency invariant beamformers (FIBs) are widely used in speech enhancement and source localization. There are two traditional optimization methods for FIB design. The first is convex optimization, which is simple, but the frequency invariant characteristic of the beam pattern is poor over a frequency band of five octaves. The least squares (LS) approach using a spatial response variation (SRV) constraint is another optimization method. Although it provides a good frequency invariant property, it usually cannot be used in speech enhancement because it lacks a weight norm constraint, which is related to the robustness of a beamformer. In this paper, a robust wideband beamforming method with a constant beamwidth is proposed. The frequency invariant beam pattern is achieved by solving an optimization problem with the SRV constraint over the speech frequency band. With control of the sidelobe level, the frequency invariant beamformer (FIB) can prevent distortion of interference from undesirable directions. The approach is implemented in the time domain by placing tapped delay lines (TDL) and a finite impulse response (FIR) filter at the output of each sensor, which is more convenient than the Frost processor. By invoking the weight norm constraint, the robustness of the beamformer is further improved against random errors. Experimental results show that the proposed method has a constant beamwidth and almost the same white noise gain as the traditional delay-and-sum (DAS) beamformer.

  3. Are patient specific meshes required for EIT head imaging?

    PubMed

    Jehl, Markus; Aristovich, Kirill; Faulkner, Mayo; Holder, David

    2016-06-01

    Head imaging with electrical impedance tomography (EIT) is usually done with time-differential measurements, to reduce time-invariant modelling errors. Previous research suggested that more accurate head models improved image quality, but no thorough analysis has been done on the required accuracy. We propose a novel pipeline for creation of precise head meshes from magnetic resonance imaging and computed tomography scans, which was applied to four different heads. Voltages were simulated on all four heads for perturbations of different magnitude, haemorrhage and ischaemia, in five different positions and for three levels of instrumentation noise. Statistical analysis showed that reconstructions on the correct mesh were on average 25% better than on the other meshes. However, the stroke detection rates were not improved. We conclude that a generic head mesh is sufficient for monitoring patients for secondary strokes following head trauma.

  4. Photoacoustic infrared spectroscopy for conducting gas tracer tests and measuring water saturations in landfills

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, Yoojin; Han, Byunghyun; Mostafid, M. Erfan

    2012-02-15

    Highlights: Photoacoustic infrared spectroscopy tested for measuring tracer gas in landfills. Measurement errors for tracer gases were 1-3% in landfill gas. Background signals from landfill gas result in elevated limits of detection. Technique is much less expensive and easier to use than GC. - Abstract: Gas tracer tests can be used to determine gas flow patterns within landfills, quantify volatile contaminant residence time, and measure water within refuse. While gas chromatography (GC) has been traditionally used to analyze gas tracers in refuse, photoacoustic spectroscopy (PAS) might allow real-time measurements with reduced personnel costs and greater mobility and ease of use. Laboratory and field experiments were conducted to evaluate the efficacy of PAS for conducting gas tracer tests in landfills. Two tracer gases, difluoromethane (DFM) and sulfur hexafluoride (SF6), were measured with a commercial PAS instrument. Relative measurement errors were invariant with tracer concentration but influenced by background gas: errors were 1-3% in landfill gas but 4-5% in air. Two partitioning gas tracer tests were conducted in an aerobic landfill, and limits of detection (LODs) were 3-4 times larger for DFM with PAS versus GC due to temporal changes in background signals. While higher LODs can be compensated for by injecting a larger tracer mass, changes in background signals increased the uncertainty in measured water saturations by up to 25% over comparable GC methods. PAS has distinct advantages over GC with respect to personnel costs and ease of use, although for field applications GC analyses of select samples are recommended to quantify instrument interferences.

  5. Magnetic Resonance Imaging–Guided versus Surrogate-Based Motion Tracking in Liver Radiation Therapy: A Prospective Comparative Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paganelli, Chiara, E-mail: chiara.paganelli@polimi.it; Seregni, Matteo; Fattori, Giovanni

    Purpose: This study applied automatic feature detection on cine-magnetic resonance imaging (MRI) liver images in order to provide a prospective comparison between MRI-guided and surrogate-based tracking methods for motion-compensated liver radiation therapy. Methods and Materials: In a population of 30 subjects (5 volunteers plus 25 patients), 2 oblique sagittal slices were acquired across the liver at high temporal resolution. An algorithm based on the scale invariant feature transform (SIFT) was used to extract and track multiple features throughout the image sequence. The position of abdominal markers was also measured directly from the image series, and the internal motion of each feature was quantified through multiparametric analysis. Surrogate-based tumor tracking with a state-of-the-art external/internal correlation model was simulated. The geometrical tracking error was measured, and its correlation with external motion parameters was also investigated. Finally, the potential gain in tracking accuracy relying on MRI guidance was quantified as a function of the maximum allowed tracking error. Results: An average of 45 features was extracted for each subject across the whole liver. The multiparametric motion analysis reported relevant inter- and intrasubject variability, highlighting the value of patient-specific and spatially distributed measurements. Surrogate-based tracking errors (relative to the motion amplitude) were in the range 7% to 23% (1.02-3.57 mm) and were significantly influenced by external motion parameters. The gain of MRI guidance compared to surrogate-based motion tracking was larger than 30% in 50% of the subjects when considering a 1.5-mm tracking error tolerance. Conclusions: Automatic feature detection applied to cine-MRI allows a detailed description of liver motion to be obtained. Such information was used to quantify the performance of surrogate-based tracking methods and to provide a prospective comparison with respect to MRI-guided radiation therapy, which could support the definition of patient-specific optimal treatment strategies.

  6. Invariants of polarization transformations.

    PubMed

    Sadjadi, Firooz A

    2007-05-20

    The use of polarization-sensitive sensors is being explored in a variety of applications. Polarization diversity has been shown to improve the performance of the automatic target detection and recognition in a significant way. However, it also brings out the problems associated with processing and storing more data and the problem of polarization distortion during transmission. We present a technique for extracting attributes that are invariant under polarization transformations. The polarimetric signatures are represented in terms of the components of the Stokes vectors. Invariant algebra is then used to extract a set of signature-related attributes that are invariant under linear transformation of the Stokes vectors. Experimental results using polarimetric infrared signatures of a number of manmade and natural objects undergoing systematic linear transformations support the invariancy of these attributes.
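
    One classical example of such an invariant attribute is the degree of polarization, which is unchanged when the polarized part (S1, S2, S3) of a Stokes vector is rotated on the Poincare sphere. The sketch below illustrates this general idea only; it is not the paper's invariant algebra, and the specific Stokes vector and rotation axis are arbitrary.

```python
import numpy as np

def degree_of_polarization(stokes):
    """DoP = sqrt(S1^2 + S2^2 + S3^2) / S0, an invariant under rotations of
    the polarized part of the Stokes vector on the Poincare sphere."""
    s0, s = stokes[0], np.asarray(stokes[1:], float)
    return float(np.linalg.norm(s) / s0)

def rotate_poincare(stokes, axis, angle):
    """Rotate (S1, S2, S3) about a given axis using Rodrigues' formula."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    s = np.asarray(stokes[1:], float)
    s_rot = (s * np.cos(angle)
             + np.cross(axis, s) * np.sin(angle)
             + axis * (axis @ s) * (1 - np.cos(angle)))
    return np.concatenate([[stokes[0]], s_rot])

S = np.array([1.0, 0.4, -0.2, 0.3])        # a partially polarized beam
S2 = rotate_poincare(S, [1.0, 2.0, -0.5], 1.1)
print(degree_of_polarization(S), degree_of_polarization(S2))  # equal
```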

  7. Sample Errors Call Into Question Conclusions Regarding Same-Sex Married Parents: A Comment on "Family Structure and Child Health: Does the Sex Composition of Parents Matter?"

    PubMed

    Paul Sullins, D

    2017-12-01

    Because of classification errors reported by the National Center for Health Statistics, an estimated 42 % of the same-sex married partners in the sample for this study are misclassified different-sex married partners, thus calling into question findings regarding same-sex married parents. Including biological parentage as a control variable suppresses same-sex/different-sex differences, thus obscuring the data error. Parentage is not appropriate as a control because it correlates nearly perfectly (+.97, gamma) with the same-sex/different-sex distinction and is invariant for the category of joint biological parents.

  8. Ranking Causal Anomalies via Temporal and Dynamical Analysis on Vanishing Correlations.

    PubMed

    Cheng, Wei; Zhang, Kai; Chen, Haifeng; Jiang, Guofei; Chen, Zhengzhang; Wang, Wei

    2016-08-01

    The modern world has witnessed a dramatic increase in our ability to collect, transmit, and distribute real-time monitoring and surveillance data from large-scale information systems and cyber-physical systems. Detecting system anomalies thus attracts a significant amount of interest in many fields such as security, fault management, and industrial optimization. Recently, the invariant network has been shown to be a powerful way of characterizing complex system behaviours. In the invariant network, a node represents a system component and an edge indicates a stable, significant interaction between two components. Structures and evolutions of the invariant network, in particular the vanishing correlations, can shed important light on locating causal anomalies and performing diagnosis. However, existing approaches to detecting causal anomalies with the invariant network often use the percentage of vanishing correlations to rank possible causal components, which has several limitations: 1) fault propagation in the network is ignored; 2) the root causal anomalies may not always be the nodes with a high percentage of vanishing correlations; 3) temporal patterns of vanishing correlations are not exploited for robust detection. To address these limitations, in this paper we propose a network-diffusion-based framework to identify significant causal anomalies and rank them. Our approach can effectively model fault propagation over the entire invariant network, and can perform joint inference on both the structural and the time-evolving broken invariance patterns. As a result, it can locate high-confidence anomalies that are truly responsible for the vanishing correlations, and can compensate for unstructured measurement noise in the system. Extensive experiments on synthetic datasets, bank information system datasets, and coal plant cyber-physical system datasets demonstrate the effectiveness of our approach.

  9. Comment on 'Controversy concerning the definition of quark and gluon angular momentum' by Elliot Leader [PRD 83, 096012 (2011)]

    NASA Astrophysics Data System (ADS)

    Lin, Huey-Wen; Liu, Keh-Fei

    2012-03-01

    It is argued by the author that the canonical form of the quark energy-momentum tensor with a partial derivative instead of the covariant derivative is the correct definition for the quark momentum and angular momentum fraction of the nucleon in covariant quantization. Although it is not manifestly gauge-invariant, its matrix elements in the nucleon will be nonvanishing and are gauge-invariant. We test this idea in the path-integral quantization by calculating correlation functions on the lattice with a gauge-invariant nucleon interpolation field and replacing the gauge link in the quark lattice momentum operator with unity, which corresponds to the partial derivative in the continuum. We find that the ratios of three-point to two-point functions are zero within errors for both the u and d quarks, contrary to the case without setting the gauge links to unity.

  10. Astigmatism error modification for absolute shape reconstruction using Fourier transform method

    NASA Astrophysics Data System (ADS)

    He, Yuhang; Li, Qiang; Gao, Bo; Liu, Ang; Xu, Kaiyuan; Wei, Xiaohong; Chai, Liqun

    2014-12-01

    A method is proposed to modify astigmatism errors in the absolute shape reconstruction of an optical flat using the Fourier transform method. If a transmission flat and a reflection flat are used in an absolute test, two translation measurements lead to the absolute shapes by making use of the characteristic relationship between the differential and original shapes in the spatial frequency domain. However, because the translation device cannot guarantee that the test and reference flats remain rigidly parallel to each other after the translations, a tilt error exists in the obtained differential data, which causes power and astigmatism errors in the reconstructed shapes. In order to modify the astigmatism errors, a rotation measurement is added. Based on the rotational invariance of the form of the Zernike polynomial in a circular domain, the astigmatism terms are calculated by solving polynomial coefficient equations related to the rotation differential data, and the astigmatism terms containing the error are subsequently corrected. Computer simulation proves the validity of the proposed method.
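
    The rotation step can be illustrated with the Zernike astigmatism pair rho^2*cos(2*theta) and rho^2*sin(2*theta), whose coefficient vector rotates by twice the part's rotation angle. The sketch below is a simplified illustration (the rotation sign convention and the numerical values are assumptions): it recovers the astigmatism coefficients from rotation-differential data by solving a small linear system.

```python
import numpy as np

def astigmatism_from_rotation(dA, dB, alpha):
    """Recover astigmatism coefficients (A, B) of rho^2*cos(2*theta) and
    rho^2*sin(2*theta) from the rotation-differential coefficients
    (dA, dB) = (A', B') - (A, B), where the part is rotated by alpha.
    Under rotation by alpha the coefficient pair transforms by R(2*alpha)."""
    c, s = np.cos(2 * alpha), np.sin(2 * alpha)
    R = np.array([[c, s], [-s, c]])   # rotation convention: an assumption
    M = R - np.eye(2)                 # (R - I) [A, B]^T = [dA, dB]^T
    return np.linalg.solve(M, [dA, dB])

# Simulated check: pick true astigmatism, rotate by alpha, form the difference
A, B, alpha = 0.8, -0.3, np.deg2rad(30)
c, s = np.cos(2 * alpha), np.sin(2 * alpha)
A_rot, B_rot = c * A + s * B, -s * A + c * B
rec = astigmatism_from_rotation(A_rot - A, B_rot - B, alpha)
print(rec)  # recovers [0.8, -0.3]
```

    Note that (R(2*alpha) - I) is singular when 2*alpha is a multiple of 2*pi, so the rotation angle must be chosen accordingly.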

  11. The Hispanic Americans Baseline Alcohol Survey (HABLAS): Predictive Invariance of Demographic Characteristics on Attitudes towards Alcohol across Hispanic National Groups

    PubMed Central

    Mills, Britain A.; Caetano, Raul; Bernstein, Ira H.

    2011-01-01

    This study compares the demographic predictors of items assessing attitudes towards drinking across Hispanic national groups. Data were from the 2006 Hispanic Americans Baseline Alcohol Survey (HABLAS), which used a multistage cluster sample design to interview 5,224 individuals randomly selected from the household population in Miami, New York, Philadelphia, Houston, and Los Angeles. Predictive invariance of demographic predictors of alcohol attitudes over four Hispanic national groups (Puerto Rican, Cuban, Mexican, and South/Central Americans) was examined using multiple-group seemingly unrelated probit regression. The analyses examined whether the influence of various demographic predictors varied across the Hispanic national groups in their regression coefficients, item intercepts, and error correlations. The hypothesis of predictive invariance was supported. Hispanic groups did not differ in how demographic predictors related to individual attitudinal items (regression slopes were invariant). In addition, the groups did not differ in attitudinal endorsement rates once demographic covariates were taken into account (item intercepts were invariant). Although Hispanic groups have different attitudes about alcohol, the influence of multiple demographic characteristics on alcohol attitudes operates similarly across Hispanic groups. Future models of drinking behavior in adult Hispanics need not posit moderating effects of group on the relation between these background characteristics and attitudes. PMID:25379120

  12. Real-time pose invariant logo and pattern detection

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Kottmann, Michal; Benesova, Wanda

    2011-01-01

    The detection of pose invariant planar patterns has many practical applications in computer vision and surveillance systems. The recognition of company logos is used in market studies to examine the visibility and frequency of logos in advertisements. Danger signs on vehicles could be detected to trigger warning systems in tunnels, or brand detection on transport vehicles can be used to count company-specific traffic. We present the results of a study on planar pattern detection which is based on keypoint detection and matching of distortion invariant 2D feature descriptors. Specifically, we look at four keypoint detectors: i) Lowe's DoG approximation from the SURF algorithm, ii) the Harris Corner Detector, iii) the FAST Corner Detector, and iv) Lepetit's keypoint detector. Our study then compares the feature descriptors SURF and compact signatures based on Random Ferns: we use 3 sets of sample images to detect and match 3 logos of different structure to find out which combinations of keypoint detector/feature descriptor work well. A real-world test tries to detect vehicles with a distinctive logo in an outdoor environment under realistic lighting and weather conditions: a camera was mounted at a suitable location for observing the entrance to a parking area so that incoming vehicles could be monitored. In this 2-hour-long recording we can successfully detect a specific company logo without false positives.
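
    Of the detectors compared above, the Harris Corner Detector is the simplest to sketch. Below is a minimal NumPy/SciPy version in its generic textbook form (not the study's implementation; the smoothing scale and the constant k are conventional choices, not values from the paper).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma=1.5, k=0.05):
    """Minimal Harris corner response R = det(M) - k * trace(M)^2, where M
    is the Gaussian-smoothed structure tensor of the image gradients."""
    img = np.asarray(img, float)
    Iy, Ix = np.gradient(img)            # row and column derivatives
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2
    tr = Sxx + Syy
    return det - k * tr ** 2

# A synthetic step corner at (20, 20): the response peaks near the corner,
# while straight edges yield negative values
img = np.zeros((40, 40))
img[20:, 20:] = 1.0
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)
print(peak)  # near (20, 20)
```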

  13. Real-time implementation of logo detection on open source BeagleBoard

    NASA Astrophysics Data System (ADS)

    George, M.; Kehtarnavaz, N.; Estevez, L.

    2011-03-01

    This paper presents the real-time implementation of our previously developed logo detection and tracking algorithm on the open source BeagleBoard mobile platform. This platform has an OMAP processor that incorporates an ARM Cortex processor. The algorithm combines Scale Invariant Feature Transform (SIFT) with k-means clustering, online color calibration and moment invariants to robustly detect and track logos in video. Various optimization steps that are carried out to allow the real-time execution of the algorithm on BeagleBoard are discussed. The results obtained are compared to the PC real-time implementation results.

  14. Pattern recognition invariant under changes of scale and orientation

    NASA Astrophysics Data System (ADS)

    Arsenault, Henri H.; Parent, Sebastien; Moisan, Sylvain

    1997-08-01

    We have used a modified version of the method proposed by Neiberg and Casasent to successfully classify five kinds of military vehicles. The method uses a wedge filter to achieve scale invariance, and lines in a multi-dimensional feature space correspond to each target with out-of-plane orientations over 360 degrees around a vertical axis. The images were not binarized, but were filtered in a preprocessing step to reduce aliasing. The feature vectors were normalized and orthogonalized by means of a neural network. Out-of-plane rotations of 360 degrees and scale changes of a factor of four were considered. Error-free classification was achieved.

  15. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
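
    The free distance central to this analysis can be read off directly for small encoders. The sketch below uses a generic rate-1/2, memory-2 encoder with octal generators (7, 5), chosen for illustration and not taken from the paper; for this code the Hamming weight of the single-'1' (impulse) codeword equals the free distance, d_free = 5.

```python
def conv_encode(bits, gens=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2 feedforward convolutional encoder; default generators are
    (7, 5) in octal, i.e. taps 111 and 101 over the current bit and the
    two-bit memory. The input is flushed with two zero bits."""
    bits = list(bits) + [0, 0]        # flush the encoder memory
    state = [0, 0]
    out = []
    for b in bits:
        window = [b] + state
        for g in gens:
            out.append(sum(gi * wi for gi, wi in zip(g, window)) % 2)
        state = [b] + state[:-1]
    return out

# Weight of the impulse codeword: 11 10 11 -> weight 5 = d_free of (7,5)
impulse = conv_encode([1])
print(impulse, sum(impulse))  # [1, 1, 1, 0, 1, 1] 5
```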

  16. A biologically inspired neural network model to transformation invariant object recognition

    NASA Astrophysics Data System (ADS)

    Iftekharuddin, Khan M.; Li, Yaqin; Siddiqui, Faraz

    2007-09-01

    Transformation invariant image recognition has been an active research area due to its widespread applications in a variety of fields such as military operations, robotics, medical practice, geographic scene analysis, and many others. The primary goal of this research is the detection of objects in the presence of image transformations such as changes in resolution, rotation, translation, scale, and occlusion. We investigate a biologically inspired neural network (NN) model for such transformation-invariant object recognition. In a classical training-testing setup for an NN, the performance is largely dependent on the range of transformation or orientation involved in training. However, an even more serious dilemma is that there may not be enough training data available for successful learning, or even no training data at all. To alleviate this problem, a biologically inspired reinforcement learning (RL) approach is proposed. In this paper, the RL approach is explored for object recognition with different types of transformations such as changes in scale, size, resolution, and rotation. The RL is implemented in an adaptive critic design (ACD) framework, which approximates the neuro-dynamic programming of an action network and a critic network, respectively. Two ACD algorithms, Heuristic Dynamic Programming (HDP) and Dual Heuristic Dynamic Programming (DHP), are investigated to obtain transformation invariant object recognition. The two learning algorithms are evaluated statistically using simulated transformations in images as well as with a large-scale UMIST face database with pose variations. In the face database authentication case, the 90° out-of-plane rotation of faces from 20 different subjects in the UMIST database is used. Our simulations show promising results for both designs for transformation-invariant object recognition and authentication of faces. Comparing the two algorithms, DHP outperforms HDP in learning capability, as DHP generally takes fewer steps to perform a successful recognition task. Further, the residual critic error in DHP is generally smaller than that of HDP, and DHP achieves a 100% success rate more frequently than HDP for individual objects/subjects. On the other hand, HDP is more robust than DHP as far as the success rate across the database is concerned when applied in a stochastic and uncertain environment, and the computational time involved in DHP is greater.

  17. Fine-Scale Population Estimation by 3D Reconstruction of Urban Residential Buildings

    PubMed Central

    Wang, Shixin; Tian, Ye; Zhou, Yi; Liu, Wenliang; Lin, Chenxi

    2016-01-01

    Fine-scale population estimation is essential in emergency response and epidemiological applications as well as urban planning and management. However, representing populations in heterogeneous urban regions with a finer resolution is a challenge. This study aims to obtain fine-scale population distribution based on 3D reconstruction of urban residential buildings with morphological operations using optical high-resolution (HR) images from the Chinese No. 3 Resources Satellite (ZY-3). Specifically, the research area was first divided into three categories when dasymetric mapping was taken into consideration. The results demonstrate that the morphological building index (MBI) yielded better results than built-up presence index (PanTex) in building detection, and the morphological shadow index (MSI) outperformed color invariant indices (CIIT) in shadow extraction and height retrieval. Building extraction and height retrieval were then combined to reconstruct 3D models and to estimate population. Final results show that this approach is effective in fine-scale population estimation, with a mean relative error of 16.46% and an overall Relative Total Absolute Error (RATE) of 0.158. This study gives significant insights into fine-scale population estimation in complicated urban landscapes, when detailed 3D information of buildings is unavailable. PMID:27775670

  18. Multiclass Classification of Cardiac Arrhythmia Using Improved Feature Selection and SVM Invariants.

    PubMed

    Mustaqeem, Anam; Anwar, Syed Muhammad; Majid, Muahammad

    2018-01-01

    Arrhythmia is considered a life-threatening disease causing serious health issues in patients, when left untreated. An early diagnosis of arrhythmias would be helpful in saving lives. This study is conducted to classify patients into one of the sixteen subclasses, among which one class represents absence of disease and the other fifteen classes represent electrocardiogram records of various subtypes of arrhythmias. The research is carried out on the dataset taken from the University of California at Irvine Machine Learning Data Repository. The dataset contains a large volume of feature dimensions which are reduced using wrapper based feature selection technique. For multiclass classification, support vector machine (SVM) based approaches including one-against-one (OAO), one-against-all (OAA), and error-correction code (ECC) are employed to detect the presence and absence of arrhythmias. The SVM method results are compared with other standard machine learning classifiers using varying parameters and the performance of the classifiers is evaluated using accuracy, kappa statistics, and root mean square error. The results show that OAO method of SVM outperforms all other classifiers by achieving an accuracy rate of 81.11% when used with 80/20 data split and 92.07% using 90/10 data split option.
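
    The one-against-one (OAO) voting scheme can be sketched compactly. In the illustration below, each pairwise SVM is replaced by a nearest-centroid rule to keep the example self-contained, and the three 2-D "arrhythmia feature" centroids are hypothetical; only the voting structure reflects the OAO scheme described above.

```python
import numpy as np
from itertools import combinations

def oao_predict(x, class_means):
    """One-against-one multiclass voting. Each pairwise 'classifier' is a
    nearest-centroid rule standing in for a binary SVM (simplification);
    the class collecting the most pairwise votes wins."""
    votes = np.zeros(len(class_means), dtype=int)
    for i, j in combinations(range(len(class_means)), 2):
        di = np.linalg.norm(x - class_means[i])
        dj = np.linalg.norm(x - class_means[j])
        votes[i if di < dj else j] += 1
    return int(np.argmax(votes))

# Three hypothetical feature-space centroids (e.g. normal plus two subtypes)
means = [np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([0.0, 4.0])]
print(oao_predict(np.array([3.6, 0.4]), means))  # 1
```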

  19. Contrast Invariant Interest Point Detection by Zero-Norm LoG Filter.

    PubMed

    Zhenwei Miao; Xudong Jiang; Kim-Hui Yap

    2016-01-01

    The Laplacian of Gaussian (LoG) filter is widely used in interest point detection. However, low-contrast image structures, though stable and significant, are often submerged by high-contrast ones in the response image of the LoG filter, and hence are difficult to detect. To solve this problem, we derive a generalized LoG filter and propose a zero-norm LoG filter. The response of the zero-norm LoG filter is proportional to the weighted number of bright/dark pixels in a local region, which makes the filter invariant to image contrast. Based on the zero-norm LoG filter, we develop an interest point detector to extract local structures from images. Compared with contrast-dependent detectors, such as the popular scale invariant feature transform detector, the proposed detector is robust to illumination changes and abrupt variations in images. Experiments on benchmark databases demonstrate the superior performance of the proposed zero-norm LoG detector in terms of the repeatability and matching score of the detected points as well as the image recognition rate under different conditions.
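
    The contrast dependence being criticized is easy to see numerically. A toy illustration (not the paper's derivation): the LoG response scales linearly with contrast, whereas a crude "zero-norm-style" response built from the sign of the local center-surround difference, which only records which pixels are bright/dark, does not.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, uniform_filter

def blob(contrast, size=64, sigma=4.0):
    y, x = np.mgrid[:size, :size] - size // 2
    return contrast * np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))

log_lo = -gaussian_laplace(blob(1.0), sigma=4.0)   # low-contrast blob
log_hi = -gaussian_laplace(blob(10.0), sigma=4.0)  # same blob, 10x contrast
print(log_hi.max() / log_lo.max())                 # ~10: response tracks contrast

def sign_response(img, sigma=4.0, win=9):
    # Binarize against the local mean (bright = +1, dark = -1), then pool;
    # the result depends only on which pixels are bright/dark, not by how much.
    s = np.sign(img - uniform_filter(img, size=win))
    return -gaussian_laplace(s, sigma=sigma)

r_lo = sign_response(blob(1.0))
r_hi = sign_response(blob(10.0))
print(np.allclose(r_lo, r_hi))                     # contrast invariant
```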

  20. SU-E-J-237: Image Feature Based DRR and Portal Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, X; Chang, J

    Purpose: Two-dimensional (2D) matching of the kV X-ray and digitally reconstructed radiography (DRR) images is an important setup technique for image-guided radiotherapy (IGRT). In our clinics, mutual information based methods are used for this purpose on commercial linear accelerators, but they often need manual corrections. This work demonstrates the feasibility of using a feature-based image transform to register kV and DRR images. Methods: The scale invariant feature transform (SIFT) method was implemented to detect matching image details (or key points) between the kV and DRR images. These key points represent high image intensity gradients, and thus scale invariant features. Due to the poor contrast of our kV images, direct application of the SIFT method yielded many detection errors. To assist the finding of key points, the center coordinates of the kV and DRR images were read from the DICOM header, and the two groups of key points with similar relative positions to their corresponding centers were paired up. Using these points, a rigid transform (with scaling, horizontal and vertical shifts) was estimated. We also artificially introduced vertical and horizontal shifts to test the accuracy of our registration method on anterior-posterior (AP) and lateral pelvic images. Results: The results provided a satisfactory overlay of the transformed kV image onto the DRR image. The introduced vs. detected shifts were fit with a linear regression. In the AP image experiments, linear regression analysis showed slopes of 1.15 and 0.98 with R2 of 0.89 and 0.99 for the horizontal and vertical shifts, respectively. The corresponding values for the lateral images were 1.2 and 1.3 with R2 of 0.72 and 0.82. Conclusion: This work provides an alternative technique for kV to DRR alignment. Further improvements in the estimation accuracy and image contrast tolerance are underway.
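
    The rigid transform described in the Methods (isotropic scale plus horizontal and vertical shifts) can be estimated from matched key-point pairs by linear least squares. A sketch on synthetic points (not the clinical data; the transform parameters are illustrative):

```python
import numpy as np

def fit_scale_shift(src, dst):
    """Fit dst ~= s * src + t (one scale s, one 2D shift t) by least squares."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    A = np.zeros((2 * n, 3))          # unknowns: [s, tx, ty]
    b = dst.reshape(-1)               # interleaved x0, y0, x1, y1, ...
    A[0::2, 0] = src[:, 0]; A[0::2, 1] = 1.0   # x' = s*x + tx
    A[1::2, 0] = src[:, 1]; A[1::2, 2] = 1.0   # y' = s*y + ty
    (s, tx, ty), *_ = np.linalg.lstsq(A, b, rcond=None)
    return s, np.array([tx, ty])

rng = np.random.default_rng(0)
kv_pts = rng.uniform(0, 512, size=(12, 2))            # matched key points (kV image)
drr_pts = 1.05 * kv_pts + np.array([3.0, -2.0])       # same points in the DRR
drr_pts += rng.normal(scale=0.3, size=drr_pts.shape)  # key-point detection noise

s, t = fit_scale_shift(kv_pts, drr_pts)
print(s, t)   # close to the true scale 1.05 and shift (3, -2)
```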

  1. Unraveling the nature of autism: finding order amid change

    PubMed Central

    Hellendoorn, Annika; Wijnroks, Lex; Leseman, Paul P. M.

    2015-01-01

    In this article, we hypothesize that individuals with autism spectrum disorder (ASD) are born with a deficit in invariance detection, a learning process whereby people and animals come to attend to the relatively stable patterns or structural regularities in the changing stimulus array. This paper synthesizes a substantial body of research suggesting that a deficit in the domain-general perceptual learning process of invariance detection in ASD can lead to a cascade of consequences in different developmental domains. We outline how this deficit can cause uncertainty, unpredictability, and a lack of control for individuals with ASD, and how varying degrees of impairment in this learning process can account for the heterogeneity of the ASD phenotype. We also describe how differences in neural plasticity in ASD underlie the impairments in perceptual learning. The present account offers an alternative to prior theories and contributes to the challenge of understanding the developmental trajectories that result in the variety of autistic behaviors. PMID:25870581

  2. Addressing the Lack of Measurement Invariance for the Measure of Acceptance of the Theory of Evolution

    NASA Astrophysics Data System (ADS)

    Wagler, Amy; Wagler, Ron

    2013-09-01

    The Measure of Acceptance of the Theory of Evolution (MATE) was constructed to be a single-factor instrument that assesses an individual's overall acceptance of evolutionary theory. The MATE was validated and the scores resulting from the MATE were found to be reliable for the population of inservice high school biology teachers. However, many studies have utilized the MATE for different populations, such as university students enrolled in a biology or genetics course, high school students, and preservice teachers. This is problematic because the dimensionality and reliability of the MATE may not be consistent across populations. It is not uncommon in science education research to find examples where scales are applied to novel populations without proper assessment of the validity and reliability. In order to illustrate this issue, a case study is presented where the dimensionality of the MATE is evaluated for a population of non-science major preservice elementary teachers. With this objective in mind, factor analytic and item response models are fit to the observed data to provide evidence for or against a one-dimensional latent structure and to detect which items do not conform to the theoretical construct for this population. The results of this study call into question any findings and conclusions made using the MATE for a Hispanic population of preservice teachers and point out the error of assuming invariance across substantively different populations.

  3. Comparing the Psychometric Properties of Two Physical Activity Self-Efficacy Instruments in Urban, Adolescent Girls: Validity, Measurement Invariance, and Reliability

    PubMed Central

    Voskuil, Vicki R.; Pierce, Steven J.; Robbins, Lorraine B.

    2017-01-01

    Aims: This study compared the psychometric properties of two self-efficacy instruments related to physical activity. Factorial validity, cross-group and longitudinal invariance, and composite reliability were examined. Methods: Secondary analysis was conducted on data from a group randomized controlled trial investigating the effect of a 17-week intervention on increasing moderate to vigorous physical activity among 5th–8th grade girls (N = 1,012). Participants completed a 6-item Physical Activity Self-Efficacy Scale (PASE) and a 7-item Self-Efficacy for Exercise Behaviors Scale (SEEB) at baseline and post-intervention. Confirmatory factor analyses for intervention and control groups were conducted with Mplus Version 7.4 using robust weighted least squares estimation. Model fit was evaluated with the chi-square index, comparative fit index, and root mean square error of approximation. Composite reliability for latent factors with ordinal indicators was computed from Mplus output using SAS 9.3. Results: Mean age of the girls was 12.2 years (SD = 0.96). One-third of the girls were obese. Girls represented a diverse sample with over 50% indicating black race and an additional 19% identifying as mixed or other race. Both instruments demonstrated configural invariance for simultaneous analysis of cross-group and longitudinal invariance based on alternative fit indices. However, simultaneous metric invariance was not met for the PASE or the SEEB instruments. Partial metric invariance for the simultaneous analysis was achieved for the PASE with one factor loading identified as non-invariant. Partial metric invariance was not met for the SEEB. Longitudinal scalar invariance was achieved for both instruments in the control group but not the intervention group. Composite reliability for the PASE ranged from 0.772 to 0.842. Reliability for the SEEB ranged from 0.719 to 0.800 indicating higher reliability for the PASE. 
Reliability was more stable over time in the control group for both instruments. Conclusions: Results suggest that the intervention influenced how girls responded to indicator items. Neither of the instruments achieved simultaneous metric invariance making it difficult to assess mean differences in PA self-efficacy between groups. PMID:28824487

  4. Event-triggered attitude control of spacecraft

    NASA Astrophysics Data System (ADS)

    Wu, Baolin; Shen, Qiang; Cao, Xibin

    2018-02-01

    The problem of spacecraft attitude stabilization under limited communication and external disturbances is investigated based on an event-triggered control scheme. In the proposed scheme, attitude and control torque information needs to be transmitted only at discrete triggering times, when a defined measurement error exceeds a state-dependent threshold. The proposed control scheme not only guarantees that spacecraft attitude control errors converge to a small invariant set containing the origin, but also ensures that there is no accumulation of triggering instants. The performance of the proposed control scheme is demonstrated through numerical simulation.
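
    The triggering idea generalizes well beyond attitude dynamics. A minimal scalar illustration (gains, threshold, and plant are illustrative, not the paper's spacecraft model): the actuator holds the last transmitted control, and a new value is sent only when the measurement error exceeds a state-dependent threshold (plus a small floor to rule out accumulation of triggers).

```python
# Event-triggered stabilization of a scalar unstable plant x_{t+1} = a*x + u.
a, k = 1.1, -0.9        # plant pole and feedback gain (u = k * last sent state)
c, eps = 0.2, 1e-3      # trigger when |x - x_hold| > c*|x| + eps
x, x_hold = 5.0, 5.0
triggers = 0
for t in range(200):
    if abs(x - x_hold) > c * abs(x) + eps:
        x_hold = x      # transmit the current state; controller updates
        triggers += 1
    u = k * x_hold      # actuator holds the last transmitted value
    x = a * x + u
print(abs(x), triggers)  # state settles into a small set; far fewer than 200 transmissions
```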

  5. Failure detection and identification

    NASA Technical Reports Server (NTRS)

    Massoumnia, Mohammad-Ali; Verghese, George C.; Willsky, Alan S.

    1989-01-01

    Using the geometric concept of an unobservability subspace, a solution is given to the problem of detecting and identifying control system component failures in linear, time-invariant systems. Conditions are developed for the existence of a causal, linear, time-invariant processor that can detect and uniquely identify a component failure, first for the case where components can fail simultaneously, and then for the case where they fail only one at a time. Explicit design algorithms are provided when these conditions are satisfied. In addition to time-domain solvability conditions, frequency-domain interpretations of the results are given, and connections are drawn with results already available in the literature.
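
    For intuition only: the simplest residual generator behind such schemes is a state observer whose output error stays near zero for the healthy plant and becomes persistently nonzero under a component failure. The sketch below is a plain Luenberger observer with illustrative matrices; it detects a fault but does not perform the unobservability-subspace design that additionally isolates which component failed.

```python
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])   # discrete-time LTI plant (illustrative)
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.1]])             # observer gain; A - L C is stable

x = np.array([1.0, -1.0])                # true state
xh = np.zeros(2)                         # observer state
residual = []
for t in range(100):
    fault = np.array([0.5, 0.0]) if t >= 50 else np.zeros(2)
    y = C @ x
    r = y - C @ xh                       # residual: ~0 when healthy
    residual.append(float(r[0]))
    xh = A @ xh + (L @ r).ravel()        # observer update
    x = A @ x + fault                    # constant actuator fault from t = 50
print(abs(residual[49]), abs(residual[-1]))  # tiny before the fault, large after
```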

  6. Design and test of a situation-augmented display for an unmanned aerial vehicle monitoring task.

    PubMed

    Lu, Jen-Li; Horng, Ruey-Yun; Chao, Chin-Jung

    2013-08-01

    In this study, a situation-augmented display for unmanned aerial vehicle (UAV) monitoring was designed, and its effects on operator performance and mental workload were examined. The display design was augmented with the knowledge that there is an invariant flight trajectory (formed by the relationship between altitude and velocity) for every flight, from takeoff to landing. 56 participants were randomly assigned to the situation-augmented display or a conventional display condition to work on 4 (number of abnormalities) x 2 (noise level) UAV monitoring tasks three times. Results showed that the effects of situation-augmented display on flight completion time and time to detect abnormalities were robust under various workload conditions, but error rate and perceived mental workload were unaffected by the display type. Results suggest that the UAV monitoring task is extremely difficult, and that display devices providing high-level situation-awareness may improve operator monitoring performance.

  7. Efficient moving target analysis for inverse synthetic aperture radar images via joint speeded-up robust features and regular moment

    NASA Astrophysics Data System (ADS)

    Yang, Hongxin; Su, Fulin

    2018-01-01

    We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moment in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Different from traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search the isotropic feature points to link the image sequences but also reduce the error matching pairs. After that, the target centroid is detected by regular moment. Consequently, a cost function based on correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.
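
    The centroid-detection step uses the standard regular (geometric) moment formulas, x_c = m10/m00 and y_c = m01/m00. A minimal sketch on a toy intensity image:

```python
import numpy as np

def centroid(img):
    """Intensity centroid from regular image moments m00, m10, m01."""
    img = np.asarray(img, float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    m10 = (x * img).sum()   # first-order moments
    m01 = (y * img).sum()
    return m10 / m00, m01 / m00   # (x_c, y_c)

img = np.zeros((64, 64))
img[20:30, 40:50] = 1.0           # bright 10x10 "target" patch
print(centroid(img))              # -> (44.5, 24.5), the patch center
```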

  8. Spectral-Spatial Scale Invariant Feature Transform for Hyperspectral Images.

    PubMed

    Al-Khafaji, Suhad Lateef; Jun Zhou; Zia, Ali; Liew, Alan Wee-Chung

    2018-02-01

    Spectral-spatial feature extraction is an important task in hyperspectral image processing. In this paper we propose a novel method to extract distinctive invariant features from hyperspectral images for registration of hyperspectral images with different spectral conditions. Different spectral conditions mean images captured under different incident lights, from different viewing angles, or with different hyperspectral cameras. They also include images of objects with the same shape but different materials. This method, named spectral-spatial scale invariant feature transform (SS-SIFT), explores the spectral and spatial dimensions simultaneously to extract features invariant to spectral and geometric transformations. Similar to the classic SIFT algorithm, SS-SIFT consists of keypoint detection and descriptor construction steps. Keypoints are extracted from a spectral-spatial scale space and are detected as extrema after a 3D difference of Gaussians is applied to the data cube. Two descriptors are proposed for each keypoint by exploring the distribution of spectral-spatial gradient magnitudes in its local 3D neighborhood. The effectiveness of the SS-SIFT approach is validated on images collected under different light conditions, with different geometric projections, and using two hyperspectral cameras with different spectral wavelength ranges and resolutions. The experimental results show that our method generates robust invariant features for spectral-spatial image matching.
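
    The keypoint-detection step (extrema of a 3D difference of Gaussians over the data cube) can be sketched on a toy cube. This is a simplified stand-in for the full SS-SIFT scale space, with made-up sigmas and a planted blob:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

# Toy hyperspectral cube: rows x cols x bands, with one spectral-spatial "blob".
cube = np.zeros((32, 32, 16))
cube[16, 16, 8] = 1.0
cube = gaussian_filter(cube, sigma=2.0)

# 3D difference of Gaussians, then local maxima above a response threshold.
dog = gaussian_filter(cube, 1.0) - gaussian_filter(cube, 2.0)
local_max = (dog == maximum_filter(dog, size=3)) & (dog > 0.5 * dog.max())
peaks = np.argwhere(local_max)
print(peaks)   # the planted blob center (16, 16, 8)
```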

  9. Quantum image coding with a reference-frame-independent scheme

    NASA Astrophysics Data System (ADS)

    Chapeau-Blondeau, François; Belin, Etienne

    2016-07-01

    For binary images, or bit planes of non-binary images, we investigate the possibility of a quantum coding decodable by a receiver in the absence of reference frames shared with the emitter. Direct image coding with one qubit per pixel and non-aligned frames leads to decoding errors equivalent to a quantum bit-flip noise increasing with the misalignment. We show the feasibility of frame-invariant coding by using for each pixel a qubit pair prepared in one of two controlled entangled states. With just one common axis shared between the emitter and receiver, exact decoding for each pixel can be obtained by means of two two-outcome projective measurements operating separately on each qubit of the pair. With strictly no alignment information between the emitter and receiver, exact decoding can be obtained by means of a two-outcome projective measurement operating jointly on the qubit pair. In addition, the frame-invariant coding is shown much more resistant to quantum bit-flip noise compared to the direct non-invariant coding. For a cost per pixel of two (entangled) qubits instead of one, complete frame-invariant image coding and enhanced noise resistance are thus obtained.

  10. Cross-cultural adaptation of the Health Education Impact Questionnaire: experimental study showed expert committee, not back-translation, added value.

    PubMed

    Epstein, Jonathan; Osborne, Richard H; Elsworth, Gerald R; Beaton, Dorcas E; Guillemin, Francis

    2015-04-01

    To assess the contribution of back-translation and expert committee to the content and psychometric properties of a translated multidimensional questionnaire. Recommendations for questionnaire translation include back-translation and expert committee, but their contribution to measurement properties is unknown. Four English to French translations of the Health Education Impact Questionnaire were generated with and without committee or back-translation. Face validity, acceptability, and structural properties were compared after random assignment to people with rheumatoid arthritis (N = 1,168), chronic renal failure (N = 2,368), and diabetes (N = 538). For face validity, 15 bilingual people compared the quality of the translations with the original. Psychometric properties were examined using confirmatory factor analysis (metric and scalar invariance) and item response theory. Qualitatively, there were five types of translation errors: style, intensity, frequency/time frame, breadth, and meaning. Bilingual assessors ranked the translations with committee best (P = 0.0026). All translations had good structural properties (root mean square error of approximation <0.05; comparative fit index [CFI], ≥0.899; and Tucker-Lewis index, ≥0.889). Full measurement invariance was observed between translations (ΔCFI ≤ 0.01) with metric invariance between translations and original (lowest ΔCFI = 0.022 between fully constrained models and models with free intercepts). Item characteristic curve analyses revealed no significant differences. This is the first experimental evidence that back-translation has moderate impact, whereas an expert committee helps to ensure accurate content. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Minimum mean squared error (MSE) adjustment and the optimal Tykhonov-Phillips regularization parameter via reproducing best invariant quadratic uniformly unbiased estimates (repro-BIQUUE)

    NASA Astrophysics Data System (ADS)

    Schaffrin, Burkhard

    2008-02-01

    In a linear Gauss-Markov model, the parameter estimates from BLUUE (Best Linear Uniformly Unbiased Estimate) are not robust against possible outliers in the observations. Moreover, by giving up the unbiasedness constraint, the mean squared error (MSE) risk may be further reduced, in particular when the problem is ill-posed. In this paper, the α-weighted S-homBLE (Best homogeneously Linear Estimate) is derived via formulas originally used for variance component estimation on the basis of the repro-BIQUUE (reproducing Best Invariant Quadratic Uniformly Unbiased Estimate) principle in a model with stochastic prior information. In the present model, however, such prior information is not included, which allows the comparison of the stochastic approach (α-weighted S-homBLE) with the well-established algebraic approach of Tykhonov-Phillips regularization, also known as R-HAPS (Hybrid APproximation Solution), whenever the inverse of the “substitute matrix” S exists and is chosen as the R matrix that defines the relative impact of the regularizing term on the final result.
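
    The α-weighted S-homBLE and the repro-BIQUUE machinery are beyond a short sketch, but the core trade-off the abstract describes, giving up unbiasedness to reduce MSE risk on an ill-posed problem, is exactly what Tykhonov-Phillips (ridge) regularization does in its simplest form. All values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 5
X = rng.normal(size=(n, p))
X[:, 4] = X[:, 3] + 1e-4 * rng.normal(size=n)   # near-collinear columns: ill-posed
beta = np.array([1.0, -2.0, 0.5, 1.0, 1.0])
y = X @ beta + 0.1 * rng.normal(size=n)

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]    # unbiased but unstable (BLUUE analogue)
lam = 0.1                                        # regularization weight (ad hoc here;
                                                 # the paper derives it via repro-BIQUUE)
b_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
print(np.linalg.norm(b_ols), np.linalg.norm(b_ridge))  # ridge shrinks the estimate
```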

  12. Invariants for correcting field polarisation effect in MT-VLF resistivity mapping

    NASA Astrophysics Data System (ADS)

    Guérin, Roger; Tabbagh, Alain; Benderitter, Yves; Andrieux, Pierre

    1994-12-01

    MT-VLF resistivity mapping is well suited to hydrological and environmental studies. However, the apparent anisotropy generated by the polarisation of the primary field requires the use of two transmitters at a right angle to each other in order to prevent errors in interpretation. We propose a processing technique that uses approximate invariants derived from classical developments in tensor magnetotellurics. They consist of the calculation at each station of ?. Both synthetic and field cases show that they give identical results and correct perfectly for the apparent anisotropy generated by the polarisation of the transmitted field. They should be preferred to verticalization of the electric field, which remains of interest when only transmitter data are available.

  13. Infrared Ship Classification Using A New Moment Pattern Recognition Concept

    NASA Astrophysics Data System (ADS)

    Casasent, David; Pauly, John; Fetterly, Donald

    1982-03-01

    An analysis of the statistics of the moments and the conventional invariant moments shows that the variance of the latter becomes quite large as the order of the moments and the degree of invariance increase. Moreover, the need to whiten the error volume increases with the order and degree, but so does the computational load associated with computing the whitening operator. We thus advance a new estimation approach to the use of moments in pattern recognition that overcomes these problems. This work is supported by experimental verification and demonstration on an infrared ship pattern recognition problem. The computational load associated with our new algorithm is also shown to be very low.
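
    For reference, the conventional construction the abstract critiques (not the paper's new estimator) starts from normalized central moments; the first Hu invariant is phi1 = eta20 + eta02, which is unchanged by translation and rotation:

```python
import numpy as np

def phi1(img):
    """First Hu invariant moment, phi1 = eta20 + eta02."""
    img = np.asarray(img, float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00   # centroid
    mu20 = (((x - xc) ** 2) * img).sum()                     # central moments
    mu02 = (((y - yc) ** 2) * img).sum()
    return (mu20 + mu02) / m00 ** 2   # normalized: eta_pq = mu_pq / m00^2 for p+q = 2

img = np.zeros((32, 32))
img[6:18, 10:28] = 1.0                # an off-center rectangle
# Invariant under 90-degree rotation and under translation:
print(phi1(img), phi1(np.rot90(img)), phi1(np.roll(img, 3, axis=1)))
```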

  14. Remote sensing depth invariant index parameters in shallow benthic habitats for bottom type classification.

    NASA Astrophysics Data System (ADS)

    Gapper, J.; El-Askary, H. M.; Linstead, E.

    2017-12-01

    Ground cover prediction of benthic habitats using remote sensing imagery requires substantial feature engineering. Artifacts that confound the ground cover characteristics must be severely reduced or eliminated while the distinguishing features must be exposed. In particular, the impact of wavelength attenuation in the water column means that a machine learning algorithm will primarily detect depth. However, per-pixel depths are difficult to know on a grand scale. Previous research has taken an in situ approach, applying a depth invariant index to a small area of interest within a Landsat 8 scene. We aim to abstract this process for application to an entire Landsat scene, as well as to other locations, in order to study change detection in shallow benthic zones on a global scale. We have developed a methodology and applied it to more than 25 different Landsat 8 scenes. The images were first preprocessed to mask land, clouds, and other distortions; then atmospheric correction via dark-pixel subtraction was applied. Finally, depth invariant indices were calculated for each location and the associated parameters recorded. Findings showed how robust the resulting parameters (deep-water radiance, depth invariant constant, band radiance variance/covariance, and ratio of attenuation) were across all scenes. We then created false color composite images of the depth invariant indices for each location. We noted several artifacts within some sites in the form of patterns or striations that did not appear to be aligned with variations in subsurface ground cover types. Further research into depth surveys for these sites revealed depths consistent with one or more wavelengths fully attenuating. This result showed that our model framework generalizes well but is limited to the penetration depths imposed by wavelength attenuation.
Finally, we compared the parameters associated with the depth invariant calculation which were consistent across most scenes and explained any outliers observed. We concluded that the depth invariant index framework can be deployed on a large scale for ground cover detection in shallow waters (less than 16.8m or 5.2m for three DII measurements).
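
    The depth invariant index assumed here is the standard Lyzenga-style two-band construction: after subtracting the deep-water radiance, DII_ij = ln(L_i - Ls_i) - (k_i/k_j) ln(L_j - Ls_j), where in practice the attenuation ratio k_i/k_j is estimated from the band variance/covariance over a uniform bottom (the parameters the abstract lists). With synthetic radiances the depth dependence cancels exactly (all constants illustrative):

```python
import numpy as np

ki, kj = 0.12, 0.30            # effective attenuation coefficients (illustrative)
Lsi, Lsj = 5.0, 3.0            # deep-water radiances (illustrative)

def radiance(bottom_refl, k, Ls, depth):
    """Simple two-way exponential attenuation model over a bottom type."""
    return Ls + bottom_refl * np.exp(-2.0 * k * depth)

def dii(Li, Lj):
    """Lyzenga-style depth invariant index between bands i and j."""
    return np.log(Li - Lsi) - (ki / kj) * np.log(Lj - Lsj)

# The same sand-like bottom viewed at two depths gives the same index.
d_shallow = dii(radiance(40.0, ki, Lsi, 2.0), radiance(35.0, kj, Lsj, 2.0))
d_deep = dii(radiance(40.0, ki, Lsi, 8.0), radiance(35.0, kj, Lsj, 8.0))
print(d_shallow, d_deep)       # equal: the depth term cancels
```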

  15. Modeling the Violation of Reward Maximization and Invariance in Reinforcement Schedules

    PubMed Central

    La Camera, Giancarlo; Richmond, Barry J.

    2008-01-01

    It is often assumed that animals and people adjust their behavior to maximize reward acquisition. In visually cued reinforcement schedules, monkeys make errors in trials that are not immediately rewarded, despite having to repeat error trials. Here we show that error rates are typically smaller in trials equally distant from reward but belonging to longer schedules (referred to as “schedule length effect”). This violates the principles of reward maximization and invariance and cannot be predicted by the standard methods of Reinforcement Learning, such as the method of temporal differences. We develop a heuristic model that accounts for all of the properties of the behavior in the reinforcement schedule task but whose predictions are not different from those of the standard temporal difference model in choice tasks. In the modification of temporal difference learning introduced here, the effect of schedule length emerges spontaneously from the sensitivity to the immediately preceding trial. We also introduce a policy for general Markov Decision Processes, where the decision made at each node is conditioned on the motivation to perform an instrumental action, and show that the application of our model to the reinforcement schedule task and the choice task are special cases of this general theoretical framework. Within this framework, Reinforcement Learning can approach contextual learning with the mixture of empirical findings and principled assumptions that seem to coexist in the best descriptions of animal behavior. As examples, we discuss two phenomena observed in humans that often derive from the violation of the principle of invariance: “framing,” wherein equivalent options are treated differently depending on the context in which they are presented, and the “sunk cost” effect, the greater tendency to continue an endeavor once an investment in money, effort, or time has been made. 
The schedule length effect might be a manifestation of these phenomena in monkeys. PMID:18688266
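
    The standard temporal-difference baseline that the paper argues cannot produce the schedule length effect looks like this: a toy TD(0) learner on a three-trial schedule where only the final trial is rewarded (illustrative parameters, not the paper's modified model). Value rises as reward approaches, so standard TD predicts error rates that depend only on distance to reward, not on schedule length.

```python
import numpy as np

alpha, gamma = 0.1, 0.9
V = np.zeros(4)              # V[s]: value when s trials remain; V[0] is terminal
for episode in range(2000):
    for s in (3, 2, 1):      # progress through the schedule toward reward
        r = 1.0 if s == 1 else 0.0
        # TD(0) update: V(s) <- V(s) + alpha * (r + gamma*V(s') - V(s))
        V[s] += alpha * (r + gamma * V[s - 1] - V[s])
print(V[3], V[2], V[1])      # value climbs as reward nears: gamma^2, gamma, 1
```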

  16. Modeling the violation of reward maximization and invariance in reinforcement schedules.

    PubMed

    La Camera, Giancarlo; Richmond, Barry J

    2008-08-08

    It is often assumed that animals and people adjust their behavior to maximize reward acquisition. In visually cued reinforcement schedules, monkeys make errors in trials that are not immediately rewarded, despite having to repeat error trials. Here we show that error rates are typically smaller in trials equally distant from reward but belonging to longer schedules (referred to as "schedule length effect"). This violates the principles of reward maximization and invariance and cannot be predicted by the standard methods of Reinforcement Learning, such as the method of temporal differences. We develop a heuristic model that accounts for all of the properties of the behavior in the reinforcement schedule task but whose predictions are not different from those of the standard temporal difference model in choice tasks. In the modification of temporal difference learning introduced here, the effect of schedule length emerges spontaneously from the sensitivity to the immediately preceding trial. We also introduce a policy for general Markov Decision Processes, where the decision made at each node is conditioned on the motivation to perform an instrumental action, and show that the application of our model to the reinforcement schedule task and the choice task are special cases of this general theoretical framework. Within this framework, Reinforcement Learning can approach contextual learning with the mixture of empirical findings and principled assumptions that seem to coexist in the best descriptions of animal behavior. As examples, we discuss two phenomena observed in humans that often derive from the violation of the principle of invariance: "framing," wherein equivalent options are treated differently depending on the context in which they are presented, and the "sunk cost" effect, the greater tendency to continue an endeavor once an investment in money, effort, or time has been made. 
The schedule length effect might be a manifestation of these phenomena in monkeys.

  17. Gauge-invariance and infrared divergences in the luminosity distance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biern, Sang Gyu; Yoo, Jaiyul, E-mail: sgbiern@physik.uzh.ch, E-mail: jyoo@physik.uzh.ch

    2017-04-01

    Measurements of the luminosity distance have played a key role in discovering the late-time cosmic acceleration. However, when accounting for inhomogeneities in the Universe, its interpretation has been plagued with infrared divergences in its theoretical predictions, which are in some cases used to explain the cosmic acceleration without dark energy. The infrared divergences in most calculations are artificially removed by imposing an infrared cut-off scale. We show that a gauge-invariant calculation of the luminosity distance is devoid of such divergences and consistent with the equivalence principle, eliminating the need to impose a cut-off scale. We present proper numerical calculations of the luminosity distance using the gauge-invariant expression and demonstrate that the numerical results with an ad hoc cut-off scale in previous calculations have negligible systematic errors as long as the cut-off scale is larger than the horizon scale. We discuss the origin of infrared divergences and their cancellation in the luminosity distance.

  18. A permutationally invariant full-dimensional ab initio potential energy surface for the abstraction and exchange channels of the H + CH4 system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jun, E-mail: jli15@cqu.edu.cn, E-mail: zhangdh@dicp.ac.cn; Department of Chemistry and Chemical Biology, University of New Mexico, Albuquerque, New Mexico 87131; Chen, Jun

    2015-05-28

    We report a permutationally invariant global potential energy surface (PES) for the H + CH4 system based on ∼63 000 data points calculated at a high ab initio level (UCCSD(T)-F12a/AVTZ) using the recently proposed permutation invariant polynomial-neural network method. The small fitting error (5.1 meV) indicates a faithful representation of the ab initio points over a large configuration space. The rate coefficients calculated on the PES using tunneling corrected transition-state theory and quasi-classical trajectory are found to agree well with the available experimental and previous quantum dynamical results. The calculated total reaction probabilities (J_tot = 0) including the abstraction and exchange channels using the new potential by a reduced dimensional quantum dynamic method are essentially the same as those on the Xu-Chen-Zhang PES [Chin. J. Chem. Phys. 27, 373 (2014)].
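
    For intuition only (this is not the paper's permutation invariant polynomial-neural network fit): the simplest permutation-invariant featurization of an H + CH4 geometry sorts the interatomic distances so that the fitted energy cannot depend on how the five identical H atoms are labeled.

```python
import numpy as np

def invariant_features(C, H):
    """C: (3,) carbon position; H: (5,3) hydrogen positions.
    Returns features invariant to relabeling the five H atoms."""
    ch = np.sort(np.linalg.norm(H - C, axis=1))                  # 5 C-H distances, sorted
    hh = np.sort([np.linalg.norm(H[i] - H[j])                    # 10 H-H distances, sorted
                  for i in range(5) for j in range(i + 1, 5)])
    return np.concatenate([ch, hh])

rng = np.random.default_rng(0)
C = np.zeros(3)
H = rng.normal(scale=1.5, size=(5, 3))          # an arbitrary geometry
f1 = invariant_features(C, H)
f2 = invariant_features(C, H[rng.permutation(5)])   # relabel the H atoms
print(np.allclose(f1, f2))                      # features unchanged
```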

  19. Position, rotation, and intensity invariant recognizing method

    DOEpatents

    Ochoa, Ellen; Schils, George F.; Sweeney, Donald W.

    1989-01-01

    A method for recognizing the presence of a particular target in a field of view which is target position, rotation, and intensity invariant includes preparing a target-specific invariant filter from a combination of all eigenmodes of a pattern of the particular target. Coherent radiation from the field of view is then imaged into an optical correlator in which the invariant filter is located. The invariant filter is rotated in the frequency plane of the optical correlator in order to produce a constant-amplitude rotational response in a correlation output plane when the particular target is present in the field of view. Any constant response is thus detected in the output plane. The U.S. Government has rights in this invention pursuant to Contract No. DE-AC04-76DP00789 between the U.S. Department of Energy and AT&T Technologies, Inc.

  20. Sampling Technique for Robust Odorant Detection Based on MIT RealNose Data

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    2012-01-01

    This technique enhances the detection capability of the autonomous RealNose system from MIT to detect odorants and their concentrations in noisy and transient environments. The low-cost, portable system with low power consumption operates at high speed and is suited for unmanned and remotely operated long-life applications. A deterministic mathematical model was developed to detect odorants and calculate their concentrations in noisy environments. Real data from MIT's NanoNose was examined, from which a signal conditioning technique was proposed to enable robust odorant detection for the RealNose system. Its sensitivity can reach sub-part-per-billion (sub-ppb) levels. A Space Invariant Independent Component Analysis (SPICA) algorithm was developed to handle non-linear mixing in the over-complete case, and it is used as a preprocessing step to recover the original odorant sources for detection. This approach, combined with the Cascade Error Projection (CEP) neural network algorithm, was used to perform odorant identification. Signal conditioning is used to identify potential processing windows to enable robust detection for autonomous systems. So far, the software has been developed and evaluated with the current data sets provided by the MIT team. However, continuous data streams are made available in which even the occurrence of a new odorant is unannounced and needs to be noticed by the system autonomously before its unambiguous detection. The challenge for the software is to separate the potential valid signal of the odorant from the noisy transition region when the odorant is first introduced.

  1. Detecting Differential Item Discrimination (DID) and the Consequences of Ignoring DID in Multilevel Item Response Models

    ERIC Educational Resources Information Center

    Lee, Woo-yeol; Cho, Sun-Joo

    2017-01-01

    Cross-level invariance in a multilevel item response model can be investigated by testing whether the within-level item discriminations are equal to the between-level item discriminations. Testing the cross-level invariance assumption is important to understand constructs in multilevel data. However, in most multilevel item response model…

  2. A parametric multiclass Bayes error estimator for the multispectral scanner spatial model performance evaluation

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.; Anuta, P. E. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. The probability of correct classification of various populations in data was defined as the primary performance index. The multispectral data, being of a multiclass nature as well, required a Bayes error estimation procedure dependent on a set of class statistics alone. The classification error was expressed in terms of an N-dimensional integral, where N was the dimensionality of the feature space. The multispectral scanner spatial model was represented by a linear shift-invariant multiple-port system in which the N spectral bands comprised the input processes. The scanner characteristic function, the relationship governing the transformation of the input spatial, and hence spectral, correlation matrices through the system, was developed.
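
The Bayes error defined above through an N-dimensional integral can also be approximated by simple Monte Carlo when the class statistics are Gaussian. The sketch below is illustrative only (two equiprobable 2-D Gaussian classes with a shared identity covariance), not the report's parametric multiclass estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two equiprobable Gaussian classes in an N = 2 feature space with a
# shared identity covariance (illustrative stand-in for class statistics).
mu0, mu1 = np.array([0.0, 0.0]), np.array([2.0, 0.0])

def log_density(x, mu):
    d = x - mu
    return -0.5 * np.sum(d * d, axis=1)  # log-likelihood up to a constant

# Draw labelled samples and apply the Bayes (minimum-error) decision rule.
n = 200_000
labels = rng.integers(0, 2, n)
x = np.where(labels[:, None] == 0, mu0, mu1) + rng.standard_normal((n, 2))
pred = (log_density(x, mu1) > log_density(x, mu0)).astype(int)
bayes_error = np.mean(pred != labels)
# For this geometry the exact value is Phi(-1), roughly 0.159.
```

For general covariances the same recipe applies with full Gaussian log-densities; the Monte Carlo estimate converges to the same quantity the N-dimensional integral expresses.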

  3. Trial-Level Regressor Modulation for Functional Magnetic Resonance Imaging Designs Requiring Strict Periodicity of Stimulus Presentations: Illustrated Using a Go/No-Go Task

    PubMed Central

    Motes, Michael A; Rao, Neena K; Shokri-Kojori, Ehsan; Chiang, Hsueh-Sheng; Kraut, Michael A; Hart, John

    2017-01-01

    Computer-based assessment of many cognitive processes (eg, anticipatory and response readiness processes) requires the use of invariant stimulus display times (SDT) and intertrial intervals (ITI). Although designs with invariant SDTs and ITIs have been used in functional magnetic resonance imaging (fMRI) research, such designs are problematic for fMRI studies because of collinearity issues. This study examined regressor modulation with trial-level reaction times (RT) as a method for improving signal detection in a go/no-go task with invariant SDTs and ITIs. The effects of modulating the go regressor were evaluated with respect to the detection of BOLD signal-change for the no-go condition. BOLD signal-change to no-go stimuli was examined when the go regressor was based on a (a) canonical hemodynamic response function (HRF), (b) RT-based amplitude-modulated (AM) HRF, and (c) RT-based amplitude and duration modulated (A&DM) HRF. Reaction time–based modulation reduced the collinearity between the go and no-go regressors, with A&DM producing the greatest reductions in correlations between the regressors, and greater reductions in the correlations between regressors were associated with longer mean RTs and greater RT variability. Reaction time–based modulation increased statistical power for detecting group-level no-go BOLD signal-change across a broad set of brain regions. The findings show the efficacy of using regressor modulation to increase power in detecting BOLD signal-change in fMRI studies in which circumstances dictate the use of temporally invariant stimulus presentations. PMID:29276390
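
The RT-based amplitude modulation described above can be sketched numerically: build an unmodulated regressor and a mean-centred RT-modulated regressor by convolving stick functions with an HRF. The double-gamma HRF, onsets, and reaction times below are illustrative assumptions, not the study's design:

```python
import numpy as np
from math import gamma

def hrf(t):
    # Simple double-gamma HRF (peak near 5 s, undershoot near 15 s);
    # an illustrative stand-in for a canonical HRF.
    g = lambda t, a: t ** (a - 1) * np.exp(-t) / gamma(a)
    return g(t, 6.0) - g(t, 16.0) / 6.0

dt = 0.1                                   # sampling step, seconds
t = np.arange(0, 300, dt)
onsets = np.arange(10, 290, 20.0)          # invariant ITI: a trial every 20 s
rng = np.random.default_rng(1)
rts = 0.4 + 0.3 * rng.random(onsets.size)  # hypothetical reaction times, s

def regressor(amplitudes):
    sticks = np.zeros_like(t)
    sticks[np.round(onsets / dt).astype(int)] = amplitudes
    return np.convolve(sticks, hrf(t[:300]))[: t.size]

unmod = regressor(np.ones(onsets.size))    # canonical-HRF regressor
am = regressor(rts - rts.mean())           # mean-centred RT modulator
r = np.corrcoef(unmod, am)[0, 1]           # near zero: the modulator adds
                                           # information without collinearity
```

Mean-centring the RT amplitudes is what keeps the modulated regressor nearly orthogonal to the unmodulated one, which is the collinearity-reduction effect the abstract reports.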

  4. Trial-Level Regressor Modulation for Functional Magnetic Resonance Imaging Designs Requiring Strict Periodicity of Stimulus Presentations: Illustrated Using a Go/No-Go Task.

    PubMed

    Motes, Michael A; Rao, Neena K; Shokri-Kojori, Ehsan; Chiang, Hsueh-Sheng; Kraut, Michael A; Hart, John

    2017-01-01

    Computer-based assessment of many cognitive processes (eg, anticipatory and response readiness processes) requires the use of invariant stimulus display times (SDT) and intertrial intervals (ITI). Although designs with invariant SDTs and ITIs have been used in functional magnetic resonance imaging (fMRI) research, such designs are problematic for fMRI studies because of collinearity issues. This study examined regressor modulation with trial-level reaction times (RT) as a method for improving signal detection in a go/no-go task with invariant SDTs and ITIs. The effects of modulating the go regressor were evaluated with respect to the detection of BOLD signal-change for the no-go condition. BOLD signal-change to no-go stimuli was examined when the go regressor was based on a (a) canonical hemodynamic response function (HRF), (b) RT-based amplitude-modulated (AM) HRF, and (c) RT-based amplitude and duration modulated (A&DM) HRF. Reaction time-based modulation reduced the collinearity between the go and no-go regressors, with A&DM producing the greatest reductions in correlations between the regressors, and greater reductions in the correlations between regressors were associated with longer mean RTs and greater RT variability. Reaction time-based modulation increased statistical power for detecting group-level no-go BOLD signal-change across a broad set of brain regions. The findings show the efficacy of using regressor modulation to increase power in detecting BOLD signal-change in fMRI studies in which circumstances dictate the use of temporally invariant stimulus presentations.

  5. Crack Detection in Concrete Tunnels Using a Gabor Filter Invariant to Rotation.

    PubMed

    Medina, Roberto; Llamas, José; Gómez-García-Bermejo, Jaime; Zalama, Eduardo; Segarra, Miguel José

    2017-07-20

    In this article, a system for the detection of cracks in concrete tunnel surfaces, based on image sensors, is presented. Both data acquisition and processing are covered. Linear cameras and proper lighting are used for data acquisition. The required resolution of the camera sensors and the number of cameras is discussed in terms of the crack size and the tunnel type. Data processing is done by applying a new method, a Gabor filter invariant to rotation, allowing the detection of cracks in any direction. The parameter values of this filter are set by using a modified genetic algorithm based on the Differential Evolution optimization method. Pixels belonging to cracks are detected with a balanced accuracy of 95.27%, improving on the results of previous approaches.
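
One way to make a Gabor filter bank insensitive to crack direction is to pool filter responses over orientations. The following is a minimal sketch of that idea; the kernel parameters and eight-orientation max-pooling are assumptions, not the paper's DE-tuned filter:

```python
import numpy as np

def gabor_kernel(theta, ksize=21, sigma=3.0, lam=8.0, gamma=0.5):
    # Zero-mean real Gabor kernel at orientation theta (radians).
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    k = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / lam)
    return k - k.mean()

def rotation_invariant_response(img, n_orient=8):
    # Max-pool |response| over orientations so detection does not depend
    # on crack direction (circular convolution via FFT for brevity).
    out = []
    F = np.fft.fft2(img)
    for k in range(n_orient):
        kern = gabor_kernel(np.pi * k / n_orient)
        pad = np.zeros_like(img)
        kh, kw = kern.shape
        pad[:kh, :kw] = kern
        pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
        out.append(np.abs(np.real(np.fft.ifft2(F * np.fft.fft2(pad)))))
    return np.max(out, axis=0)

# A dark line ("crack") on a bright background, in any direction,
# produces a strong pooled response along the line.
img = np.ones((64, 64))
img[:, 32] = 0.0
resp = rotation_invariant_response(img)
```

Because the pooled response is (approximately) the same whichever orientation the crack takes, a single threshold on it can flag crack pixels in any direction.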

  6. People vs. Collins: Statistics as a Two-Edged Sword

    ERIC Educational Resources Information Center

    McGivney-Burelle, Jean; McGivney, Katherine; McGivney, Ray

    2006-01-01

    Real-life applications of the use (and misuse) of mathematics invariably pique students' interest. This article describes a legal case in California that occurred in the 1960s in which a couple was convicted of robbery, in part, based on the expert testimony of a statistics instructor. On appeal, the judge noted several mathematical errors in this…

  7. Position, rotation, and intensity invariant recognizing method

    DOEpatents

    Ochoa, E.; Schils, G.F.; Sweeney, D.W.

    1987-09-15

    A method for recognizing the presence of a particular target in a field of view which is target position, rotation, and intensity invariant includes the preparing of a target-specific invariant filter from a combination of all eigen-modes of a pattern of the particular target. Coherent radiation from the field of view is then imaged into an optical correlator in which the invariant filter is located. The invariant filter is rotated in the frequency plane of the optical correlator in order to produce a constant-amplitude rotational response in a correlation output plane when the particular target is present in the field of view. Any constant response is thus detected in the output plane to determine whether a particular target is present in the field of view. Preferably, a temporal pattern is imaged in the output plane with an optical detector having a plurality of pixels, and a correlation coefficient for each pixel is determined by accumulating the intensity and intensity-square of each pixel. The orbiting of the constant response caused by the filter rotation is also preferably eliminated, either by the use of two orthogonal mirrors pivoted in correspondence with the rotation of the filter or by attaching a refracting wedge to the filter to remove the offset angle. Detection is preferably performed of the temporal pattern in the output plane at a plurality of different angles with angular separation sufficient to decorrelate successive frames. 1 fig.

  8. Error tracking control for underactuated overhead cranes against arbitrary initial payload swing angles

    NASA Astrophysics Data System (ADS)

    Zhang, Menghua; Ma, Xin; Rong, Xuewen; Tian, Xincheng; Li, Yibin

    2017-02-01

    This paper presents an error tracking control method for overhead crane systems for which the error trajectories for the trolley and the payload swing can be pre-specified. The proposed method does not require that the initial payload swing angle remains zero, whereas this requirement is usually assumed in conventional methods. The significant feature of the proposed method is its superior control performance as well as its strong robustness over different or uncertain rope lengths, payload masses, desired positions, initial payload swing angles, and external disturbances. Owing to the same attenuation behavior, the desired error trajectory for the trolley does not need to be reset for each traveling distance, which eases practical implementation. By converting the error tracking overhead crane dynamics to the objective system, we obtain the error tracking control law for arbitrary initial payload swing angles. Lyapunov techniques and LaSalle's invariance theorem are utilized to prove the convergence and stability of the closed-loop system. Simulation and experimental results are presented to validate the superior performance of the proposed error tracking control method.

  9. High Energy Astrophysics Tests of Lorentz Invariance and Quantum Gravity Models

    NASA Technical Reports Server (NTRS)

    Stecker, Floyd W.

    2011-01-01

    High-energy astrophysics observations provide the best possibilities to detect a very small violation of Lorentz invariance such as may be related to the structure of space-time near the Planck scale of approximately 10^-35 m. I will discuss here the possible signatures of Lorentz invariance violation (LIV) from observations of the spectra, polarization, and timing of gamma-rays from active galactic nuclei and gamma-ray bursts. Other sensitive tests are provided by observations of the spectra of ultrahigh energy cosmic rays and neutrinos. Using the latest data from the Pierre Auger Observatory one can already derive an upper limit of 4.5 x 10^-23 on the amount of LIV at a proton Lorentz factor of ~2 x 10^11. This result has fundamental implications for quantum gravity models. I will also discuss the possibilities of using more sensitive space-based detection techniques to improve searches for LIV in the future.

  10. Shift-, rotation-, and scale-invariant shape recognition system using an optical Hough transform

    NASA Astrophysics Data System (ADS)

    Schmid, Volker R.; Bader, Gerhard; Lueder, Ernst H.

    1998-02-01

    We present a hybrid shape recognition system with an optical Hough transform processor. The features of the Hough space offer a separate cancellation of distortions caused by translations and rotations. Scale invariance is also provided by suitable normalization. The proposed system extends the capabilities of Hough transform based detection from only straight lines to areas bounded by edges. A very compact optical design is achieved by a microlens array processor accepting incoherent light as direct optical input and realizing the computationally expensive connections massively parallel. Our newly developed algorithm extracts rotation and translation invariant normalized patterns of bright spots on a 2D grid. A neural network classifier maps the 2D features via a nonlinear hidden layer onto the classification output vector. We propose initialization of the connection weights according to regions of activity specifically assigned to each neuron in the hidden layer using a competitive network. The presented system is designed for industry inspection applications. Presently we have demonstrated detection of six different machined parts in real-time. Our method yields very promising detection results, with more than 96% of parts correctly classified.
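
A digital analogue of the Hough stage can be sketched in a few lines: each edge pixel votes for every line passing through it, and peaks in the accumulator mark detected lines. This is a minimal sketch of the transform itself, not the optical microlens implementation:

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    # Each edge pixel (x, y) votes for every line
    # x*cos(theta) + y*sin(theta) = rho through it.
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))          # max |rho|
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=int)
    ys, xs = np.nonzero(edges)
    for i, th in enumerate(thetas):
        rho = np.round(xs * np.cos(th) + ys * np.sin(th)).astype(int) + diag
        np.add.at(acc, (i, rho), 1)              # unbuffered vote accumulation
    return acc, thetas, diag

# A vertical edge at x = 20 votes most strongly at theta = 0, rho = 20.
img = np.zeros((50, 50), dtype=bool)
img[:, 20] = True
acc, thetas, diag = hough_lines(img)
peak_theta, peak_rho = np.unravel_index(np.argmax(acc), acc.shape)
```

Translations of the edge shift only the rho coordinate and rotations shift only the theta coordinate of the accumulator peaks, which is the separability the abstract exploits for invariant recognition.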

  11. Error Estimates for Approximate Solutions of the Riccati Equation with Real or Complex Potentials

    NASA Astrophysics Data System (ADS)

    Finster, Felix; Smoller, Joel

    2010-09-01

    A method is presented for obtaining rigorous error estimates for approximate solutions of the Riccati equation, with real or complex potentials. Our main tool is to derive invariant region estimates for complex solutions of the Riccati equation. We explain the general strategy for applying these estimates and illustrate the method in typical examples, where the approximate solutions are obtained by gluing together WKB and Airy solutions of corresponding one-dimensional Schrödinger equations. Our method is motivated by, and has applications to, the analysis of linear wave equations in the geometry of a rotating black hole.
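
For context, the link between the one-dimensional Schrödinger equation and the Riccati equation used here is the standard logarithmic-derivative substitution (a textbook identity, not the paper's estimates):

```latex
% One-dimensional Schrödinger equation (units with \hbar^2/2m = 1)
\psi''(x) = \bigl(V(x) - E\bigr)\,\psi(x),
\qquad
y(x) := \frac{\psi'(x)}{\psi(x)}
\quad\Longrightarrow\quad
y'(x) + y(x)^2 = V(x) - E .
```

An approximate WKB or Airy solution for \(\psi\) thus yields an approximate solution \(y\) of the Riccati equation, whose error the invariant region estimates control.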

  12. Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase

    PubMed Central

    Fu, Li; Zhang, Jun; Li, Rui; Cao, Xianbin; Wang, Jinling

    2015-01-01

    In the 1980s, Global Positioning System (GPS) receiver autonomous integrity monitoring (RAIM) was proposed to provide the integrity of a navigation system by checking the consistency of GPS measurements. However, during the approach and landing phase of a flight path, where there are often low GPS visibility conditions, the performance of the existing RAIM method may not meet the stringent aviation requirements for availability and integrity due to insufficient observations. To solve this problem, a new RAIM method, named vision-aided RAIM (VA-RAIM), is proposed for GPS integrity monitoring in the approach and landing phase. By introducing landmarks as pseudo-satellites, the VA-RAIM enriches the navigation observations to improve the performance of RAIM. In the method, a computer vision system photographs and matches these landmarks to obtain additional measurements for navigation. Nevertheless, the challenging issue is that such additional measurements may suffer from vision errors. To ensure the reliability of the vision measurements, a GPS-based calibration algorithm is presented to reduce the time-invariant part of the vision errors. Then, the calibrated vision measurements are integrated with the GPS observations for integrity monitoring. Simulation results show that the VA-RAIM outperforms the conventional RAIM with a higher level of availability and fault detection rate. PMID:26378533
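
The consistency check at the heart of RAIM can be sketched as a least-squares residual test: with more than four pseudoranges, the sum of squared residuals is compared against a chi-square threshold. The geometry matrix and fault size below are illustrative assumptions, not the paper's VA-RAIM algorithm:

```python
import numpy as np

# Linearized GPS model: pseudoranges y = G x, where x stacks the
# position offset and receiver clock bias, and G holds unit
# line-of-sight vectors plus a clock column (illustrative geometry).
G = np.array([
    [ 0.0,  0.0, 1.00, 1.0],
    [ 0.8,  0.0, 0.60, 1.0],
    [-0.8,  0.0, 0.60, 1.0],
    [ 0.0,  0.8, 0.60, 1.0],
    [ 0.0, -0.8, 0.60, 1.0],
    [ 0.6,  0.6, 0.53, 1.0],
    [-0.6, -0.6, 0.53, 1.0],
])

def raim_statistic(y):
    # Sum of squared least-squares residuals; a receiver compares this
    # against a chi-square threshold (n - 4 degrees of freedom).
    x_hat, *_ = np.linalg.lstsq(G, y, rcond=None)
    r = y - G @ x_hat
    return float(r @ r)

x_true = np.array([10.0, -5.0, 3.0, 1.0])
consistent = G @ x_true            # all measurements agree
faulty = consistent.copy()
faulty[0] += 50.0                  # 50 m bias on one pseudorange
```

Adding calibrated vision pseudo-satellites, as VA-RAIM does, simply appends rows to G, raising the degrees of freedom of the test and hence its availability and detection power.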

  13. Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase.

    PubMed

    Fu, Li; Zhang, Jun; Li, Rui; Cao, Xianbin; Wang, Jinling

    2015-09-10

    In the 1980s, Global Positioning System (GPS) receiver autonomous integrity monitoring (RAIM) was proposed to provide the integrity of a navigation system by checking the consistency of GPS measurements. However, during the approach and landing phase of a flight path, where there are often low GPS visibility conditions, the performance of the existing RAIM method may not meet the stringent aviation requirements for availability and integrity due to insufficient observations. To solve this problem, a new RAIM method, named vision-aided RAIM (VA-RAIM), is proposed for GPS integrity monitoring in the approach and landing phase. By introducing landmarks as pseudo-satellites, the VA-RAIM enriches the navigation observations to improve the performance of RAIM. In the method, a computer vision system photographs and matches these landmarks to obtain additional measurements for navigation. Nevertheless, the challenging issue is that such additional measurements may suffer from vision errors. To ensure the reliability of the vision measurements, a GPS-based calibration algorithm is presented to reduce the time-invariant part of the vision errors. Then, the calibrated vision measurements are integrated with the GPS observations for integrity monitoring. Simulation results show that the VA-RAIM outperforms the conventional RAIM with a higher level of availability and fault detection rate.

  14. Motion prediction in MRI-guided radiotherapy based on interleaved orthogonal cine-MRI

    NASA Astrophysics Data System (ADS)

    Seregni, M.; Paganelli, C.; Lee, D.; Greer, P. B.; Baroni, G.; Keall, P. J.; Riboldi, M.

    2016-01-01

    In-room cine-MRI guidance can provide non-invasive target localization during radiotherapy treatment. However, in order to cope with finite imaging frequency and system latencies between target localization and dose delivery, tumour motion prediction is required. This work proposes a framework for motion prediction dedicated to cine-MRI guidance, aiming at quantifying the geometric uncertainties introduced by this process for both tumour tracking and beam gating. The tumour position, identified through scale invariant features detected in cine-MRI slices, is estimated at high frequency (25 Hz) using three independent predictors, one for each anatomical coordinate. Linear extrapolation, auto-regressive and support vector machine algorithms are compared against systems that use no prediction or surrogate-based motion estimation. Geometric uncertainties are reported as a function of image acquisition period and system latency. Average results show that the tracking error RMS can be decreased to the [0.2; 1.2] mm range, for acquisition periods between 250 and 750 ms and system latencies between 50 and 300 ms. Except for the linear extrapolator, tracking and gating prediction errors were, on average, lower than those measured for surrogate-based motion estimation. This finding suggests that cine-MRI guidance, combined with appropriate prediction algorithms, could markedly decrease geometric uncertainties in motion compensated treatments.
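
The simplest of the compared predictors, linear extrapolation, can be written down directly: estimate velocity from the last two localizations and project the target forward by the system latency. The sampling period, latency, and trajectory below are illustrative values, not the study's data:

```python
import numpy as np

def predict_linear(positions, times, latency):
    # Linear extrapolation: velocity from the last two localizations,
    # projected forward by the localization-to-delivery latency.
    v = (positions[-1] - positions[-2]) / (times[-1] - times[-2])
    return positions[-1] + v * latency

# Tumour centroid sampled every 0.25 s (4 Hz cine-MRI, illustrative),
# predicted 0.3 s ahead to cover the system latency.
times = np.array([0.0, 0.25, 0.50, 0.75])
positions = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.5, 0.0],
                      [2.0, 1.0, 0.0],
                      [3.0, 1.5, 0.0]])   # mm, constant velocity here
p = predict_linear(positions, times, latency=0.3)
```

On constant-velocity motion the extrapolation is exact; on real breathing traces its error grows with latency, which is why the auto-regressive and SVM predictors outperform it in the study.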

  15. High-order computer-assisted estimates of topological entropy

    NASA Astrophysics Data System (ADS)

    Grote, Johannes

    The concept of Taylor Models is introduced, which offers highly accurate C0-estimates for the enclosures of functional dependencies, combining high-order Taylor polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified interval arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly nonlinear dynamical systems. A method to obtain sharp rigorous enclosures of Poincare maps for certain types of flows and surfaces is developed and numerical examples are presented. Differential algebraic techniques allow the efficient and accurate computation of polynomial approximations for invariant curves of certain planar maps around hyperbolic fixed points. Subsequently we introduce a procedure to extend these polynomial curves to verified Taylor Model enclosures of local invariant manifolds with C0-errors of size 10^-10 to 10^-14, and proceed to generate the global invariant manifold tangle up to comparable accuracy through iteration in Taylor Model arithmetic. Knowledge of the global manifold structure up to finite iterations of the local manifold pieces enables us to find all homoclinic and heteroclinic intersections in the generated manifold tangle. Combined with the mapping properties of the homoclinic points and their ordering we are able to construct a subshift of finite type as a topological factor of the original planar system to obtain rigorous lower bounds for its topological entropy. This construction is fully automatic and yields homoclinic tangles with several hundred homoclinic points. As an example, rigorous lower bounds for the topological entropy of the Hénon map are computed, which, to the best of the authors' knowledge, are the largest such estimates published so far.
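
The final step described above, a rigorous lower bound on topological entropy from a subshift of finite type, reduces to a spectral computation: the entropy of the subshift is the logarithm of the spectral radius of its transition matrix. A minimal example with the golden-mean shift (not the Hénon computation itself):

```python
import numpy as np

# Transition matrix of the golden-mean shift: symbol 1 cannot follow
# itself. Its topological entropy is the log of the spectral radius.
T = np.array([[1.0, 1.0],
              [1.0, 0.0]])
entropy = float(np.log(np.max(np.abs(np.linalg.eigvals(T)))))
# equals log((1 + sqrt(5)) / 2), about 0.4812
```

Because the subshift is a topological factor of the planar map, its entropy bounds the map's entropy from below; the larger the verified homoclinic tangle, the larger the transition matrix and the sharper the bound.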

  16. Deep generative learning of location-invariant visual word recognition.

    PubMed

    Di Bono, Maria Grazia; Zorzi, Marco

    2013-01-01

    It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words, which was the model's learning objective, is largely based on letter-level information.

  17. Target recognition of ladar range images using slice image: comparison of four improved algorithms

    NASA Astrophysics Data System (ADS)

    Xia, Wenze; Han, Shaokun; Cao, Jingya; Wang, Liang; Zhai, Yu; Cheng, Yang

    2017-07-01

    Compared with traditional 3-D shape data, ladar range images possess properties of strong noise, shape degeneracy, and sparsity, which make feature extraction and representation difficult. The slice image is an effective feature descriptor to resolve this problem. We propose four improved algorithms on target recognition of ladar range images using slice image. In order to improve resolution invariance of the slice image, mean value detection instead of maximum value detection is applied in these four improved algorithms. In order to improve rotation invariance of the slice image, three new improved feature descriptors (feature slice image, slice-Zernike moments, and slice-Fourier moments) are applied to the last three improved algorithms, respectively. Backpropagation neural networks are used as feature classifiers in the last two improved algorithms. The performance of these four improved recognition systems is analyzed comprehensively in the aspects of the three invariances, recognition rate, and execution time. The final experiment results show that the improvements for these four algorithms reach the desired effect, the three invariances of feature descriptors are not directly related to the final recognition performance of recognition systems, and these four improved recognition systems have different performances under different conditions.

  18. The Shame and Guilt Scales of the Test of Self-Conscious Affect-Adolescent (TOSCA-A): Factor Structure, Concurrent and Discriminant Validity, and Measurement and Structural Invariance Across Ratings of Males and Females.

    PubMed

    Watson, Shaun; Gomez, Rapson; Gullone, Eleonora

    2017-06-01

    This study examined various psychometric properties of the items comprising the shame and guilt scales of the Test of Self-Conscious Affect-Adolescent. A total of 563 adolescents (321 females and 242 males) completed these scales, and also measures of depression and empathy. Confirmatory factor analysis provided support for an oblique two-factor model, with the originally proposed shame and guilt items comprising shame and guilt factors, respectively. Also, shame correlated with depression positively and had no relation with empathy. Guilt correlated with depression negatively and with empathy positively. Thus, there was support for the convergent and discriminant validity of the shame and guilt factors. Multiple-group confirmatory factor analysis comparing females and males, based on the chi-square difference test, supported full metric invariance, the intercept invariance of 26 of the 30 shame and guilt items, and higher latent mean scores among females for both shame and guilt. Comparisons based on the difference in root mean squared error of approximation values supported full measurement invariance and no gender difference for latent mean scores. The psychometric and practical implications of the findings are discussed.

  19. Effects of Listening Conditions, Error Types, and Ensemble Textures on Error Detection Skills

    ERIC Educational Resources Information Center

    Waggoner, Dori T.

    2011-01-01

    This study was designed with three main purposes: (a) to investigate the effects of two listening conditions on error detection accuracy, (b) to compare error detection responses for rhythm errors and pitch errors, and (c) to examine the influences of texture on error detection accuracy. Undergraduate music education students (N = 18) listened to…

  20. Directly detecting isospin-violating dark matter

    NASA Astrophysics Data System (ADS)

    Kelso, Chris; Kumar, Jason; Marfatia, Danny; Sandick, Pearl

    2018-03-01

    We consider the prospects for multiple dark matter direct detection experiments to determine if the interactions of a dark matter candidate are isospin-violating. We focus on theoretically well-motivated examples of isospin-violating dark matter (IVDM), including models in which dark matter interactions with nuclei are mediated by a dark photon, a Z′, or a squark. We determine that the best prospects for distinguishing IVDM from the isospin-invariant scenario arise in the cases of dark-photon- or Z′-mediated interactions, and that the ideal experimental scenario would consist of large exposure xenon- and neon-based detectors. If such models just evade current direct detection limits, then one could distinguish such models from the standard isospin-invariant case with two detectors with of order 100 ton-year exposure.

  1. Factorial invariance of pediatric patient self-reported fatigue across age and gender: a multigroup confirmatory factor analysis approach utilizing the PedsQL™ Multidimensional Fatigue Scale.

    PubMed

    Varni, James W; Beaujean, A Alexander; Limbers, Christine A

    2013-11-01

    In order to compare multidimensional fatigue research findings across age and gender subpopulations, it is important to demonstrate measurement invariance, that is, that the items from an instrument have equivalent meaning across the groups studied. This study examined the factorial invariance of the 18-item PedsQL™ Multidimensional Fatigue Scale items across age and gender and tested a bifactor model. Multigroup confirmatory factor analysis (MG-CFA) was performed specifying a three-factor model across three age groups (5-7, 8-12, and 13-18 years) and gender. MG-CFA models were proposed in order to compare the factor structure, metric, scalar, and error variance across age groups and gender. The analyses were based on 837 children and adolescents recruited from general pediatric clinics, subspecialty clinics, and hospitals in which children were being seen for well-child checks, mild acute illness, or chronic illness care. A bifactor model of the items with one general factor influencing all the items and three domain-specific factors representing the General, Sleep/Rest, and Cognitive Fatigue domains fit the data better than oblique factor models. Based on the multiple measures of model fit, configural, metric, and scalar invariance were found for almost all items across the age and gender groups, as was invariance in the factor covariances. The PedsQL™ Multidimensional Fatigue Scale demonstrated strict factorial invariance for child and adolescent self-report across gender and strong factorial invariance across age subpopulations. The findings support an equivalent three-factor structure across the age and gender groups studied. Based on these data, it can be concluded that pediatric patients across the groups interpreted the items in a similar manner regardless of their age or gender, supporting the multidimensional factor structure interpretation of the PedsQL™ Multidimensional Fatigue Scale.

  2. Managing human fallibility in critical aerospace situations

    NASA Astrophysics Data System (ADS)

    Tew, Larry

    2014-11-01

    Human fallibility is pervasive in the aerospace industry, with over 50% of errors attributed to human error. Consider the benefits to any organization if those errors were significantly reduced. Aerospace manufacturing involves high value, high profile systems with significant complexity and often repetitive build, assembly, and test operations. In spite of extensive analysis, planning, training, and detailed procedures, human factors can cause unexpected errors. Handling such errors involves extensive cause and corrective action analysis and, invariably, schedule slips and cost growth. We will discuss success stories, including those associated with electro-optical systems, where very significant reductions in human fallibility errors were achieved after receiving adapted and specialized training. In the eyes of company and customer leadership, the steps used to achieve these results led to a major culture change in both the workforce and the supporting management organization. This approach has proven effective in other industries like medicine, firefighting, law enforcement, and aviation. The roadmap to success and the steps to minimize human error are known. They can be used by any organization willing to accept human fallibility and take a proactive approach to incorporate the steps needed to manage and minimize error.

  3. Ship detection based on rotation-invariant HOG descriptors for airborne infrared images

    NASA Astrophysics Data System (ADS)

    Xu, Guojing; Wang, Jinyan; Qi, Shengxiang

    2018-03-01

    Infrared thermal imagery is widely used in various kinds of aircraft because of its all-time application. Meanwhile, detecting ships from infrared images has attracted considerable research interest in recent years. In the case of downward-looking infrared imagery, in order to overcome the uncertainty of target imaging attitude due to the unknown position relationship between the aircraft and the target, we propose a new infrared ship detection method which integrates rotation-invariant gradient direction histogram (Circle Histogram of Oriented Gradient, C-HOG) descriptors and the support vector machine (SVM) classifier. In detail, the proposed method uses HOG descriptors to express the local features of infrared images to adapt to changes in illumination and to overcome sea clutter effects. Different from traditional computation of the HOG descriptor, we subdivide the image into annular spatial bins instead of rectangular sub-regions, and then a Radial Gradient Transform (RGT) is applied to the gradient to achieve rotation-invariant histogram information. Considering the engineering application of airborne and real-time requirements, we train an SVM on ship-target and non-target background infrared sample images to discriminate real ships from false targets. Experimental results show that the proposed method has good performance in both robustness and run-time for infrared ship target detection with different rotation angles.
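
The annular-bin, radial-gradient idea can be sketched as follows: gradient orientations are expressed relative to the radial direction from the patch centre (a radial gradient transform) and pooled in rings rather than rectangular cells. Ring and bin counts are assumptions; this is not the authors' exact descriptor:

```python
import numpy as np

def chog(patch, n_rings=3, n_bins=8):
    # Gradient angles taken relative to the radial direction from the
    # patch centre (radial gradient transform), then magnitude-weighted
    # histograms pooled in annular rings: rotating the patch about its
    # centre leaves the descriptor unchanged.
    gy, gx = np.gradient(patch.astype(float))
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    rel = (np.arctan2(gy, gx) - np.arctan2(yy - cy, xx - cx)) % (2 * np.pi)
    radius = np.hypot(yy - cy, xx - cx)
    mag = np.hypot(gx, gy)
    edges = np.linspace(0.0, radius.max() + 1e-9, n_rings + 1)
    desc = np.zeros((n_rings, n_bins))
    for i in range(n_rings):
        m = (radius >= edges[i]) & (radius < edges[i + 1])
        b = (rel[m] / (2 * np.pi) * n_bins).astype(int) % n_bins
        np.add.at(desc[i], b, mag[m])
    return desc / (desc.sum() + 1e-12)

patch = np.zeros((33, 33))
patch[5:10, 14:19] = 1.0        # off-centre bright blob
d0 = chog(patch)
d90 = chog(np.rot90(patch))     # same descriptor after a 90° rotation
```

An SVM trained on such descriptors then sees (nearly) the same feature vector for a ship at any imaging attitude, which is the point of the rotation-invariant design.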

  4. Factor Structure and Measurement Invariance of the Cognitive Failures Questionnaire across the Adult Life Span

    ERIC Educational Resources Information Center

    Rast, Philippe; Zimprich, Daniel; Van Boxtel, Martin; Jolles, Jellemer

    2009-01-01

    The Cognitive Failures Questionnaire (CFQ) is designed to assess a person's proneness to committing cognitive slips and errors in the completion of everyday tasks. Although the CFQ is a widely used instrument, its factor structure remains an issue of scientific debate. The present study used data of a representative sample (N = 1,303, 24-83 years…

  5. A mathematical theory of learning control for linear discrete multivariable systems

    NASA Technical Reports Server (NTRS)

    Phan, Minh; Longman, Richard W.

    1988-01-01

When tracking control systems are used in repetitive operations such as robots in various manufacturing processes, the controller will make the same errors repeatedly. Here consideration is given to learning controllers that look at the tracking errors in each repetition of the process and adjust the control to decrease these errors in the next repetition. A general formalism is developed for learning control of discrete-time (time-varying or time-invariant) linear multivariable systems. Methods of specifying a desired trajectory (such that the trajectory can actually be performed by the discrete system) are discussed, and learning controllers are developed. Stability criteria are obtained which are relatively easy to use to ensure convergence of the learning process, and proper gain settings are discussed in light of measurement noise and system uncertainties.
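As a concrete toy of the repetition-to-repetition learning idea (not the paper's formalism), an Arimoto-type update for a scalar discrete-time system can be sketched as follows; the system matrices, gain, and trajectory are all illustrative assumptions:

```python
import numpy as np

# Toy iterative learning control (ILC) for a scalar discrete LTI system
#   x[t+1] = A*x[t] + B*u[t],  y[t] = C*x[t].
# After each repetition k, the input is corrected with the tracking
# error one step ahead: u_{k+1}[t] = u_k[t] + gamma * e_k[t+1].
A, B, C = 0.3, 1.0, 1.0
T = 20
y_des = np.sin(np.linspace(0.0, np.pi, T + 1))   # desired trajectory, y_des[0] = 0

def run_trial(u):
    x, y = 0.0, [0.0]
    for t in range(T):
        x = A * x + B * u[t]
        y.append(C * x)
    return np.array(y)

u = np.zeros(T)
gamma = 0.5                                      # satisfies |1 - gamma*C*B| < 1
errs = []
for k in range(30):
    e = y_des - run_trial(u)                     # same plant, same task, again
    errs.append(np.max(np.abs(e[1:])))
    u = u + gamma * e[1:]                        # learn from this repetition
```

The error contracts across repetitions because the first Markov parameter C·B dominates the lifted (trial-domain) error dynamics, mirroring the convergence criteria discussed in the abstract.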

  6. Eliminating time dispersion from seismic wave modeling

    NASA Astrophysics Data System (ADS)

    Koene, Erik F. M.; Robertsson, Johan O. A.; Broggini, Filippo; Andersson, Fredrik

    2018-04-01

    We derive an expression for the error introduced by the second-order accurate temporal finite-difference (FD) operator, as present in the FD, pseudospectral and spectral element methods for seismic wave modeling applied to time-invariant media. The `time-dispersion' error speeds up the signal as a function of frequency and time step only. Time dispersion is thus independent of the propagation path, medium or spatial modeling error. We derive two transforms to either add or remove time dispersion from synthetic seismograms after a simulation. The transforms are compared to previous related work and demonstrated on wave modeling in acoustic as well as elastic media. In addition, an application to imaging is shown. The transforms enable accurate computation of synthetic seismograms at reduced cost, benefitting modeling applications in both exploration and global seismology.
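The claim that the time-dispersion error depends only on frequency and time step follows from the standard plane-wave analysis of the second-order time FD operator; the sketch below uses our own notation and is only an orientation aid, not the paper's exact transforms.

```latex
% Substituting a discrete plane wave e^{i\omega n\Delta t} into the
% second-order FD approximation of \partial_t^2 gives
\frac{e^{i\omega\Delta t}-2+e^{-i\omega\Delta t}}{\Delta t^{2}}
  \;=\; -\frac{4}{\Delta t^{2}}\,\sin^{2}\!\Big(\frac{\omega\Delta t}{2}\Big),
% so the scheme behaves as if the temporal frequency were
\tilde{\omega} \;=\; \frac{2}{\Delta t}\,\sin\!\Big(\frac{\omega\Delta t}{2}\Big).
```

Since the mapping from true to effective frequency involves only the frequency and the time step, the induced dispersion is independent of the propagation path and medium, which is what makes a post-hoc forward/inverse transform on the seismogram possible.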

  7. On the relationship between aerosol content and errors in telephotometer experiments.

    NASA Technical Reports Server (NTRS)

    Thomas, R. W. L.

    1971-01-01

    This paper presents an invariant imbedding theory of multiple scattering phenomena contributing to errors in telephotometer experiments. The theory indicates that there is a simple relationship between the magnitudes of the errors introduced by successive orders of scattering and it is shown that for all optical thicknesses each order can be represented by a coefficient which depends on the field of view of the telescope and the properties of the scattering medium. The verification of the theory and the derivation of the coefficients have been accomplished by a Monte Carlo program. Both monodisperse and polydisperse systems of Mie scatterers have been treated. The results demonstrate that for a given optical thickness the coefficients increase strongly with the mean particle size particularly for the smaller fields of view.

  8. On Statistical Analysis of Neuroimages with Imperfect Registration

    PubMed Central

    Kim, Won Hwa; Ravi, Sathya N.; Johnson, Sterling C.; Okonkwo, Ozioma C.; Singh, Vikas

    2016-01-01

A variety of studies in neuroscience/neuroimaging seek to perform statistical inference on the acquired brain image scans for diagnosis as well as understanding the pathological manifestation of diseases. To do so, an important first step is to register (or co-register) all of the image data into a common coordinate system. This permits meaningful comparison of the intensities at each voxel across groups (e.g., diseased versus healthy) to evaluate the effects of the disease and/or use machine learning algorithms in a subsequent step. But errors in the underlying registration make this problematic: they either decrease the statistical power or make the follow-up inference tasks less effective/accurate. In this paper, we derive a novel algorithm which offers immunity to local errors in the underlying deformation field obtained from registration procedures. By deriving a deformation-invariant representation of the image, the downstream analysis can be made more robust, as if one had access to a (hypothetical) far superior registration procedure. Our algorithm is based on recent work on the scattering transform. Using this as a starting point, we show how results from harmonic analysis (especially, non-Euclidean wavelets) yield strategies for designing deformation and additive-noise invariant representations of large 3-D brain image volumes. We present a set of results on synthetic and real brain images where we achieve robust statistical analysis even in the presence of substantial deformation errors; here, standard analysis procedures significantly under-perform and fail to identify the true signal. PMID:27042168

  9. Three-dimensional object recognitions from two-dimensional images using wavelet transforms and neural networks

    NASA Astrophysics Data System (ADS)

    Deschenes, Sylvain; Sheng, Yunlong; Chevrette, Paul C.

    1998-03-01

3D object classification from 2D IR images is demonstrated. The wavelet transform is used for edge detection. Edge tracking is used to remove noise effectively in the wavelet transform. The invariant Fourier descriptor is used to describe the contour curves. Invariance under out-of-plane rotation is achieved by the feature space trajectory neural network working as a classifier.

  10. Detection of Error Related Neuronal Responses Recorded by Electrocorticography in Humans during Continuous Movements

    PubMed Central

    Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten

    2013-01-01

Background Brain-machine interfaces (BMIs) can translate the neuronal activity underlying a user’s movement intention into movements of an artificial effector. In spite of continuous improvements, errors in movement decoding are still a major problem of current BMI systems. If the difference between the decoded and intended movements becomes noticeable, it may lead to an execution error. Outcome errors, where subjects fail to reach a certain movement goal, are also present during online BMI operation. Detecting such errors can be beneficial for BMI operation: (i) errors can be corrected online after being detected and (ii) adaptive BMI decoding algorithms can be updated to make fewer errors in the future. Methodology/Principal Findings Here, we show that error events can be detected from human electrocorticography (ECoG) during a continuous task with high precision, given a temporal tolerance of 300–400 milliseconds. We quantified the error detection accuracy and showed that, using only a small subset of 2×2 ECoG electrodes, 82% of the detection information for outcome errors and 74% of the detection information for execution errors available from all ECoG electrodes could be retained. Conclusions/Significance The error detection method presented here could be used to correct errors made during BMI operation or to adapt a BMI algorithm to make fewer errors in the future. Furthermore, our results indicate that a smaller ECoG implant could be used for error detection. Reducing the size of an ECoG electrode implant used for BMI decoding and error detection could significantly reduce the medical risk of implantation. PMID:23383315

  11. Rigorous high-precision enclosures of fixed points and their invariant manifolds

    NASA Astrophysics Data System (ADS)

    Wittig, Alexander N.

The well-established concept of Taylor Models is introduced; these offer highly accurate C0 enclosures of functional dependencies by combining high-order polynomial approximation of functions with rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating point operations yielding exact values for round-off errors. An experimental high precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the High Precision Interval datatype are developed and described in detail. The application of these operations in the implementation of High Precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period-15 fixed point in a near-standard Henon map. Verification is performed using different verified methods such as double precision Taylor Models, High Precision intervals and High Precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented.
Previous work by Johannes Grote is extended to compute very accurate polynomial approximations to invariant manifolds of discrete maps of arbitrary dimension around hyperbolic fixed points. The algorithm presented allows for the automatic removal of resonances occurring during construction. A method for the rigorous enclosure of invariant manifolds of continuous systems is introduced. Using methods developed for discrete maps, polynomial approximations of invariant manifolds of hyperbolic fixed points of ODEs are obtained. These approximations are outfitted with a sharp error bound which is verified to rigorously contain the manifolds. While we focus on the three-dimensional case, verification in higher dimensions is possible using similar techniques. Integrating the resulting enclosures using the verified COSY VI integrator, the initial manifold enclosures are expanded to yield sharp enclosures of large parts of the stable and unstable manifolds. To demonstrate the effectiveness of this method, we construct enclosures of the invariant manifolds of the Lorenz system and show pictures of the resulting manifold enclosures. To the best of our knowledge, these are the largest verified enclosures of manifolds in the Lorenz system in existence.
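The "verified arithmetic" idea behind such enclosures can be illustrated with a toy interval type that widens every result outward by one ulp (via math.nextafter, Python 3.9+), so the exact real result is guaranteed to lie inside the computed bounds. This is a deliberately minimal sketch, not the COSY INFINITY implementation, which also handles division, intrinsic functions, and directed rounding properly.

```python
import math

class Interval:
    """Toy verified interval: every arithmetic result is widened outward
    by one ulp, so the true real-valued result is rigorously enclosed."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, hi if hi is not None else lo

    def __add__(self, other):
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        # For intervals possibly containing zero, all four endpoint
        # products must be considered.
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(prods), -math.inf),
                        math.nextafter(max(prods), math.inf))

    def contains(self, x):
        return self.lo <= x <= self.hi
```

Even the classic floating-point surprise 0.1 + 0.2 produces an interval rigorously containing the true sum, which is the essential property the rigorous enclosures above rely on.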

  12. Object matching using a locally affine invariant and linear programming techniques.

    PubMed

    Li, Hongsheng; Huang, Xiaolei; He, Lei

    2013-02-01

In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be exactly or approximately reformulable in a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires far fewer auxiliary variables than other linear-programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved for easily by least squares. The errors in reconstructing each matched point using such weights are used to penalize disagreement between the geometric relationships of the template points and those of the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
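The key idea quoted above, each template point as an affine combination of its neighbors, is a single constrained least-squares solve, and the resulting weights are unchanged by any affine transform applied to the whole point set. The helper names and toy data below are ours, not the paper's:

```python
import numpy as np

def affine_weights(p, neighbors):
    """Least-squares weights w with sum(w) = 1 such that
    sum_i w[i] * neighbors[i] ~= p. A point in the affine hull of its
    neighbors is reconstructed exactly, and the weights are invariant
    to affine transforms applied to p and its neighbors alike."""
    # Stack coordinate equations plus the sum-to-one constraint.
    A = np.vstack([neighbors.T, np.ones(len(neighbors))])   # (d+1, k)
    b = np.append(p, 1.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

def reconstruction_error(p, neighbors, w):
    """Residual used (in the paper's spirit) to penalize geometric
    disagreement between template and matched configurations."""
    return np.linalg.norm(w @ neighbors - p)
```

Because an affine map preserves affine combinations, recomputing the weights after transforming both the point and its neighbors returns the same weights, which is exactly what makes the constraint locally affine-invariant.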

  13. The criterion for time symmetry of probabilistic theories and the reversibility of quantum mechanics

    NASA Astrophysics Data System (ADS)

    Holster, A. T.

    2003-10-01

    Physicists routinely claim that the fundamental laws of physics are 'time symmetric' or 'time reversal invariant' or 'reversible'. In particular, it is claimed that the theory of quantum mechanics is time symmetric. But it is shown in this paper that the orthodox analysis suffers from a fatal conceptual error, because the logical criterion for judging the time symmetry of probabilistic theories has been incorrectly formulated. The correct criterion requires symmetry between future-directed laws and past-directed laws. This criterion is formulated and proved in detail. The orthodox claim that quantum mechanics is reversible is re-evaluated. The property demonstrated in the orthodox analysis is shown to be quite distinct from time reversal invariance. The view of Satosi Watanabe that quantum mechanics is time asymmetric is verified, as well as his view that this feature does not merely show a de facto or 'contingent' asymmetry, as commonly supposed, but implies a genuine failure of time reversal invariance of the laws of quantum mechanics. The laws of quantum mechanics would be incompatible with a time-reversed version of our universe.

  14. Information theoretic analysis of linear shift-invariant edge-detection operators

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2012-06-01

Generally, the design of digital image processing algorithms and that of image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the influence of the image gathering process. However, experiments show that the image gathering process has a profound impact on the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, in which the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. We perform an end-to-end, information-theory-based system analysis to assess linear shift-invariant edge-detection algorithms. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and the parameters, such as sampling, additive noise, etc., that define the image gathering system. The edge-detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches its maximum possible value. This goal can be achieved only by jointly optimizing all processes. Our information-theoretic assessment provides a new tool that allows us to compare different linear shift-invariant edge detectors in a common environment.
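As a minimal illustration of the "linear shift-invariant" premise (not the paper's assessment code): such a detector is a single convolution with a fixed kernel, so shifting the scene shifts the edge map identically. The Sobel kernel below is one standard example of an LSI edge operator.

```python
import numpy as np

# Sobel horizontal-gradient kernel (responds to vertical edges).
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def convolve2d_valid(img, kernel):
    """'Valid'-mode 2-D convolution (kernel flipped, no padding)."""
    kh, kw = kernel.shape
    h, w = img.shape
    kf = kernel[::-1, ::-1]                 # flip for true convolution
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kf)
    return out

img = np.zeros((8, 8))
img[:, 4:] = 1.0                            # vertical step edge between cols 3 and 4
edges = convolve2d_valid(img, SOBEL_X)      # response localized at the edge
```

Shifting the step edge one column to the right shifts the response map by exactly one column, the defining property any LSI operator shares regardless of its kernel.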

  15. Stationary wavelet transform for under-sampled MRI reconstruction.

    PubMed

    Kayvanrad, Mohammad H; McLeod, A Jonathan; Baxter, John S H; McKenzie, Charles A; Peters, Terry M

    2014-12-01

    In addition to coil sensitivity data (parallel imaging), sparsity constraints are often used as an additional lp-penalty for under-sampled MRI reconstruction (compressed sensing). Penalizing the traditional decimated wavelet transform (DWT) coefficients, however, results in visual pseudo-Gibbs artifacts, some of which are attributed to the lack of translation invariance of the wavelet basis. We show that these artifacts can be greatly reduced by penalizing the translation-invariant stationary wavelet transform (SWT) coefficients. This holds with various additional reconstruction constraints, including coil sensitivity profiles and total variation. Additionally, SWT reconstructions result in lower error values and faster convergence compared to DWT. These concepts are illustrated with extensive experiments on in vivo MRI data with particular emphasis on multiple-channel acquisitions. Copyright © 2014 Elsevier Inc. All rights reserved.
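The translation-invariance point can be made concrete with a two-line undecimated Haar transform (a sketch of the general mechanism, not the reconstruction used in the paper): without decimation, shifting the signal simply shifts the coefficients, whereas the decimated DWT does not commute with shifts.

```python
import numpy as np

# Level-1 Haar transforms with circular boundary handling.
def haar_swt1(x):
    """Stationary (undecimated) transform: coefficient arrays keep the
    length of x, so a circular shift of x shifts them identically."""
    xs = np.roll(x, -1)
    return (x + xs) / 2.0, (x - xs) / 2.0      # approximation, detail

def haar_dwt1(x):
    """Decimated transform: keeping every other coefficient breaks
    translation invariance for odd shifts."""
    a, d = haar_swt1(x)
    return a[::2], d[::2]

x = np.random.default_rng(0).standard_normal(16)
aA, aD = haar_swt1(x)
bA, bD = haar_swt1(np.roll(x, 2))              # shifted input
# SWT coefficients of the shifted signal are just shifted coefficients.
```

This commuting-with-shifts property is why penalizing SWT coefficients avoids the pseudo-Gibbs artifacts that the abstract attributes to the DWT's lack of translation invariance.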

  16. Dissociative Global and Local Task-Switching Costs Across Younger Adults, Middle-Aged Adults, Older Adults, and Very Mild Alzheimer Disease Individuals

    PubMed Central

    Huff, Mark J.; Balota, David A.; Minear, Meredith; Aschenbrenner, Andrew J.; Duchek, Janet M.

    2015-01-01

A task-switching paradigm was used to examine differences in attentional control across younger adults, middle-aged adults, healthy older adults, and individuals classified in the earliest detectable stage of Alzheimer's disease (AD). A large sample of participants (570) completed a switching task in which they were cued to classify either the letter (consonant/vowel) or the number (odd/even) dimension of a bivalent stimulus (e.g., A 14). A Pure block consisting of single-task trials and a Switch block consisting of nonswitch and switch trials were completed. Local (switch vs. nonswitch trials) and global (nonswitch vs. pure trials) costs in mean error rates, mean response latencies, and underlying reaction time distributions, along with stimulus-response congruency effects, were computed. Local costs in errors were group invariant, but global costs in errors systematically increased as a function of age and AD. Response latencies yielded a strong dissociation: local costs decreased across groups, whereas global costs increased across groups. Vincentile distribution analyses revealed that the dissociation of local and global costs primarily occurred in the slowest response latencies. Stimulus-response congruency effects within the Switch block were particularly robust in accuracy in the very mild AD group. We argue that the results are consistent with the notion that the impaired groups show a reduced local cost because the task sets are not as well tuned, and hence produce minimal cost on switch trials. In contrast, global costs increase because of the additional burden on working memory of maintaining two task sets. PMID:26652720

  17. Mass spectrometric approaches to detecting prions and protein conformers

    USDA-ARS?s Scientific Manuscript database

    Transmissible spongiform encephalopathies (TSEs) can cause substantial economic damage to agriculture. These diseases have characteristically long incubation periods, comparatively short symptomatic intervals, and are invariably fatal. Early detection is important in controlling these diseases. Howe...

  18. Four and Five-body non-local correlations in pure and mixed states

    NASA Astrophysics Data System (ADS)

    Sharma, Santosh Shelly; Sharma, Naresh Kumar

    2014-03-01

In our earlier works, quantifiers of four- and three-body correlations based on four-qubit invariants were constructed for pure states. The principal construction tools, local unitary invariance and the notion of negativity fonts, make it possible to outline the process of selective construction of meaningful invariants that quantify N- and (N - 1)-qubit correlations. It is found that, in general, starting from degree-k invariants relevant to the detection and quantification of a specific type of non-local quantum correlation in an (N - 1)-qubit (N > 2) system, one can construct degree-k coefficients of an N-qubit bilinear form. When k = 2N - 2 (N > 2), one of the invariants of degree 2N - 1 quantifies N-body non-local correlations. The process is recursive. While for few-body systems it yields analytical expressions in terms of functions of state coefficients, for larger systems it can be the guiding principle for numerical calculations of invariants. To illustrate the process, an expression for a five-qubit correlation quantifier for pure states is constructed. In addition, the extension to specific rank-two mixed states through convex-roof extension is investigated. We gratefully acknowledge financial support from CNPq, Brazil, and Fundacao Araucaria, PR, Brazil.

  19. High Energy Astrophysics Tests of Lorentz Invariance and Quantum Gravity Models

    NASA Technical Reports Server (NTRS)

    Stecker, F. W.

    2011-01-01

High energy astrophysics observations provide the best possibilities to detect a very small violation of Lorentz invariance such as may be related to the structure of space-time near the Planck scale of approximately 10(exp -35) m. I will discuss the possible signatures of Lorentz invariance violation (LIV) that can be manifested in observations of the spectra, polarization, and timing of gamma-rays from active galactic nuclei and gamma-ray bursts. Other sensitive tests are provided by observations of the spectra of ultrahigh energy cosmic rays and neutrinos. Using the latest data from the Pierre Auger Observatory, one can already derive an upper limit of 4.5 x 10(exp -23) on the fraction of LIV at a Lorentz factor of approximately 2 x 10(exp 11). This result has fundamental implications for quantum gravity models. I will also discuss the possibilities of using more sensitive space-based detection techniques to improve searches for LIV in the future.

  20. Gamma-Ray, Cosmic Ray and Neutrino Tests of Lorentz Invariance and Quantum Gravity Models

    NASA Technical Reports Server (NTRS)

    Stecker, Floyd

    2011-01-01

High-energy astrophysics observations provide the best possibilities to detect a very small violation of Lorentz invariance such as may be related to the structure of space-time near the Planck scale of approximately 10(exp -35) m. I will discuss here the possible signatures of Lorentz invariance violation (LIV) from observations of the spectra, polarization, and timing of gamma-rays from active galactic nuclei and gamma-ray bursts. Other sensitive tests are provided by observations of the spectra of ultrahigh energy cosmic rays and neutrinos. Using the latest data from the Pierre Auger Observatory, one can already derive an upper limit of 4.5 x 10(exp -23) on the amount of LIV at a proton Lorentz factor of approximately 2 x 10(exp 11). This result has fundamental implications for quantum gravity models. I will also discuss the possibilities of using more sensitive space-based detection techniques to improve searches for LIV in the future.

  1. Arabic sign language recognition based on HOG descriptor

    NASA Astrophysics Data System (ADS)

    Ben Jmaa, Ahmed; Mahdi, Walid; Ben Jemaa, Yousra; Ben Hamadou, Abdelmajid

    2017-02-01

We present in this paper a new approach for Arabic sign language (ArSL) alphabet recognition using hand gesture analysis. This analysis consists of extracting histogram of oriented gradient (HOG) features from a hand image and then using them to train SVM models, which are used to recognize the ArSL alphabet in real time from hand gestures captured by a Microsoft Kinect camera. Our approach involves three steps: (i) hand detection and localization using a Microsoft Kinect camera, (ii) hand segmentation, and (iii) feature extraction for Arabic alphabet recognition. On each input image, first obtained using a depth sensor, we apply our method based on hand anatomy to segment the hand and eliminate all error pixels. This approach is invariant to scale, rotation, and translation of the hand. Experimental results show the effectiveness of our new approach: the proposed ArSL system is able to recognize the ArSL alphabet with an accuracy of 90.12%.

  2. Permutation coding technique for image recognition systems.

    PubMed

    Kussul, Ernst M; Baidyk, Tatiana N; Wunsch, Donald C; Makeyev, Oleksandr; Martín, Anabel

    2006-11-01

A feature extractor and neural classifier for image recognition systems are proposed. The proposed feature extractor is based on the concept of random local descriptors (RLDs). It is followed by an encoder based on the permutation coding technique, which makes it possible to take into account not only the detected features but also the position of each feature in the image, and to make the recognition process invariant to small displacements. The combination of RLDs and permutation coding permits us to obtain a sufficiently general description of the image to be recognized. The code generated by the encoder is used as input data for the neural classifier. Different types of images were used to test the proposed image recognition system. It was tested on the handwritten digit recognition problem, the face recognition problem, and the micro-object shape recognition problem. The results of testing are very promising. The error rate for the Modified National Institute of Standards and Technology (MNIST) database is 0.44%, and for the Olivetti Research Laboratory (ORL) database it is 0.1%.

  3. Data series embedding and scale invariant statistics.

    PubMed

    Michieli, I; Medved, B; Ristov, S

    2010-06-01

Data sequences acquired from bio-systems such as human gait data, heart rate interbeat data, or DNA sequences exhibit complex dynamics that is frequently described by a long-memory or power-law decay of the autocorrelation function. One way of characterizing that dynamics is through scale-invariant statistics or "fractal-like" behavior. Several methods have been proposed for quantifying the scale-invariant parameters of physiological signals. Among them the most common are detrended fluctuation analysis, sample mean variance analyses, power spectral density analysis, R/S analysis, and, recently, in the realm of the multifractal approach, wavelet analysis. In this paper it is demonstrated that embedding the time series data in a high-dimensional pseudo-phase space reveals scale-invariant statistics in a simple fashion. The procedure is applied to different stride-interval data sets from human gait measurement time series (PhysioBank data library). Results show that the introduced mapping adequately separates long-memory from random behavior. Smaller gait data sets were analyzed, and scale-free trends over limited scale intervals were successfully detected. The method was verified on artificially produced time series with known scaling behavior and with varying content of noise. The possibility of the method falsely detecting long-range dependence in artificially generated short-range-dependence series was investigated. (c) 2009 Elsevier B.V. All rights reserved.
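As a concrete companion to the list of estimators above, here is a minimal detrended fluctuation analysis (DFA); the scale grid, linear (DFA1) detrending, and test signal are our illustrative choices, not the paper's procedure.

```python
import numpy as np

def dfa_exponent(x, scales):
    """Minimal DFA: the scaling exponent alpha is the slope of
    log F(s) versus log s. Roughly, alpha ~ 0.5 for uncorrelated noise
    and alpha ~ 1.0 for 1/f-type long-memory signals."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        var = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
               for seg in segs]                # variance after linear detrend
        F.append(np.sqrt(np.mean(var)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(42)
alpha = dfa_exponent(rng.standard_normal(8192), [16, 32, 64, 128, 256])
```

For white noise the estimated exponent sits near 0.5 (with a small finite-size bias at the shortest scales), which is the baseline against which long-memory behavior is detected.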

  4. Is comprehension necessary for error detection? A conflict-based account of monitoring in speech production

    PubMed Central

    Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.

    2011-01-01

Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the double dissociation between comprehension and error-detection ability observed in aphasic patients. We propose a new theory of speech-error detection which is instead based on the production process itself. The theory borrows from studies of forced-choice-response tasks the notion that error detection is accomplished by monitoring response conflict via a frontal brain structure, such as the anterior cingulate cortex. We adapt this idea to the two-step model of word production and test the model-derived predictions on a sample of aphasic patients. Our results show a strong correlation between patients’ error-detection ability and the model’s characterization of their production skills, and no significant correlation between error detection and comprehension measures, thus supporting a production-based monitor generally, and the implemented conflict-based monitor in particular. The successful application of the conflict-based theory to error detection in linguistic as well as non-linguistic domains points to a domain-general monitoring system. PMID:21652015

  5. CMB in the river frame and gauge invariance at second order

    NASA Astrophysics Data System (ADS)

    Roldan, Omar

    2018-03-01

Gauge invariance: the Sachs-Wolfe formula describing the Cosmic Microwave Background (CMB) temperature anisotropies is one of the most important relations in cosmology. Despite its importance, the gauge invariance of this formula has only been discussed at first order. Here we discuss the subtle issue of second-order gauge transformations on the CMB. By introducing two rules (needed to handle the subtle issues), we prove the gauge invariance of the second-order Sachs-Wolfe formula and provide several compact expressions which can be useful for the study of gauge transformations in cosmology. Our results go beyond a simple technicality: we discuss from a physical point of view several aspects that improve our understanding of the CMB. We also elucidate how crucial it is to understand gauge transformations on the CMB in order to avoid errors and/or misconceptions, as has occurred in the past. The river frame: we introduce a cosmological frame which we call the river frame. In this frame, photons and any other object can be thought of as fish swimming in the river, and relations are easily expressed in either the metric or the covariant formalism, ensuring a transparent geometric meaning. Finally, our results show that the river frame is useful for both perturbative and non-perturbative analysis. In particular, it was already used to obtain the fully nonlinear generalization of the Sachs-Wolfe formula and is used here to describe second-order perturbations.

  6. Signature detection and matching for document image retrieval.

    PubMed

    Zhu, Guangyu; Zheng, Yefeng; Doermann, David; Jaeger, Stefan

    2009-11-01

As one of the most pervasive methods of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. However, detection and segmentation of free-form objects such as signatures from cluttered backgrounds is currently an open document analysis problem. In this paper, we focus on two fundamental problems in signature-based document image retrieval. First, we propose a novel multiscale approach to jointly detecting and segmenting signatures from document images. Rather than focusing on local features that typically have large variations, our approach captures the structural saliency using a signature production model and computes the dynamic curvature of 2D contour fragments over multiple scales. This detection framework is general and computationally tractable. Second, we treat the problem of signature retrieval in the unconstrained setting of translation, scale, and rotation invariant nonrigid shape matching. We propose two novel measures of shape dissimilarity based on anisotropic scaling and registration residual error and present a supervised learning framework for combining complementary shape information from different dissimilarity metrics using LDA. We quantitatively study state-of-the-art shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple instances as query in document image retrieval. We further demonstrate our matching techniques in offline signature verification. Extensive experiments using large real-world collections of English and Arabic machine-printed and handwritten documents demonstrate the excellent performance of our approaches.

  7. Error Type and Lexical Frequency Effects: Error Detection in Swedish Children with Language Impairment

    ERIC Educational Resources Information Center

    Hallin, Anna Eva; Reuterskiöld, Christina

    2017-01-01

    Purpose: The first aim of this study was to investigate if Swedish-speaking school-age children with language impairment (LI) show specific morphosyntactic vulnerabilities in error detection. The second aim was to investigate the effects of lexical frequency on error detection, an overlooked aspect of previous error detection studies. Method:…

  8. Analysis of the impact of error detection on computer performance

    NASA Technical Reports Server (NTRS)

    Shin, K. C.; Lee, Y. H.

    1983-01-01

    Conventionally, reliability analyses either assume that a fault/error is detected immediately following its occurrence, or neglect damages caused by latent errors. Though unrealistic, this assumption was imposed in order to avoid the difficulty of determining the respective probabilities that a fault induces an error and the error is then detected in a random amount of time after its occurrence. As a remedy for this problem a model is proposed to analyze the impact of error detection on computer performance under moderate assumptions. Error latency, the time interval between occurrence and the moment of detection, is used to measure the effectiveness of a detection mechanism. This model is used to: (1) predict the probability of producing an unreliable result, and (2) estimate the loss of computation due to fault and/or error.
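The notion of error latency above lends itself to a small numerical check: if an error occurs at a uniformly random time during a task of length T and the detection mechanism finds it after an exponentially distributed latency, the probability that the task completes with the error still undetected has the closed form (1 - e^(-λT))/(λT). The sketch below compares a Monte Carlo estimate against that closed form; the parameter values are illustrative, not taken from the paper:

```python
import math
import random

def undetected_prob_mc(lam, T, trials=100_000, seed=0):
    """Monte Carlo: error occurs at Uniform(0, T); detection latency is
    Exponential(lam). Count runs where detection misses the deadline T."""
    rng = random.Random(seed)
    misses = 0
    for _ in range(trials):
        t_error = rng.uniform(0.0, T)
        latency = rng.expovariate(lam)
        if t_error + latency > T:   # error still latent when the task ends
            misses += 1
    return misses / trials

def undetected_prob_exact(lam, T):
    """Closed form: E[exp(-lam * (T - t))] for t ~ Uniform(0, T)."""
    return (1.0 - math.exp(-lam * T)) / (lam * T)

est = undetected_prob_mc(1.0, 2.0)
exact = undetected_prob_exact(1.0, 2.0)
print(round(exact, 4))  # 0.4323
```

The faster the detection mechanism (larger λ) relative to the task length, the smaller this probability, which is exactly the trade-off the latency model quantifies.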

  9. Testing For Measurement Invariance of Attachment Across Chinese and American Adolescent Samples.

    PubMed

    Ren, Ling; Zhao, Jihong Solomon; He, Ni Phil; Marshall, Ineke Haen; Zhang, Hongwei; Zhao, Ruohui; Jin, Cheng

    2016-06-01

    Adolescent attachment to formal and informal institutions has emerged as a major focus of criminological theories since the publication of Hirschi's work in 1969. This study attempts to examine the psychometric equivalence of the factorial structure of attachment measures across nations reflecting Western and Eastern cultures. Twelve manifest variables are used, tapping the concepts of adolescent attachment to parents, school, and neighborhood. Confirmatory factor analysis is used to conduct invariance tests across approximately 3,000 Chinese and U.S. adolescents. Results provide strong support for a three-factor model; the multigroup invariance tests reveal mixed results. While the family attachment measure appears invariant between the two samples, significant differences in the coefficients of the factor loadings are detected in the school attachment and neighborhood attachment measures. The results of regression analyses lend support to the predictive validity of the three types of attachment. Finally, the limitations of the study are discussed. © The Author(s) 2015.
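Multigroup invariance testing of the kind described above typically compares nested CFA models with a chi-square difference test: constrain loadings to be equal across groups, and check whether the fit worsens significantly. A minimal sketch of that final comparison step, assuming the fit statistics of the constrained and free models are already in hand (the numbers below are illustrative, not from this study):

```python
import math

def chi2_sf(x, df):
    """Survival function P(X > x) of a chi-square variable.
    Closed form, valid for even df only (enough for this sketch)."""
    assert df % 2 == 0, "closed form requires even df"
    term, total = 1.0, 0.0
    for i in range(df // 2):
        if i > 0:
            term *= (x / 2.0) / i
        total += term
    return math.exp(-x / 2.0) * total

def invariance_test(chi2_constrained, df_constrained, chi2_free, df_free,
                    alpha=0.05):
    """Chi-square difference test between nested multigroup CFA models.
    Returns (delta chi2, delta df, p, invariance_holds)."""
    d_chi2 = chi2_constrained - chi2_free
    d_df = df_constrained - df_free
    p = chi2_sf(d_chi2, d_df)
    return d_chi2, d_df, p, p >= alpha

# Illustrative: constraining 4 loadings raises chi-square by 10.0.
d_chi2, d_df, p, invariant = invariance_test(110.0, 52, 100.0, 48)
print(d_chi2, d_df, round(p, 4), invariant)  # 10.0 4 0.0404 False
```

A significant Δχ² (here p ≈ 0.04 < .05) is the signal, as in the school and neighborhood attachment measures above, that full invariance does not hold and only partial invariance can be claimed.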

  10. Automatic-repeat-request error control schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.; Miller, M. J.

    1983-01-01

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
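The stop-and-wait flavor of ARQ surveyed above can be illustrated compactly: each frame carries a checksum, the receiver accepts only frames whose checksum verifies, and the sender retransmits otherwise. This is a toy model, with CRC-32 standing in for the error-detecting block code and an invented single-bit-flip channel:

```python
import zlib

def make_frame(payload: bytes) -> bytes:
    """Append a CRC-32 checksum so the receiver can detect corruption."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def verify_frame(frame: bytes):
    """Return the payload if the checksum matches, else None."""
    payload, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    return payload if zlib.crc32(payload) == crc else None

def send_with_arq(payload: bytes, channel, max_tries=5) -> int:
    """Stop-and-wait ARQ: retransmit until the receiver's check passes.
    Returns the number of transmissions that were needed."""
    for attempt in range(1, max_tries + 1):
        received = channel(make_frame(payload))
        if verify_frame(received) is not None:
            return attempt
    raise RuntimeError("retry limit exceeded")

# A channel that flips one bit on the first transmission only.
state = {"first": True}
def flaky_channel(frame: bytes) -> bytes:
    if state["first"]:
        state["first"] = False
        return bytes([frame[0] ^ 0x01]) + frame[1:]
    return frame

tries = send_with_arq(b"hello", flaky_channel)
print(tries)  # 2: first copy corrupted and rejected, second accepted
```

Hybrid ARQ schemes extend this idea by also correcting some errors at the receiver, retransmitting only when correction fails.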

  11. What errors do peer reviewers detect, and does training improve their ability to detect them?

    PubMed

    Schroter, Sara; Black, Nick; Evans, Stephen; Godlee, Fiona; Osorio, Lyda; Smith, Richard

    2008-10-01

    To analyse data from a trial and report the frequencies with which major and minor errors are detected at a general medical journal, the types of errors missed, and the impact of training on error detection. 607 peer reviewers at the BMJ were randomized to two intervention groups receiving different types of training (face-to-face training or a self-taught package) and a control group. Each reviewer was sent the same three test papers over the study period, each of which had nine major and five minor methodological errors inserted. BMJ peer reviewers. The quality of review, assessed using a validated instrument, and the number and type of errors detected before and after training. The number of major errors detected varied over the three papers. The interventions had small effects. At baseline (Paper 1) reviewers found an average of 2.58 of the nine major errors, with no notable difference between the groups. The mean number of errors reported was similar for the second and third papers, 2.71 and 3.0, respectively. Biased randomization was the error detected most frequently in all three papers, with over 60% of the reviewers who rejected the papers identifying this error. Reviewers who did not reject the papers found fewer errors, and the proportion finding biased randomization was less than 40% for each paper. Editors should not assume that reviewers will detect most major errors, particularly those concerned with the context of the study. Short training packages have only a slight impact on improving error detection.

  12. Energy functions for regularization algorithms

    NASA Technical Reports Server (NTRS)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to yield acceptable solutions these energies must satisfy certain properties, such as invariance under Euclidean transformations or invariance with respect to parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.

  13. Adaptive reconfigurable V-BLAST type equalizer for cognitive MIMO-OFDM radios

    NASA Astrophysics Data System (ADS)

    Ozden, Mehmet Tahir

    2015-12-01

    An adaptive channel shortening equalizer design for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) radio receivers is considered in this presentation. The proposed receiver has desirable features for cognitive and software defined radio implementations. It consists of two sections: a MIMO decision feedback equalizer (MIMO-DFE) and adaptive multiple Viterbi detection. In the MIMO-DFE section, a complete modified Gram-Schmidt orthogonalization of multichannel input data is accomplished using sequential processing multichannel Givens lattice stages, so that a Vertical Bell Laboratories Layered Space Time (V-BLAST) type MIMO-DFE is realized at the front-end section of the channel shortening equalizer. Matrix operations, a major bottleneck for receiver operations, are accordingly avoided, and only scalar operations are used. A highly modular and regular radio receiver architecture is achieved, with a structure suitable for digital signal processing (DSP) chip and field-programmable gate array (FPGA) implementations, which are important for software defined radio realizations. The MIMO-DFE section of the proposed receiver can also be reconfigured for spectrum sensing and positioning functions, which are important tasks for cognitive radio applications. In connection with the adaptive multiple Viterbi detection section, a systolic array implementation for each channel is performed so that a receiver architecture with high computational concurrency is attained. The total computational complexity is given in terms of equalizer and desired response filter lengths, alphabet size, and number of antennas. The performance of the proposed receiver is presented for the two-channel case by means of mean squared error (MSE) and probability of error evaluations, which are conducted for time-invariant and time-variant channel conditions, orthogonal and nonorthogonal transmissions, and two different modulation schemes.

  14. Parity in knot theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manturov, Vassily O

    2010-06-29

    In this work we study knot theories with a parity property for crossings: every crossing is declared to be even or odd according to a certain preassigned rule. If this rule satisfies a set of simple axioms related to the Reidemeister moves, then certain simple invariants solving the minimality problem can be defined, and invariant maps on the set of knots can be constructed. The most important example of a knot theory with parity is the theory of virtual knots. Using the parity property arising from Gauss diagrams we show that even a gross simplification of the theory of virtual knots, namely, the theory of free knots, admits simple and highly nontrivial invariants. This gives a solution to a problem of Turaev, who conjectured that all free knots are trivial. In this work we show that free knots are generally not invertible, and provide invariants which detect the invertibility of free knots. The passage to ordinary virtual knots allows us to strengthen known invariants (such as the Kauffman bracket) using parity considerations. We also discuss other examples of knot theories with parity. Bibliography: 27 items.

  15. Modular invariant inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, Tatsuo; Nitta, Daisuke; Urakawa, Yuko

    2016-08-08

    Modular invariance is a striking symmetry in string theory, which may keep stringy corrections under control. In this paper, we investigate a phenomenological consequence of the modular invariance, assuming that this symmetry is preserved in the four-dimensional (4D) low energy effective field theory as well. As a concrete setup, we consider a modulus field T whose contribution to the 4D effective field theory remains invariant under the modular transformation, and study inflation driven by T. The modular invariance restricts the possible form of the scalar potential. As a result, large field models of inflation are hardly realized. Meanwhile, a small field model of inflation can still be accommodated in this restricted setup. The scalar potential traced during the slow-roll inflation mimics the hilltop potential V_ht, but it also has a non-negligible deviation from V_ht. Detecting the primordial gravitational waves predicted in this model is rather challenging. Yet, we argue that it may still be possible to falsify this model by combining the information in the reheating process, which can be determined self-completely in this setup.

  16. Design considerations for case series models with exposure onset measurement error.

    PubMed

    Mohammed, Sandra M; Dalrymple, Lorien S; Sentürk, Damla; Nguyen, Danh V

    2013-02-28

    The case series model allows for estimation of the relative incidence of events, such as cardiovascular events, within a pre-specified time window after an exposure, such as an infection. The method requires only cases (individuals with events) and controls for all fixed/time-invariant confounders. The measurement error case series model extends the original case series model to handle imperfect data, where the timing of an infection (exposure) is not known precisely. In this work, we propose a method for power/sample size determination for the measurement error case series model. Extensive simulation studies are used to assess the accuracy of the proposed sample size formulas. We also examine the magnitude of the relative loss of power due to exposure onset measurement error, compared with the ideal situation where the time of exposure is measured precisely. To facilitate the design of case series studies, we provide publicly available web-based tools for determining power/sample size for both the measurement error case series model as well as the standard case series model. Copyright © 2012 John Wiley & Sons, Ltd.

  17. Photoacoustic infrared spectroscopy for conducting gas tracer tests and measuring water saturations in landfills.

    PubMed

    Jung, Yoojin; Han, Byunghyun; Mostafid, M Erfan; Chiu, Pei; Yazdani, Ramin; Imhoff, Paul T

    2012-02-01

    Gas tracer tests can be used to determine gas flow patterns within landfills, quantify volatile contaminant residence time, and measure water within refuse. While gas chromatography (GC) has been traditionally used to analyze gas tracers in refuse, photoacoustic spectroscopy (PAS) might allow real-time measurements with reduced personnel costs and greater mobility and ease of use. Laboratory and field experiments were conducted to evaluate the efficacy of PAS for conducting gas tracer tests in landfills. Two tracer gases, difluoromethane (DFM) and sulfur hexafluoride (SF(6)), were measured with a commercial PAS instrument. Relative measurement errors were invariant with tracer concentration but influenced by background gas: errors were 1-3% in landfill gas but 4-5% in air. Two partitioning gas tracer tests were conducted in an aerobic landfill, and limits of detection (LODs) were 3-4 times larger for DFM with PAS versus GC due to temporal changes in background signals. While higher LODs can be compensated by injecting larger tracer mass, changes in background signals increased the uncertainty in measured water saturations by up to 25% over comparable GC methods. PAS has distinct advantages over GC with respect to personnel costs and ease of use, although for field applications GC analyses of select samples are recommended to quantify instrument interferences. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Chirp-Z analysis for sol-gel transition monitoring.

    PubMed

    Martinez, Loïc; Caplain, Emmanuel; Serfaty, Stéphane; Griesmar, Pascal; Gouedard, Gérard; Gindre, Marcel

    2004-04-01

    Gelation is a complex reaction that transforms a liquid medium into a solid one: the gel. In the gel state, some gel materials (DMAP) have the singular property of ringing in an audible frequency range when a pulse is applied. Before the gelation point, no transmission of slow waves is observed; after the gelation point, the speed of sound in the gel rapidly increases from 0.1 to 10 m/s. The time evolution of the speed of sound can be measured, in the frequency domain, by following the frequency spacing of the resonance peaks with the Synchronous Detection (SD) measurement method. Unfortunately, due to a constant frequency sampling rate, the relative error for low speeds (0.1 m/s) is 100%. In order to maintain a low, constant relative error over the whole speed time evolution range, the Chirp-Z Transform (CZT) is used. This operation transforms a time-variant signal into a time-invariant one using only a time-dependent stretching factor (S). In the frequency domain, the CZT enables us to stretch each collected spectrum from time signals. The blind identification of the S factor gives us the complete time evolution law of the speed of sound. Moreover, this method proves that the frequency bandwidth follows the same time law. These results point out that the minimum wavelength stays constant and that it depends only on the gel.
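The Chirp-Z transform underlying this approach evaluates the z-transform on an arbitrary arc or spiral of the z-plane, which is what allows a spectrum to be "stretched" by a tunable factor; choosing the spacing parameter differently per acquisition plays the role of the stretching factor S. A minimal direct O(NM) sketch (not the fast Bluestein implementation), with default parameters chosen so it reduces to the DFT:

```python
import cmath

def czt(x, m=None, w=None, a=1.0):
    """Direct Chirp-Z transform: X_k = sum_n x[n] * a^(-n) * w^(n*k),
    for k = 0..m-1. With a = 1 and w = exp(-2j*pi/N) it is the DFT."""
    n = len(x)
    if m is None:
        m = n
    if w is None:
        w = cmath.exp(-2j * cmath.pi / n)
    return [sum(x[i] * a ** (-i) * w ** (i * k) for i in range(n))
            for k in range(m)]

# Sanity check: the CZT of a constant signal reduces to its DFT.
X = czt([1.0, 1.0, 1.0, 1.0])
print([round(abs(v), 6) for v in X])  # [4.0, 0.0, 0.0, 0.0]
```

Replacing `w` by `exp(-2j*pi*S/N)` samples the spectrum on a finer or coarser frequency grid, which is the "spectrum stretching" the abstract exploits to keep the relative error of the peak-spacing measurement constant.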

  19. Self-recovery reversible image watermarking algorithm

    PubMed Central

    Sun, He; Gao, Shangbing; Jin, Shenghua

    2018-01-01

    The integrity of image content is essential, yet most watermarking algorithms can achieve image authentication but cannot automatically repair damaged areas or restore the original image. In this paper, a self-recovery reversible image watermarking algorithm is proposed to recover tampered areas effectively. First of all, the original image is divided into homogeneous blocks and non-homogeneous blocks through multi-scale decomposition, and the feature information of each block is calculated as the recovery watermark. Then, the original image is divided into 4×4 non-overlapping blocks classified into smooth blocks and texture blocks according to image textures. Finally, the recovery watermark generated by homogeneous blocks and error-correcting codes is embedded into the corresponding smooth block by mapping; watermark information generated by non-homogeneous blocks and error-correcting codes is embedded into the corresponding non-embedded smooth block and the texture block via mapping. Correlation attacks are detected using invariant moments when the watermarked image is attacked. To determine whether a sub-block has been tampered with, its feature is calculated and the recovery watermark is extracted from the corresponding block. If the image has been tampered with, it can be recovered. The experimental results show that the proposed algorithm can effectively recover the tampered areas with high accuracy and high quality. The algorithm is characterized by sound visual quality and excellent image restoration. PMID:29920528
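The attack-detection step above relies on moment invariants. A toy sketch of the first Hu moment (phi1 = eta20 + eta02, built from normalized central moments and invariant under translation, and in the continuous case under scale and rotation as well), computed for a binary image given as a set of pixel coordinates; the helper name is ours, not from the paper:

```python
def hu_phi1(pixels):
    """First Hu invariant phi1 = eta20 + eta02 for a binary image,
    where eta_pq = mu_pq / mu00^(1 + (p+q)/2) and mu_pq are central
    moments about the centroid."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    mu20 = sum((x - cx) ** 2 for x, _ in pixels)
    mu02 = sum((y - cy) ** 2 for _, y in pixels)
    mu00 = float(n)  # for a binary image, mu00 is just the pixel count
    return (mu20 + mu02) / mu00 ** 2

shape = {(x, y) for x in range(5) for y in range(3)}      # a 5x3 block
shifted = {(x + 7, y + 11) for x, y in shape}             # translated copy
print(hu_phi1(shape) == hu_phi1(shifted))  # True: unchanged by translation
```

Because central moments are taken about the centroid, shifting the whole shape leaves phi1 untouched, which is why such moments can flag geometric attacks on a watermarked image.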

  20. Practical robustness measures in multivariable control system analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Lehtomaki, N. A.

    1981-01-01

    The robustness of the stability of multivariable linear time-invariant feedback control systems with respect to model uncertainty is considered using frequency domain criteria. Available robustness tests are unified under a common framework based on the nature and structure of model errors. These results are derived using a multivariable version of Nyquist's stability theorem, in which the minimum singular value of the return difference transfer matrix is shown to be the multivariable generalization of the distance to the critical point on a single-input, single-output Nyquist diagram. Using the return difference transfer matrix, a very general robustness theorem is presented from which all of the robustness tests dealing with specific model errors may be derived. The robustness tests that explicitly utilize model error structure are able to guarantee feedback system stability in the face of model errors of larger magnitude than those tests that do not. The robustness of linear quadratic Gaussian control systems is also analyzed.

  1. Pedestrian detection in crowded scenes with the histogram of gradients principle

    NASA Astrophysics Data System (ADS)

    Sidla, O.; Rosner, M.; Lypetskyy, Y.

    2006-10-01

    This paper describes a close-to-real-time, scale-invariant implementation of a pedestrian detector system based on the Histogram of Oriented Gradients (HOG) principle. Salient HOG features are first selected from a manually created, very large database of samples with an evolutionary optimization procedure that directly trains a polynomial Support Vector Machine (SVM). Real-time operation is achieved by a cascaded 2-step classifier which first uses a very fast linear SVM (with the same features as the polynomial SVM) to reject most of the irrelevant detections and then computes the decision function with a polynomial SVM on the remaining set of candidate detections. Scale invariance is achieved by running the detector of constant size on scaled versions of the original input images and by clustering the results over all resolutions. The pedestrian detection system has been implemented in two versions: i) full-body detection, and ii) upper-body-only detection. The latter is especially suited for very busy and crowded scenarios. On a state-of-the-art PC it is able to run at a frequency of 8-20 frames/sec.
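The HOG principle the detector builds on can be shown in a few lines: compute image gradients, quantize their orientations into bins, and accumulate magnitude-weighted votes. A toy, pure-Python sketch of a single cell (unsigned orientations, no block normalization, border pixels skipped):

```python
import math

def hog_cell(img, n_bins=9):
    """Magnitude-weighted histogram of unsigned gradient orientations
    (0-180 degrees) over the interior pixels of a grayscale image,
    given as a list of rows of floats."""
    h, w = len(img), len(img[0])
    hist = [0.0] * n_bins
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = img[r][c + 1] - img[r][c - 1]   # central differences
            gy = img[r + 1][c] - img[r - 1][c]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang / 180.0 * n_bins) % n_bins] += mag
    return hist

# Vertical intensity ramp: the gradient points straight down the rows
# (90 degrees), so all the mass lands in the bin covering 80-100 degrees.
ramp = [[float(r)] * 8 for r in range(8)]
hist = hog_cell(ramp)
print(hist.index(max(hist)))  # 4
```

A full HOG descriptor concatenates many such cell histograms with overlapping block normalization; the cascade in the paper then feeds those descriptors to the linear and polynomial SVMs.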

  2. COMPI Fertility Problem Stress Scales is a brief, valid and reliable tool for assessing stress in patients seeking treatment.

    PubMed

    Sobral, Maria P; Costa, Maria E; Schmidt, Lone; Martins, Mariana V

    2017-02-01

    Are the Copenhagen Multi-Centre Psychosocial Infertility research program Fertility Problem Stress Scales (COMPI-FPSS) a reliable and valid measure across gender and culture? The COMPI-FPSS is a valid and reliable measure, presenting excellent or good fit in the majority of the analyzed countries, and demonstrating full invariance across genders and partial invariance across cultures. Cross-cultural and gender validation is needed to consider a measure as standard care within fertility. The present study is the first attempting to establish comparability of fertility-related stress across genders and countries. Cross-sectional study. First, we tested the structure of the COMPI-FPSS. Then, reliability and validity (convergent and discriminant) were examined for the final model. Finally, measurement invariance both across genders and cultures was tested. Our final sample had 3923 fertility patients (1691 men and 2232 women) recruited in clinical settings from seven different countries: Denmark, China, Croatia, Germany, Greece, Hungary and Sweden. Participants had a mean age of 34 years and the majority (84%) were childless. Findings confirmed the original three-factor structure of the COMPI-FPSS, although suggesting a shortened measurement model using fewer items that fitted the data better than the full-version model. While data from the Chinese and Croatian subsamples did not fit, all other countries presented good fit (χ²/df ≤ 5.4; comparative fit index ≥ 0.94; root-mean-square error of approximation ≤ 0.07; modified expected cross-validation index ≤ 0.77). In general, reliability, convergent validity, and discriminant validity were observed in all subscales from each country (composite reliability ≥ 0.63; average variance extracted ≥ 0.38; squared correlation ≥ 0.13). Full invariance was established across genders, and partial invariance was demonstrated across countries. The validation of the COMPI-FPSS cannot be generalized to infertile individuals not seeking treatment or to non-European patients. This study did not investigate predictive validity, and hence the capability of this instrument in detecting changes in fertility-specific adjustment over time and predicting the psychological impact needs to be established in future research. Besides extending knowledge on the psychometric properties of one of the most used fertility stress questionnaires, this study demonstrates both the research and clinical usefulness of the COMPI-FPSS. This study was supported by European Union Funds (FEDER/COMPETE-Operational Competitiveness Program) and by national funds (FCT-Portuguese Foundation for Science and Technology) under the projects PTDC/MHC-PSC/4195/2012 and SFRH/BPD/85789/2012. There are no conflicts of interest to declare. N/A. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  3. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits

    PubMed Central

    Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.

    2015-01-01

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200

  4. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits.

    PubMed

    Córcoles, A D; Magesan, Easwar; Srinivasan, Srikanth J; Cross, Andrew W; Steffen, M; Gambetta, Jay M; Chow, Jerry M

    2015-04-29

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code.

  5. Psychometric Properties of the Multidimensional Pain Inventory Applied to Brazilian Patients with Orofacial Pain.

    PubMed

    Zucoloto, Miriane Lucindo; Maroco, João; Duarte Bonini Campos, Juliana Alvares

    2015-01-01

    To evaluate the psychometric properties of the Multidimensional Pain Inventory (MPI) in a Brazilian sample of patients with orofacial pain. A total of 1,925 adult patients, who sought dental care in the School of Dentistry of São Paulo State University's Araraquara campus, were invited to participate; 62.5% (n=1,203) agreed to participate. Of these, 436 presented with orofacial pain and were included. The mean age was 39.9 (SD=13.6) years and 74.5% were female. Confirmatory factor analysis was conducted using χ²/df, comparative fit index, goodness of fit index, and root mean square error of approximation as indices of goodness of fit. Convergent validity was estimated by the average variance extracted and composite reliability, and internal consistency by Cronbach's alpha standardized coefficient (α). The stability of the models was tested in independent samples (test and validation; dental pain and orofacial pain). The factorial invariance was estimated by multigroup analysis (Δχ²). Factorial, convergent validity, and internal consistency were adequate in all three parts of the MPI. To achieve this adequate fit for Part 1, item 15 needed to be deleted (λ=0.13). Discriminant validity was compromised between the factors "activities outside the home" and "social activities" of Part 3 of the MPI in the total sample, validation sample, and in patients with dental pain and with orofacial pain. A strong invariance between different subsamples from the three parts of the MPI was detected. The MPI produced valid, reliable, and stable data for pain assessment among Brazilian patients with orofacial pain.

  6. Optimization of wavefront coding imaging system using heuristic algorithms

    NASA Astrophysics Data System (ADS)

    González-Amador, E.; Padilla-Vivanco, A.; Toxqui-Quitl, C.; Zermeño-Loreto, O.

    2017-08-01

    Wavefront Coding (WFC) systems make use of an aspheric Phase-Mask (PM) and digital image processing to extend the Depth of Field (EDoF) of computational imaging systems. For years, several kinds of PM have been designed to produce a point spread function (PSF) that is nearly defocus-invariant. In this paper, the optimization of the phase deviation parameter is done by means of genetic algorithms (GAs). Here, the merit function minimizes the mean square error (MSE) between the diffraction-limited Modulation Transfer Function (MTF) and the MTF of the wavefront-coded system at different amounts of misfocus. WFC systems were simulated using the cubic, trefoil, and 4-Zernike-polynomial phase-masks. Numerical results show defocus invariance in all cases. Nevertheless, the best results are obtained by using the trefoil phase-mask, because the decoded image is almost free of artifacts.

  7. Automorphic Forms and Mock Modular Forms in String Theory

    NASA Astrophysics Data System (ADS)

    Nazaroglu, Caner

    We study a variety of modular invariant objects in relation to string theory. First, we focus on Jacobi forms over generic rank lattices and Siegel forms that appear in N = 2, D = 4 compactifications of the heterotic string with Wilson lines. Constraints from the low energy spectrum and modularity are employed to deduce the relevant supersymmetric partition functions entirely. This procedure is applied to models that lead to Jacobi forms of index 3, 4, 5 as well as Jacobi forms over the root lattices A2 and A3. These computations are then checked against an explicit orbifold model which can be Higgsed to the models in question. Models with a single Wilson line are then studied in detail, with their relation to the paramodular group Gamma_m as T-duality group made explicit. These results on the heterotic string side are then turned into predictions for geometric invariants using Type II-heterotic duality. Secondly, we study theta functions for indefinite-signature lattices of generic signature. Building on results in the literature for signature (n-1,1) and (n-2,2) lattices, we work out the properties of generalized error functions which we call r-tuple error functions. We then use these functions to build such indefinite theta functions and describe their modular completions.

  8. Variable is better than invariable: sparse VSS-NLMS algorithms with application to adaptive MIMO channel estimation.

    PubMed

    Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki

    2014-01-01

    The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms were applied to adaptive sparse channel estimation (ASCE). It is well known that step-size is a critical parameter which controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced to the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are also derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, several selected simulation results are shown to prove that the proposed sparse VSS-NLMS algorithms can achieve better estimation performance than the conventional methods via mean square error (MSE) and bit error rate (BER) metrics.
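The NLMS core that both the fixed and variable step-size variants share is compact: normalize the LMS update by the regressor energy, and (for VSS) let the step size track the error. A toy sketch of noise-free channel identification, with a simple error-dependent step-size rule standing in for the paper's VSS schemes (the rule and all parameter values here are illustrative, not the authors'):

```python
import random

def nlms_identify(taps, n_samples=2000, mu0=0.5, eps=1e-8, seed=1):
    """Identify an FIR channel from input/output pairs with NLMS.
    The step size shrinks with the error magnitude (a toy VSS rule)."""
    rng = random.Random(seed)
    L = len(taps)
    x = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
    w = [0.0] * L                                       # channel estimate
    for n in range(L - 1, n_samples):
        xv = [x[n - i] for i in range(L)]               # current regressor
        d = sum(t * v for t, v in zip(taps, xv))        # noise-free output
        e = d - sum(wi * v for wi, v in zip(w, xv))     # a-priori error
        mu = mu0 * abs(e) / (abs(e) + 0.1)              # toy variable step
        norm = sum(v * v for v in xv) + eps             # NLMS normalization
        w = [wi + mu * e * v / norm for wi, v in zip(w, xv)]
    return w

w = nlms_identify([0.5, -0.3, 0.1])
print([round(wi, 2) for wi in w])  # converges to roughly [0.5, -0.3, 0.1]
```

A large step while the error is large gives fast convergence; a small step once the error is small gives low misadjustment, which is exactly the trade-off the ISS algorithms cannot resolve with a single fixed value.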

  9. Variable Is Better Than Invariable: Sparse VSS-NLMS Algorithms with Application to Adaptive MIMO Channel Estimation

    PubMed Central

    Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki

    2014-01-01

    The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms were applied to adaptive sparse channel estimation (ASCE). It is well known that step-size is a critical parameter which controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced to the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are also derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, several selected simulation results are shown to prove that the proposed sparse VSS-NLMS algorithms can achieve better estimation performance than the conventional methods via mean square error (MSE) and bit error rate (BER) metrics. PMID:25089286

  10. Are galaxy distributions scale invariant? A perspective from dynamical systems theory

    NASA Astrophysics Data System (ADS)

    McCauley, J. L.

    2002-06-01

    Unless there is evidence for fractal scaling with a single exponent over distances 0.1 <= r <= 100 h^-1 Mpc, the widely accepted notion of scale invariance of the correlation integral for 0.1 <= r <= 10 h^-1 Mpc must be questioned. The attempt to extract a scaling exponent ν from the correlation integral n(r) by plotting log(n(r)) vs. log(r) is unreliable unless the underlying point set is approximately monofractal. The extraction of a spectrum of generalized dimensions ν_q from a plot of the correlation integral generating function G_n(q) by a similar procedure is probably an indication that G_n(q) does not scale at all. We explain these assertions after defining the term multifractal, mutually inconsistent definitions having been confused together in the cosmology literature. Part of this confusion is traced to the confusion in interpreting a measure-theoretic formula written down by Hentschel and Procaccia in the dynamical systems theory literature, while other errors follow from confusing together entirely different definitions of multifractal from two different schools of thought. Most important are serious errors in data analysis that follow from taking for granted a largest-term approximation that is inevitably advertised in the literature on both fractals and dynamical systems theory.
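    The slope-extraction procedure criticized above can be made concrete for the one case where it is reliable, an approximately monofractal set. The sketch below (all parameter choices illustrative) estimates the correlation exponent of a middle-thirds Cantor set, whose correlation dimension is log 2 / log 3 ≈ 0.63:

```python
import numpy as np

def correlation_integral(points, radii):
    # fraction of point pairs separated by at most r
    d = np.abs(points[:, None] - points[None, :])
    pd = d[np.triu_indices(len(points), k=1)]
    return np.array([(pd <= r).mean() for r in radii])

def cantor(level):
    # middle-thirds Cantor set sampled at 2**level interval endpoints
    pts = np.array([0.0])
    for _ in range(level):
        pts = np.concatenate([pts / 3.0, pts / 3.0 + 2.0 / 3.0])
    return pts

pts = cantor(10)                        # 1024 points
radii = np.logspace(-3, -1, 10)
c = correlation_integral(pts, radii)
slope, _ = np.polyfit(np.log(radii), np.log(c), 1)  # scaling exponent
```

    For a multifractal or non-scaling point set the same log-log fit would still happily return a number, which is precisely the danger the abstract points out.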

  11. Deep Networks Can Resemble Human Feed-forward Vision in Invariant Object Recognition

    PubMed Central

    Kheradpisheh, Saeed Reza; Ghodrati, Masoud; Ganjtabesh, Mohammad; Masquelier, Timothée

    2016-01-01

    Deep convolutional neural networks (DCNNs) have attracted much attention recently, and have shown to be able to recognize thousands of object categories in natural image databases. Their architecture is somewhat similar to that of the human visual system: both use restricted receptive fields, and a hierarchy of layers which progressively extract more and more abstracted features. Yet it is unknown whether DCNNs match human performance at the task of view-invariant object recognition, whether they make similar errors and use similar representations for this task, and whether the answers depend on the magnitude of the viewpoint variations. To investigate these issues, we benchmarked eight state-of-the-art DCNNs, the HMAX model, and a baseline shallow model and compared their results to those of humans with backward masking. Unlike in all previous DCNN studies, we carefully controlled the magnitude of the viewpoint variations to demonstrate that shallow nets can outperform deep nets and humans when variations are weak. When facing larger variations, however, more layers were needed to match human performance and error distributions, and to have representations that are consistent with human behavior. A very deep net with 18 layers even outperformed humans at the highest variation level, using the most human-like representations. PMID:27601096

  12. Perceptual invariance of coarticulated vowels over variations in speaking rate.

    PubMed

    Stack, Janet W; Strange, Winifred; Jenkins, James J; Clarke, William D; Trent, Sonja A

    2006-04-01

    This study examined the perception and acoustics of a large corpus of vowels spoken in consonant-vowel-consonant syllables produced in citation-form (lists) and spoken in sentences at normal and rapid rates by a female adult. Listeners correctly categorized the speaking rate of sentence materials as normal or rapid (2% errors) but did not accurately classify the speaking rate of the syllables when they were excised from the sentences (25% errors). In contrast, listeners accurately identified the vowels produced in sentences spoken at both rates when presented the sentences and when presented the excised syllables blocked by speaking rate or randomized. Acoustical analysis showed that formant frequencies at syllable midpoint for vowels in sentence materials showed "target undershoot" relative to citation-form values, but little change over speech rate. Syllable durations varied systematically with vowel identity, speaking rate, and voicing of final consonant. Vowel-inherent-spectral-change was invariant in direction of change over rate and context for most vowels. The temporal location of maximum F1 frequency further differentiated spectrally adjacent lax and tense vowels. It was concluded that listeners were able to utilize these rate- and context-independent dynamic spectrotemporal parameters to identify coarticulated vowels, even when sentential information about speaking rate was not available.

  13. Integrated analysis of error detection and recovery

    NASA Technical Reports Server (NTRS)

    Shin, K. G.; Lee, Y. H.

    1985-01-01

    An integrated modeling and analysis of error detection and recovery is presented. When fault latency and/or error latency exist, the system may suffer from multiple faults or error propagations which seriously deteriorate the fault-tolerant capability. Several detection models that enable analysis of the effect of detection mechanisms on the subsequent error handling operations and the overall system reliability were developed. Following detection of the faulty unit and reconfiguration of the system, the contaminated processes or tasks have to be recovered. The strategies of error recovery employed depend on the detection mechanisms and the available redundancy. Several recovery methods including the rollback recovery are considered. The recovery overhead is evaluated as an index of the capabilities of the detection and reconfiguration mechanisms.

  14. New wideband radar target classification method based on neural learning and modified Euclidean metric

    NASA Astrophysics Data System (ADS)

    Jiang, Yicheng; Cheng, Ping; Ou, Yangkui

    2001-09-01

    A new method for target classification with high-range-resolution radar is proposed. It uses neural learning to obtain invariant subclass features of training range profiles. A modified Euclidean metric based on the Box-Cox transformation technique is investigated to improve nearest-neighbor target classification. Classification experiments using real radar data from three different aircraft have demonstrated that the classification error can be reduced by 8% when the method proposed in this paper is chosen instead of the conventional method. The results show that, by choosing an optimized metric, it is indeed possible to reduce the classification error without increasing the number of samples.
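    A Box-Cox-modified Euclidean metric of the kind investigated can be sketched as follows; applying the transform to shifted absolute feature differences, and the choice lam=0.5, are assumptions for illustration rather than the paper's exact formulation:

```python
import numpy as np

def box_cox(x, lam):
    # Box-Cox power transform for positive arguments
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def modified_distance(a, b, lam=0.5):
    # Euclidean metric computed on Box-Cox-transformed feature differences;
    # shifting by 1 keeps the transform zero for identical features
    diff = np.abs(np.asarray(a) - np.asarray(b))
    t = box_cox(1.0 + diff, lam)
    return float(np.sqrt(np.sum(t ** 2)))

def nn_classify(x, templates, labels, lam=0.5):
    # nearest-neighbor rule under the modified metric
    d = [modified_distance(x, t, lam) for t in templates]
    return labels[int(np.argmin(d))]

templates = [np.array([0.0, 0.0]), np.array([10.0, 10.0])]
labels = ['small', 'large']
```

    The transform compresses large feature differences, which reduces the influence of heavy-tailed outlier components on the nearest-neighbor decision.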

  15. Testing for scale-invariance in extreme events, with application to earthquake occurrence

    NASA Astrophysics Data System (ADS)

    Main, I.; Naylor, M.; Greenhough, J.; Touati, S.; Bell, A.; McCloskey, J.

    2009-04-01

    We address the generic problem of testing for scale-invariance in extreme events, i.e. are the biggest events in a population simply a scaled model of those of smaller size, or are they in some way different? Are large earthquakes for example ‘characteristic', do they ‘know' how big they will be before the event nucleates, or is the size of the event determined only in the avalanche-like process of rupture? In either case, what are the implications for estimates of time-dependent seismic hazard? One way of testing for departures from scale invariance is to examine the frequency-size statistics, commonly used as a benchmark in a number of applications in the Earth and environmental sciences. Using frequency data, however, introduces a number of problems in data analysis. The inevitably small number of data points for extreme events, and more generally the non-Gaussian statistical properties, strongly affect the validity of prior assumptions about the nature of uncertainties in the data. The simple use of traditional least squares (still common in the literature) introduces an inherent bias into the best-fit result. We show first that the sampled frequencies in finite real and synthetic data sets (the latter based on the Epidemic-Type Aftershock Sequence model) converge to a central limit only very slowly due to temporal correlations in the data. A specific correction for temporal correlations enables the convergence properties to be mapped non-linearly onto Gaussian ones. Uncertainties closely follow a Poisson distribution of errors across the whole range of seismic moment for typical catalogue sizes; in this sense the confidence limits are scale-invariant. A systematic sample-bias effect, due to counting whole numbers in a finite catalogue, makes a ‘characteristic'-looking extreme-event distribution a likely outcome of an underlying scale-invariant probability distribution. This highlights the tendency of ‘eyeball' fits to assume, unconsciously (but wrongly in this case), Gaussian errors. We develop methods to correct for these effects, and show that the current best-fit maximum likelihood regression model for the global frequency-moment distribution in the digital era is a power law, i.e. mega-earthquakes continue to follow the Gutenberg-Richter trend of smaller earthquakes with no (as yet) observable cut-off or characteristic extreme event. The results may also have implications for the interpretation of other time-limited geophysical time series that exhibit power-law scaling.
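    The maximum-likelihood alternative to biased least-squares fitting is, for a Gutenberg-Richter frequency-magnitude law, the classic Aki estimator. A sketch on a synthetic catalogue (catalogue size and seed are illustrative):

```python
import numpy as np

def aki_b_value(mags, m_min):
    # maximum-likelihood b-value (Aki, 1965); avoids the least-squares bias
    m = np.asarray(mags)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - m_min)

# synthetic Gutenberg-Richter catalogue with b = 1:
# magnitude excesses above m_min are exponential with scale log10(e)/b
rng = np.random.default_rng(1)
mags = 4.0 + rng.exponential(scale=np.log10(np.e), size=20000)
b_hat = aki_b_value(mags, 4.0)
```

    Unlike a least-squares fit to binned log-frequencies, this estimator needs no binning and weights every event equally, so the sparse extreme tail does not bias the slope.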

  16. Sensitivity in error detection of patient specific QA tools for IMRT plans

    NASA Astrophysics Data System (ADS)

    Lat, S. Z.; Suriyapee, S.; Sanghangthum, T.

    2016-03-01

    The high complexity of dose calculation in treatment planning and the need for accurate delivery of IMRT plans demand a high-precision verification method. The purpose of this study is to investigate the error detection capability of patient-specific QA tools for IMRT plans. Two H&N and two prostate IMRT plans were studied with the MapCHECK2 and portal dosimetry QA tools. Measurements were undertaken for the original plans and for modified plans with intentionally introduced errors, comprising prescribed dose errors (±2% to ±6%) and position shifts along the X-axis and Y-axis (±1 to ±5 mm). After measurement, gamma pass rates of the original and modified plans were compared. The average gamma pass rates for the original H&N and prostate plans were 98.3% and 100% for MapCHECK2, and 95.9% and 99.8% for portal dosimetry, respectively. In the H&N plans, MapCHECK2 detected position-shift errors starting from 3 mm while portal dosimetry detected them starting from 2 mm. Both devices showed similar sensitivity in detecting position-shift errors in the prostate plans. For the H&N plans, MapCHECK2 detected dose errors starting at ±4%, whereas portal dosimetry detected them from ±2%. For the prostate plans, both devices identified dose errors starting from ±4%. The sensitivity of error detection depends on the type of error and the plan complexity.
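    The gamma pass rates compared above combine a dose-difference criterion with a distance-to-agreement criterion. A minimal 1D global-gamma sketch at 3%/3 mm (a deliberate simplification, not the MapCHECK2 or portal dosimetry implementation):

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Fraction of measured points with 1D global gamma index <= 1."""
    x = np.arange(len(ref)) * spacing_mm
    norm = ref.max()                            # global normalization dose
    gammas = []
    for i, d_m in enumerate(meas):
        dd = (d_m - ref) / (dose_tol * norm)    # dose-difference term
        dx = (x[i] - x) / dist_tol_mm           # distance-to-agreement term
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return float(np.mean(np.array(gammas) <= 1.0))

# a Gaussian dose profile: an identical measurement passes fully, while a
# 10% dose error pushes points near the peak above gamma = 1
ref = np.exp(-((np.arange(50) - 25.0) / 8.0) ** 2)
```

    Real QA devices evaluate this in 2D with interpolation and low-dose thresholds, but the pass/fail intuition is the same: an error passes as long as some nearby reference point agrees in dose within the combined tolerance.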

  17. A Mechanism for Error Detection in Speeded Response Time Tasks

    ERIC Educational Resources Information Center

    Holroyd, Clay B.; Yeung, Nick; Coles, Michael G. H.; Cohen, Jonathan D.

    2005-01-01

    The concept of error detection plays a central role in theories of executive control. In this article, the authors present a mechanism that can rapidly detect errors in speeded response time tasks. This error monitor assigns values to the output of cognitive processes involved in stimulus categorization and response generation and detects errors…

  18. Procedural error monitoring and smart checklists

    NASA Technical Reports Server (NTRS)

    Palmer, Everett

    1990-01-01

    Human beings make and usually detect errors routinely. The same mental processes that allow humans to cope with novel problems can also lead to error. Bill Rouse has argued that errors are not inherently bad but their consequences may be. He proposes the development of error-tolerant systems that detect errors and take steps to prevent the consequences of the error from occurring. Research should be done on self-detection and automatic detection of random and unanticipated errors. For self-detection, displays should be developed that make the consequences of errors immediately apparent. For example, electronic map displays graphically show the consequences of horizontal flight plan entry errors. Vertical profile displays should be developed to make vertical flight planning errors apparent. Other concepts such as energy circles could also help the crew detect gross flight planning errors. For automatic detection, systems should be developed that can track pilot activity, infer pilot intent, and inform the crew of potential errors before their consequences are realized. Systems that perform a reasonableness check on flight plan modifications by checking route length and magnitude of course changes are simple examples. Another example would be a system that checked the aircraft's planned altitude against a database of world terrain elevations. Information is given in viewgraph form.

  19. Is Comprehension Necessary for Error Detection? A Conflict-Based Account of Monitoring in Speech Production

    ERIC Educational Resources Information Center

    Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.

    2011-01-01

    Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the…

  20. An optimization-based framework for anisotropic simplex mesh adaptation

    NASA Astrophysics Data System (ADS)

    Yano, Masayuki; Darmofal, David L.

    2012-09-01

    We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes error for a given degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem of the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.

  1. On the validity of measuring change over time in routine clinical assessment: a close examination of item-level response shifts in psychosomatic inpatients.

    PubMed

    Nolte, S; Mierke, A; Fischer, H F; Rose, M

    2016-06-01

    Significant life events such as severe health status changes or intensive medical treatment often trigger response shifts in individuals that may hamper the comparison of measurements over time. Drawing from the Oort model, this study aims at detecting response shift at the item level in psychosomatic inpatients and evaluating its impact on the validity of comparing repeated measurements. Complete pretest and posttest data were available from 1188 patients who had filled out the ICD-10 Symptom Rating (ISR) scale at admission and discharge, on average 24 days after intake. Reconceptualization, reprioritization, and recalibration response shifts were explored applying tests of measurement invariance. In the item-level approach, all model parameters were constrained to be equal between pretest and posttest; where non-invariance was detected, the non-invariant parameters were linked to the different types of response shift. When constraining across-occasion model parameters, model fit worsened, as indicated by a significant Satorra-Bentler Chi-square difference test, suggesting the potential presence of response shifts. A close examination revealed the presence of two types of response shift, i.e., (non)uniform recalibration and both higher- and lower-level reconceptualization response shifts, leading to four model adjustments. Our analyses suggest that psychosomatic inpatients experienced some response shifts during their hospital stay. According to the hierarchy of measurement invariance, however, only one of the detected non-invariances is critical for unbiased mean comparisons over time, and it did not have a substantial impact on estimating change. Hence, the use of the ISR can be recommended for outcomes assessment in clinical routine, as change score estimates do not seem hampered by response shift effects.

  2. Clover: Compiler directed lightweight soft error resilience

    DOE PAGES

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; ...

    2015-05-01

    This paper presents Clover, a compiler-directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpoints. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experiment results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  3. Searching for New Physics with Ultrahigh Energy Cosmic Rays

    NASA Technical Reports Server (NTRS)

    Stecker, Floyd W.; Scully, Sean T.

    2009-01-01

    Ultrahigh energy cosmic rays that produce giant extensive showers of charged particles and photons when they interact in the Earth's atmosphere provide a unique tool to search for new physics. Of particular interest is the possibility of detecting a very small violation of Lorentz invariance such as may be related to the structure of space-time near the Planck scale of approximately 10^-35 m. We discuss here the possible signature of Lorentz invariance violation on the spectrum of ultrahigh energy cosmic rays as compared with present observations of giant air showers. We also discuss the possibilities of using more sensitive detection techniques to improve searches for Lorentz invariance violation in the future. Using the latest data from the Pierre Auger Observatory, we derive a best fit to the LIV parameter of 3.0 (+1.5, -3.0) x 10^-23, corresponding to an upper limit of 4.5 x 10^-23 at a proton Lorentz factor of approximately 2 x 10^11. This result has fundamental implications for quantum gravity models.

  4. High Energy Astrophysics Tests of Lorentz Invariance and Quantum Gravity Models

    NASA Technical Reports Server (NTRS)

    Stecker, Floyd W.

    2012-01-01

    High energy astrophysics observations provide the best possibilities to detect a very small violation of Lorentz invariance such as may be related to the structure of space-time near the Planck scale of approximately 10^-35 m. I will discuss the possible signatures of Lorentz invariance violation (LIV) that can be manifested in observations of the spectra, polarization, and timing of gamma-rays from active galactic nuclei and gamma-ray bursts. Other sensitive tests are provided by observations of the spectra of ultrahigh energy cosmic rays and neutrinos. Using the latest data from the Pierre Auger Observatory, one can already derive an upper limit of 4.5 x 10^-23 on the fraction of LIV at a Lorentz factor of approximately 2 x 10^11. This result has fundamental implications for quantum gravity models. I will also discuss the possibilities of using more sensitive space-based detection techniques to improve searches for LIV in the future, and how the LIV formalism casts doubt on the OPERA superluminal neutrino claim.

  5. An Improved Image Matching Method Based on Surf Algorithm

    NASA Astrophysics Data System (ADS)

    Chen, S. J.; Zheng, S. Z.; Xu, Z. G.; Guo, C. C.; Ma, X. L.

    2018-04-01

    Many state-of-the-art image matching methods based on feature matching have been widely studied in the remote sensing field. While these feature matching methods achieve high operating efficiency, they suffer from low accuracy and robustness. This paper proposes an improved image matching method based on the SURF algorithm. The proposed method introduces a color invariant transformation, information entropy theory, and a series of constraint conditions to improve feature point detection and matching accuracy. First, the color invariant transformation is applied to the two matching images to retain more color information during the matching process, and information entropy theory is used to extract the most informative content of the two matching images. Then the SURF algorithm is applied to detect and describe points from the images. Finally, constraint conditions including Delaunay triangulation construction, a similarity function, and a projective invariant are employed to eliminate mismatches and thereby improve matching precision. The proposed method has been validated on remote sensing images, and the results demonstrate its high precision and robustness.
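    Of the ingredients listed, the information-entropy step is the most self-contained. A sketch of grey-level Shannon entropy as a measure of how much information an image region carries (the bin count and 8-bit intensity range are assumptions, not the paper's settings):

```python
import numpy as np

def image_entropy(img, bins=256):
    # Shannon entropy (bits) of the grey-level histogram
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins
    return float(-(p * np.log2(p)).sum())
```

    A flat region scores 0 bits while a region with a rich grey-level distribution approaches log2(bins) bits, so comparing entropies is one way to prefer the most informative image content before matching.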

  6. Hierarchical Boltzmann simulations and model error estimation

    NASA Astrophysics Data System (ADS)

    Torrilhon, Manuel; Sarna, Neeraj

    2017-08-01

    A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, while subsequent refinement allows the result to be successively improved toward the complete Boltzmann solution. We use a Hermite discretization, or moment equations, for the steady linearized Boltzmann equation as a proof-of-concept of such a framework. All representations of the hierarchy are rotationally invariant, and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems which in particular highlight the relevance of the stability of boundary conditions on curved domains. The hierarchical nature of the method also allows model error estimates to be provided by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.

  7. Lognormal kriging for the assessment of reliability in groundwater quality control observation networks

    USGS Publications Warehouse

    Candela, L.; Olea, R.A.; Custodio, E.

    1988-01-01

    Groundwater quality observation networks are examples of discontinuous sampling of variables presenting spatial continuity and highly skewed frequency distributions. Anywhere in the aquifer, lognormal kriging provides estimates of the variable being sampled and a standard error of the estimate. The average and the maximum standard error within the network can be used to dynamically improve the network sampling efficiency or to find a design able to assure a given reliability level. The approach does not require the formulation of any physical model for the aquifer or any actual sampling of hypothetical configurations. A case study is presented using the network monitoring salty water intrusion into the Llobregat delta confined aquifer, Barcelona, Spain. The variable chloride concentration, used to trace the intrusion, exhibits sudden changes within short distances which make the standard error fairly insensitive to changes in sampling pattern and to substantial fluctuations in the number of wells.

  8. 3D ultrasound volume stitching using phase symmetry and harris corner detection for orthopaedic applications

    NASA Astrophysics Data System (ADS)

    Dalvi, Rupin; Hacihaliloglu, Ilker; Abugharbieh, Rafeef

    2010-03-01

    Stitching of volumes obtained from three dimensional (3D) ultrasound (US) scanners improves visualization of anatomy in many clinical applications. Fast but accurate volume registration remains the key challenge in this area. We propose a volume stitching method based on efficient registration of 3D US volumes obtained from a tracked US probe. Since the volumes, after adjusting for probe motion, are coarsely registered, we obtain salient correspondence points in the central slices of these volumes. This is done by first removing artifacts in the US slices using intensity-invariant local phase image processing and then applying the Harris corner detection algorithm. Fast sub-volume registration on a small neighborhood around the points then gives fast, accurate 3D registration parameters. The method has been tested on 3D US scans of phantom and real human radius and pelvis bones and a phantom human fetus. The method has also been compared to volumetric registration, as well as feature-based registration using 3D-SIFT. Quantitative results show an average post-registration error of 0.33 mm, which is comparable to volumetric registration accuracy (0.31 mm) and much better than 3D-SIFT-based registration, which failed to register the volumes. The proposed method was also much faster than volumetric registration (~4.5 seconds versus 83 seconds).
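    The Harris step of the pipeline can be sketched directly. The box window (standing in for the usual Gaussian weighting) and the constant k are illustrative choices, not the paper's settings:

```python
import numpy as np

def harris_response(img, k=0.05, r=2):
    """Harris corner measure: det(M) - k * trace(M)^2 per pixel."""
    iy, ix = np.gradient(img.astype(float))   # central-difference gradients
    def window_sum(a):
        # box filter as a stand-in for the usual Gaussian window
        out = np.zeros_like(a)
        p = np.pad(a, r, mode='edge')
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += p[r + dy:r + dy + a.shape[0],
                         r + dx:r + dx + a.shape[1]]
        return out
    sxx = window_sum(ix * ix)
    syy = window_sum(iy * iy)
    sxy = window_sum(ix * iy)
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2

# bright square on a dark background: corners score positive,
# straight edges score negative, flat regions score ~0
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
resp = harris_response(img)
```

    Thresholding this response and keeping local maxima yields the salient correspondence points; in the paper the input is first cleaned with local phase processing so speckle does not trigger spurious corners.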

  9. Invariant domain watermarking using heaviside function of order alpha and fractional Gaussian field.

    PubMed

    Abbasi, Almas; Woo, Chaw Seng; Ibrahim, Rabha Waell; Islam, Saeed

    2015-01-01

    Digital image watermarking is an important technique for the authentication of multimedia content and copyright protection. Conventional digital image watermarking techniques are often vulnerable to geometric distortions such as Rotation, Scaling, and Translation (RST). These distortions desynchronize the watermark information embedded in an image and thus disable watermark detection. To solve this problem, we propose an RST invariant domain watermarking technique based on fractional calculus. We have constructed a domain using Heaviside function of order alpha (HFOA). The HFOA models the signal as a polynomial for watermark embedding. The watermark is embedded in all the coefficients of the image. We have also constructed a fractional variance formula using fractional Gaussian field. A cross correlation method based on the fractional Gaussian field is used for watermark detection. Furthermore the proposed method enables blind watermark detection where the original image is not required during the watermark detection thereby making it more practical than non-blind watermarking techniques. Experimental results confirmed that the proposed technique has a high level of robustness.

  10. Invariant Domain Watermarking Using Heaviside Function of Order Alpha and Fractional Gaussian Field

    PubMed Central

    Abbasi, Almas; Woo, Chaw Seng; Ibrahim, Rabha Waell; Islam, Saeed

    2015-01-01

    Digital image watermarking is an important technique for the authentication of multimedia content and copyright protection. Conventional digital image watermarking techniques are often vulnerable to geometric distortions such as Rotation, Scaling, and Translation (RST). These distortions desynchronize the watermark information embedded in an image and thus disable watermark detection. To solve this problem, we propose an RST invariant domain watermarking technique based on fractional calculus. We have constructed a domain using Heaviside function of order alpha (HFOA). The HFOA models the signal as a polynomial for watermark embedding. The watermark is embedded in all the coefficients of the image. We have also constructed a fractional variance formula using fractional Gaussian field. A cross correlation method based on the fractional Gaussian field is used for watermark detection. Furthermore the proposed method enables blind watermark detection where the original image is not required during the watermark detection thereby making it more practical than non-blind watermarking techniques. Experimental results confirmed that the proposed technique has a high level of robustness. PMID:25884854

  11. iGRaND: an invariant frame for RGBD sensor feature detection and descriptor extraction with applications

    NASA Astrophysics Data System (ADS)

    Willis, Andrew R.; Brink, Kevin M.

    2016-06-01

    This article describes a new 3D RGBD image feature, referred to as iGRaND, for use in real-time systems that use these sensors for tracking, motion capture, or robotic vision applications. iGRaND features use a novel local reference frame derived from the image gradient and depth normal (hence iGRaND) that is invariant to scale and viewpoint for Lambertian surfaces. Using this reference frame, Euclidean invariant feature components are computed at keypoints which fuse local geometric shape information with surface appearance information. The performance of the feature for real-time odometry is analyzed and its computational complexity and accuracy is compared with leading alternative 3D features.

  12. Impact of Measurement Error on Synchrophasor Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  13. Scene incongruity and attention.

    PubMed

    Mack, Arien; Clarke, Jason; Erol, Muge; Bert, John

    2017-02-01

    Does scene incongruity (a mismatch between scene gist and a semantically incongruent object) capture attention and lead to conscious perception? We explored this question using 4 different procedures: Inattention (Experiment 1), Scene description (Experiment 2), Change detection (Experiment 3), and Iconic Memory (Experiment 4). We found no differences between scene incongruity and scene congruity in Experiments 1, 2, and 4, although in Experiment 3 change detection was faster for scenes containing an incongruent object. We offer an explanation for why the change detection results differ from the results of the other three experiments. In all four experiments, participants invariably failed to report the incongruity and routinely mis-described it by normalizing the incongruent object. None of the results supports the claim that semantic incongruity within a scene invariably captures attention, and together they provide strong evidence of the dominant role of scene gist in determining what is perceived. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Hybrid image representation learning model with invariant features for basal cell carcinoma detection

    NASA Astrophysics Data System (ADS)

    Arevalo, John; Cruz-Roa, Angel; González, Fabio A.

    2013-11-01

    This paper presents a novel method for basal-cell carcinoma detection, which combines state-of-the-art methods for unsupervised feature learning (UFL) and bag of features (BOF) representation. BOF, which is a form of representation learning, has shown good performance in automatic histopathology image classification. In BOF, patches are usually represented using descriptors such as SIFT and DCT. We propose to use UFL to learn the patch representation itself. This is accomplished by applying a topographic UFL method (T-RICA), which automatically learns visual invariance properties of color, scale and rotation from an image collection. The learned features also reveal the visual properties associated with cancerous and healthy tissues and improve carcinoma detection results by 7% with respect to traditional autoencoders and 6% with respect to standard DCT representations, obtaining on average 92% in terms of F-score and 93% balanced accuracy.

  15. Contrast, size, and orientation-invariant target detection in infrared imagery

    NASA Astrophysics Data System (ADS)

    Zhou, Yi-Tong; Crawshaw, Richard D.

    1991-08-01

    Automatic target detection in IR imagery is a very difficult task due to variations in target brightness, shape, size, and orientation. In this paper, the authors present a contrast, size, and orientation invariant algorithm based on Gabor functions for detecting targets from a single IR image frame. The algorithm consists of three steps. First, it locates potential targets by using low-resolution Gabor functions, which resist noise and background clutter effects; second, it removes false targets and eliminates redundant target points based on a similarity measure. These two steps mimic human vision processing but are different from Zeevi's Foveating Vision System. Finally, it uses both low- and high-resolution Gabor functions to verify target existence. This algorithm has been successfully tested on several IR images that contain multiple examples of military vehicles with different sizes and brightness levels in various background scenes and orientations.

  16. Iris recognition using possibilistic fuzzy matching on local features.

    PubMed

    Tsai, Chung-Chih; Lin, Heng-Yi; Taur, Jinshiuh; Tao, Chin-Wang

    2012-02-01

    In this paper, we propose a novel possibilistic fuzzy matching strategy with invariant properties, which can provide a robust and effective matching scheme for two sets of iris feature points. In addition, the nonlinear normalization model is adopted to provide more accurate position before matching. Moreover, an effective iris segmentation method is proposed to refine the detected inner and outer boundaries to smooth curves. For feature extraction, the Gabor filters are adopted to detect the local feature points from the segmented iris image in the Cartesian coordinate system and to generate a rotation-invariant descriptor for each detected point. After that, the proposed matching algorithm is used to compute a similarity score for two sets of feature points from a pair of iris images. The experimental results show that the performance of our system is better than those of the systems based on the local features and is comparable to those of the typical systems.

  17. Continuum limit of Bk from 2+1 flavor domain wall QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soni, A.; Izubuchi, T.; et al.

    2011-07-01

    We determine the neutral kaon mixing matrix element B_K in the continuum limit with 2+1 flavors of domain wall fermions, using the Iwasaki gauge action at two different lattice spacings. These lattice fermions have near-exact chiral symmetry and therefore avoid artificial lattice operator mixing. We introduce a significant improvement to the conventional nonperturbative renormalization (NPR) method, in which the bare matrix elements are renormalized nonperturbatively in the regularization invariant momentum scheme (RI-MOM) and are then converted into the MS-bar scheme using continuum perturbation theory. In addition to RI-MOM, we introduce and implement four nonexceptional intermediate momentum schemes that suppress infrared nonperturbative uncertainties in the renormalization procedure. We compute the conversion factors relating the matrix elements in this family of regularization invariant symmetric momentum schemes (RI-SMOM) and MS-bar at one-loop order. Comparison of the results obtained using these different intermediate schemes allows for a more reliable estimate of the unknown higher-order contributions and hence for a correspondingly more robust estimate of the systematic error. We also apply a recently proposed approach in which twisted boundary conditions are used to control the Symanzik expansion for off-shell vertex functions, leading to better control of the renormalization in the continuum limit. We control chiral extrapolation errors by considering both the next-to-leading-order SU(2) chiral effective theory and an analytic mass expansion. We obtain B_K^MS-bar(3 GeV) = 0.529(5)_stat(15)_chi(2)_FV(11)_NPR. This corresponds to B_K^RGI = 0.749(7)_stat(21)_chi(3)_FV(15)_NPR. Adding all sources of error in quadrature, we obtain B_K^RGI = 0.749(27)_combined, with an overall combined error of 3.6%.

  18. Methods and apparatus using commutative error detection values for fault isolation in multiple node computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almasi, Gheorghe; Blumrich, Matthias Augustin; Chen, Dong

    Methods and apparatus perform fault isolation in multiple node computing systems using commutative error detection values (for example, checksums) to identify and to isolate faulty nodes. When information associated with a reproducible portion of a computer program is injected into a network by a node, a commutative error detection value is calculated. At intervals, node fault detection apparatus associated with the multiple node computer system retrieves commutative error detection values associated with the node and stores them in memory. When the computer program is executed again by the multiple node computer system, new commutative error detection values are created and stored in memory. The node fault detection apparatus identifies faulty nodes by comparing commutative error detection values associated with reproducible portions of the application program generated by a particular node from different runs of the application program. Differences in values indicate a possible faulty node.
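The scheme hinges on the error detection value being commutative, so that nondeterministic message ordering between otherwise identical runs does not change the value. A minimal sketch of the idea; the XOR-of-CRC32 combination and the node/run layout are illustrative assumptions, not the patent's actual implementation:

```python
import zlib

def node_checksum(messages):
    """Commutative error detection value for one node: XOR of per-message
    CRC32 values, so the result is independent of message arrival order."""
    value = 0
    for m in messages:
        value ^= zlib.crc32(m)
    return value

def find_suspect_nodes(run_a, run_b):
    """Compare per-node checksums from two runs of the same reproducible
    program section; nodes whose values differ are possible faults."""
    return sorted(n for n in run_a if run_a[n] != run_b[n])

run1 = {0: node_checksum([b"alpha", b"beta"]), 1: node_checksum([b"gamma"])}
run2 = {0: node_checksum([b"beta", b"alpha"]),      # same data, other order
        1: node_checksum([b"gamma", b"corrupt"])}   # extra/corrupted traffic
print(find_suspect_nodes(run1, run2))               # -> [1]
```

Node 0 matches across runs despite the reordered messages; node 1's extra traffic changes its value and flags it as a suspect.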

  19. WE-H-BRC-09: Simulated Errors in Mock Radiotherapy Plans to Quantify the Effectiveness of the Physics Plan Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopan, O; Kalet, A; Smith, W

    2016-06-15

    Purpose: A standard tool for ensuring the quality of radiation therapy treatments is the initial physics plan review. However, little is known about its performance in practice. The goal of this study is to measure the effectiveness of physics plan review by introducing simulated errors into “mock” treatment plans and measuring the performance of plan review by physicists. Methods: We generated six mock treatment plans containing multiple errors. These errors were based on incident learning system data both within the department and internationally (SAFRON). These errors were scored for severity and frequency. Those with the highest scores were included in the simulations (13 errors total). Observer bias was minimized using a multiple co-correlated distractor approach. Eight physicists reviewed these plans for errors, with each physicist reviewing, on average, 3 of the 6 plans. The confidence interval for the proportion of errors detected was computed using the Wilson score interval. Results: Simulated errors were detected in 65% of reviews [51–75%] (95% confidence interval [CI] in brackets). The following error scenarios had the highest detection rates: incorrect isocenter in DRRs/CBCT (91% [73–98%]) and a planned dose different from the prescribed dose (100% [61–100%]). Errors with low detection rates involved incorrect field parameters in the record-and-verify system (38% [18–61%]) and incorrect isocenter localization in the planning system (29% [8–64%]). Though pre-treatment QA failure was reliably identified (100%), less than 20% of participants reported the error that caused the failure. Conclusion: This is one of the first quantitative studies of error detection. Although physics plan review is a key safety measure and can identify some errors with high fidelity, other errors are more challenging to detect. These data will guide future work on standardization and automation. Creating new checks or improving existing ones (i.e., via automation) will help in detecting those errors with low detection rates.
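The Wilson score interval used above for the detection-rate confidence bounds is a closed-form computation on the raw counts. A short sketch; the counts in the usage example are hypothetical, not the study's raw data:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% for z=1.96).
    Unlike the normal approximation, it stays inside [0, 1] and behaves
    sensibly for small n or extreme proportions."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# hypothetical counts: 33 detections out of 51 review opportunities
lo95, hi95 = wilson_interval(33, 51)
print(round(lo95, 3), round(hi95, 3))
```

The interval narrows as the number of reviews grows, which is why the rarely reviewed error scenarios above carry such wide brackets.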

  20. Elliptical Acoustic Particle Motion in Underwater Waveguides

    DTIC Science & Technology

    2013-03-27

    Folkert, “Tracking sperm whales with a towed acoustic vector sensor,” J. Acoust. Soc. Am., Volume 128, Issue 5, pp. 2681–2694 (2010). 2 Santos, P. ... modal amplitudes Bm and Cm are weak functions of frequency and range independent. This holds for any normal mode description of the acoustic field in a ... wavelengths. Error in measurement aside, the frequency-range relationship described by the waveguide invariant holds for any directional component of I

  1. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

    Several methods were employed to detect both the occurrence and the source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system based on the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in successfully finding the primary cause of error in 98% of over 500 system dumps.

  2. Local unitary invariants for N-qubit pure states

    NASA Astrophysics Data System (ADS)

    Sharma, S. Shelly; Sharma, N. K.

    2010-11-01

    The concept of negativity font, a basic unit of multipartite entanglement, is introduced. Transformation properties of determinants of negativity fonts under local unitary (LU) transformations are exploited to obtain relevant N-qubit polynomial invariants and construct entanglement monotones from first principles. It is shown that entanglement monotones that detect the entanglement of specific parts of the composite system may be constructed to distinguish between states with distinct types of entanglement. The structural difference between entanglement monotones for an odd and even number of qubits is brought out.

  3. Real-time spatio-temporal coherence estimation for autonomous mode identification and invariance tracking

    NASA Technical Reports Server (NTRS)

    Park, Han G. (Inventor); Zak, Michail (Inventor); James, Mark L. (Inventor); Mackey, Ryan M. E. (Inventor)

    2003-01-01

    A general method of anomaly detection from time-correlated sensor data is disclosed. Multiple time-correlated signals are received. Their cross-signal behavior is compared against a fixed library of invariants. The library is constructed during a training process, which is itself data-driven using the same time-correlated signals. The method is applicable to a broad class of problems and is designed to respond to any departure from normal operation, including faults or events that lie outside the training envelope.

  4. Performance analysis of a concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Lin, S.; Kasami, T.

    1983-01-01

    A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, whereas the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the planetary program, is analyzed.

  5. Probability of undetected error after decoding for a concatenated coding scheme

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Lin, S.

    1984-01-01

    A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, whereas the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the NASA telecommand system, is analyzed.
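The failure mode these two reports bound, a residual error that slips past the inner decoder and then also escapes the outer detection code, can be illustrated with a toy Monte Carlo. The rate-1/3 repetition inner code and single-parity outer code below are deliberately simple stand-ins, not the codes analyzed in the reports:

```python
import random

random.seed(1)

def bsc(bits, p):
    """Binary symmetric channel: flip each bit independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def inner_decode(block):
    """Repetition inner code: majority vote over 3 copies corrects one flip,
    but two or three flips produce an undetected inner decoding error."""
    return int(sum(block) >= 2)

def outer_detects(word):
    """Toy outer code: a single parity bit appended to the data.
    It detects any odd number of residual bit errors, but misses even ones."""
    return sum(word) % 2 != 0   # parity violated -> retransmission requested

# Estimate P(undetected error): the recovered word differs from what was
# sent, yet the outer parity check still passes.
p, trials, undetected = 0.1, 20000, 0
for _ in range(trials):
    word = [random.getrandbits(1) for _ in range(8)]
    word.append(sum(word) % 2)                       # outer parity bit
    received = [inner_decode(bsc([b, b, b], p)) for b in word]
    if received != word and not outer_detects(received):
        undetected += 1
print(undetected / trials)
```

Even this crude setup shows the qualitative point of the analysis: the undetected-error probability is far below the raw channel error rate, and it is the outer code's detection capability, not the inner code's correction, that sets the floor.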

  6. A theory of frequency domain invariants: spherical harmonic identities for BRDF/lighting transfer and image consistency.

    PubMed

    Mahajan, Dhruv; Ramamoorthi, Ravi; Curless, Brian

    2008-02-01

    This paper develops a theory of frequency domain invariants in computer vision. We derive novel identities using spherical harmonics, which are the angular frequency domain analog to common spatial domain invariants such as reflectance ratios. These invariants are derived from the spherical harmonic convolution framework for reflection from a curved surface. Our identities apply in a number of canonical cases, including single and multiple images of objects under the same and different lighting conditions. One important case we consider is two different glossy objects in two different lighting environments. For this case, we derive a novel identity, independent of the specific lighting configurations or BRDFs, that allows us to directly estimate the fourth image if the other three are available. The identity can also be used as an invariant to detect tampering in the images. While this paper is primarily theoretical, it has the potential to lay the mathematical foundations for two important practical applications. First, we can develop more general algorithms for inverse rendering problems, which can directly relight and change material properties by transferring the BRDF or lighting from another object or illumination. Second, we can check the consistency of an image, to detect tampering or image splicing.

  7. Two (or three) is one too many: testing the flexibility of contextual cueing with multiple target locations.

    PubMed

    Zellin, Martina; Conci, Markus; von Mühlenen, Adrian; Müller, Hermann J

    2011-10-01

    Visual search for a target object is facilitated when the object is repeatedly presented within an invariant context of surrounding items ("contextual cueing"; Chun & Jiang, Cognitive Psychology, 36, 28-71, 1998). The present study investigated whether such invariant contexts can cue more than one target location. In a series of three experiments, we showed that contextual cueing is significantly reduced when invariant contexts are paired with two rather than one possible target location, whereas no contextual cueing occurs with three distinct target locations. Closer data inspection revealed that one "dominant" target always exhibited substantially more contextual cueing than did the other, "minor" target(s), which caused negative contextual-cueing effects. However, minor targets could benefit from the invariant context when they were spatially close to the dominant target. In sum, our experiments suggest that contextual cueing can guide visual attention to a spatially limited region of the display, only enhancing the detection of targets presented inside that region.

  8. The representational dynamics of remembered projectile locations.

    PubMed

    De Sá Teixeira, Nuno Alexandre; Hecht, Heiko; Oliveira, Armando Mónica

    2013-12-01

    When people are instructed to locate the vanishing location of a moving target, systematic errors forward in the direction of motion (M-displacement) and downward in the direction of gravity (O-displacement) are found. These phenomena came to be linked with the notion that physical invariants are embedded in the dynamic representations generated by the perceptual system. We explore the nature of these invariants that determine the representational mechanics of projectiles. By manipulating the retention intervals between the target's disappearance and the participant's responses, while measuring both M- and O-displacements, we were able to uncover a representational analogue of the trajectory of a projectile. The outcomes of three experiments revealed that the shape of this trajectory is discontinuous. Although the horizontal component of such trajectory can be accounted for by perceptual and oculomotor factors, its vertical component cannot. Taken together, the outcomes support an internalization of gravity in the visual representation of projectiles.

  9. Frequency-scanning interferometry using a time-varying Kalman filter for dynamic tracking measurements.

    PubMed

    Jia, Xingyu; Liu, Zhigang; Tao, Long; Deng, Zhongwen

    2017-10-16

    Frequency scanning interferometry (FSI) with a single external cavity diode laser (ECDL) and time-invariant Kalman filtering is an effective technique for measuring the distance to a dynamic target. However, due to the hysteresis of the piezoelectric ceramic transducer (PZT) actuator in the ECDL, the optical frequency sweeps of the ECDL exhibit different behaviors depending on whether the frequency is increasing or decreasing. Consequently, the model parameters of the Kalman filter vary from iteration to iteration, which produces state estimation errors under time-invariant filtering. To address this, in this paper, a time-varying Kalman filter is proposed to model the instantaneous movement of a target relative to the different optical frequency tuning durations of the ECDL. The combination of the FSI method with the time-varying Kalman filter was theoretically analyzed, and the simulation and experimental results show the proposed method greatly improves the performance of dynamic FSI measurements.
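The core of the proposed fix is rebuilding the filter matrices every iteration instead of fixing them once. A minimal sketch with a distance/velocity state and an alternating sweep interval; the numbers, noise levels, and model structure are illustrative assumptions, not the authors' filter:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle with per-step (time-varying) F and Q."""
    x = F @ x                       # predict
    P = F @ P @ F.T + Q
    y = z - H @ x                   # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# State: [distance, velocity]; only distance is measured. The effective
# sample interval dt alternates with the up/down frequency sweeps, so F
# and Q are rebuilt each iteration rather than being fixed once.
H = np.array([[1.0, 0.0]])
R = np.array([[0.01]])              # measurement noise variance
x, P = np.array([0.0, 0.0]), np.eye(2)
true_d, v = 10.0, 0.5
rng = np.random.default_rng(0)
for k in range(200):
    dt = 0.02 if k % 2 == 0 else 0.05   # up-sweep vs down-sweep duration
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = 1e-4 * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    true_d += v * dt
    z = np.array([true_d + rng.normal(0, 0.1)])
    x, P = kalman_step(x, P, z, F, Q, H, R)
print(float(x[0]), true_d)          # estimate should track the true distance
```

A time-invariant filter would bake one dt into F and Q, so every other step would be predicted with the wrong interval; rebuilding the matrices removes that model mismatch.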

  10. LETTER: Test of Te profile invariance by sensitivity studies

    NASA Astrophysics Data System (ADS)

    Becker, G.

    1992-06-01

    The response of the electron temperature profile shape to variations of the electron heating and density profiles is investigated in different confinement regimes. It is shown that the changes in r_Te = -Te/(dTe/dr) exceed the measurement error if the shape of the electron heat diffusivity χe(r) is kept fixed. The observed constancy of r_Te(r) in the outer half of the plasma is incompatible with such a fixed χe(r) shape, i.e., a Te profile constraining mechanism must be present. Local transport laws of the form χe ∝ r_Te^(-α) with α ≳ 4 and χe ∝ (dTe/dr)^α with α ≥ 2 yield the experimental stiffness of the Te(r) shape but conflict with empirical χe scalings. These results support the model of a self-organizing and adjusting χe(r) causing Te profile invariance.

  11. X-ray Emission Line Anisotropy Effects on the Isoelectronic Temperature Measurement Method

    NASA Astrophysics Data System (ADS)

    Liedahl, Duane; Barrios, Maria; Brown, Greg; Foord, Mark; Gray, William; Hansen, Stephanie; Heeter, Robert; Jarrott, Leonard; Mauche, Christopher; Moody, John; Schneider, Marilyn; Widmann, Klaus

    2016-10-01

    Measurements of the ratio of analogous emission lines from isoelectronic ions of two elements form the basis of the isoelectronic method of inferring electron temperatures in laser-produced plasmas, with the expectation that atomic modeling errors cancel to first order. Helium-like ions are a common choice in many experiments. Obtaining sufficiently bright signals often requires sample sizes with non-trivial line optical depths. For lines with small destruction probabilities per scatter, such as the 1s2p-1s2 He-like resonance line, repeated scattering can cause a marked angular dependence in the escaping radiation. Isoelectronic lines from near-Z equimolar dopants have similar optical depths and similar angular variations, which leads to a near angular-invariance for their line ratios. Using Monte Carlo simulations, we show that possible ambiguities associated with anisotropy in deriving electron temperatures from X-ray line ratios are minimized by exploiting this isoelectronic invariance.

  12. Illumination Invariant Change Detection (iicd): from Earth to Mars

    NASA Astrophysics Data System (ADS)

    Wan, X.; Liu, J.; Qin, M.; Li, S. Y.

    2018-04-01

    Multi-temporal Earth observation and Mars orbital imagery data with frequent repeat coverage provide great capability for planetary surface change detection. When comparing two images taken at different times of day or in different seasons for change detection, the variation of topographic shades and shadows caused by the change of sunlight angle can be so significant that it overwhelms the real object and environmental changes, making automatic detection unreliable. An effective change detection algorithm therefore has to be robust to illumination variation. This paper presents our research on developing and testing an Illumination Invariant Change Detection (IICD) method based on the robustness of phase correlation (PC) to the variation of solar illumination for image matching. The IICD is based on two key functions: i) initial change detection based on a saliency map derived from pixel-wise dense PC matching and ii) change quantization, which combines change type identification, motion estimation and precise appearance change identification. Experiments using multi-temporal Landsat 7 ETM+ satellite images, RapidEye satellite images and Mars HiRISE images demonstrate that our frequency-based image matching method can reach sub-pixel accuracy and thus the proposed IICD method can effectively detect and precisely segment large-scale changes such as landslides as well as small object changes such as a Mars rover, under daily and seasonal sunlight changes.
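Phase correlation, the matching primitive behind IICD's illumination robustness, keeps only the phase of the cross-power spectrum, so global brightness changes largely cancel. A minimal integer-shift version is easy to sketch; the sub-pixel refinement used in the paper is omitted here:

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation between two equal-size images from
    the peak of the normalized cross-power spectrum (phase correlation)."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # discard magnitude, keep phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks beyond the midpoint to negative shifts
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
print(phase_correlation(shifted, img))      # -> (5, -3)
```

Because the spectrum is normalized to unit magnitude, multiplying one image by a constant gain (a crude model of illumination change) leaves the correlation peak in the same place.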

  13. Applications of Fermi-Lowdin-Orbital Self-Interaction Correction Scheme to Organic Systems

    NASA Astrophysics Data System (ADS)

    Baruah, Tunna; Kao, Der-You; Yamamoto, Yoh

    Recent progress in treating self-interaction errors by means of local, Lowdin-orthogonalized Fermi orbitals offers a promising route to studying the effect of self-interaction errors on the electronic structure of molecules. The Fermi orbitals depend on the locations of the electronic positions, called Fermi orbital descriptors. One advantage of using the Fermi orbitals is that the corrected Hamiltonian is unitarily invariant. Minimization of the corrected energies leads to an optimized set of centroid positions. Here we discuss the applications of this method to various systems, from constituent atoms to several medium-sized molecules such as Mg-porphyrin, C60, and pentacene. Applications to ionic systems will also be discussed. DE-SC0002168, NSF-DMR 125302.

  14. Resilience of hybrid optical angular momentum qubits to turbulence

    PubMed Central

    Farías, Osvaldo Jiménez; D'Ambrosio, Vincenzo; Taballione, Caterina; Bisesto, Fabrizio; Slussarenko, Sergei; Aolita, Leandro; Marrucci, Lorenzo; Walborn, Stephen P.; Sciarrino, Fabio

    2015-01-01

    Recent schemes to encode quantum information into the total angular momentum of light, defining rotation-invariant hybrid qubits composed of the polarization and orbital angular momentum degrees of freedom, present interesting applications for quantum information technology. However, there remains the question as to how detrimental effects such as random spatial perturbations affect these encodings. Here, we demonstrate that alignment-free quantum communication through a turbulent channel based on hybrid qubits can be achieved with unit transmission fidelity. In our experiment, alignment-free qubits are produced with q-plates and sent through a homemade turbulence chamber. The decoding procedure, also realized with q-plates, relies on both degrees of freedom and renders an intrinsic error-filtering mechanism that maps errors into losses. PMID:25672667

  15. A Swiss cheese error detection method for real-time EPID-based quality assurance and error prevention.

    PubMed

    Passarge, Michelle; Fix, Michael K; Manser, Peter; Stampanoni, Marco F M; Siebers, Jeffrey V

    2017-04-01

    To develop a robust and efficient process that detects relevant dose errors (dose errors of ≥5%) in external beam radiation therapy and directly indicates the origin of the error. The process is illustrated in the context of electronic portal imaging device (EPID)-based angle-resolved volumetric-modulated arc therapy (VMAT) quality assurance (QA), particularly as would be implemented in a real-time monitoring program. A Swiss cheese error detection (SCED) method was created as a paradigm for cine EPID-based during-treatment QA. For VMAT, the method compares a treatment plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The process utilizes a sequence of independent, consecutively executed error detection tests: an aperture check that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; a global image alignment check to examine whether rotation, scaling, and translation are within tolerances; and pixel intensity checks containing the standard gamma evaluation (3%, 3 mm) and pixel intensity deviation checks including and excluding high-dose-gradient regions. Tolerances for each check were determined. To test the SCED method, 12 different types of errors were selected to modify the original plan. A series of angle-resolved predicted EPID images were artificially generated for each test case, resulting in a sequence of precalculated frames for each modified treatment plan. The SCED method was applied multiple times for each test case to assess the ability to detect introduced plan variations. To compare the performance of the SCED process with that of a standard gamma analysis, both error detection methods were applied to the generated test cases with realistic noise variations. Averaged over ten test runs, 95.1% of all plan variations that resulted in relevant patient dose errors were detected within 2° and 100% within 14° (<4% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 89.1% were detected by the SCED method within 2°. Based on the type of check that detected the error, determination of error sources was achieved. With noise ranging from no random noise to four times the established noise value, the averaged relevant dose error detection rate of the SCED method was between 94.0% and 95.8%, and that of gamma between 82.8% and 89.8%. An EPID-frame-based error detection process for VMAT deliveries was successfully designed and tested via simulations. The SCED method was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of relevant dose errors. Compared to a typical (3%, 3 mm) gamma analysis, the SCED method produced a higher detection rate for all introduced dose errors, identified errors in an earlier stage, displayed a higher robustness to noise variations, and indicated the error source. © 2017 American Association of Physicists in Medicine.
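The standard gamma evaluation referenced in the pixel intensity check combines a dose-difference tolerance with a distance-to-agreement tolerance. A naive 1D version conveys the idea; clinical implementations interpolate and work on 2D/3D dose grids, and the profiles below are arbitrary test shapes:

```python
import numpy as np

def gamma_1d(ref, meas, x, dose_tol=0.03, dist_tol=3.0):
    """Naive global 1D gamma index (3%/3 mm by default): for each reference
    point, the minimum combined dose-difference/distance metric over all
    measured points; gamma <= 1 means the point passes."""
    ref, meas = np.asarray(ref, float), np.asarray(meas, float)
    d_norm = dose_tol * ref.max()           # global dose normalization
    gammas = []
    for xi, ri in zip(x, ref):
        dd = (meas - ri) / d_norm           # dose-difference term
        dx = (x - xi) / dist_tol            # distance-to-agreement term
        gammas.append(np.sqrt(dd**2 + dx**2).min())
    return np.array(gammas)

x = np.arange(0.0, 50.0, 1.0)               # positions in mm
ref = np.exp(-((x - 25) / 10) ** 2)         # reference dose profile
meas = np.exp(-((x - 26) / 10) ** 2) * 1.01 # 1 mm shift plus 1% scaling
pass_rate = (gamma_1d(ref, meas, x) <= 1).mean()
print(pass_rate)
```

A small spatial shift combined with a small dose scaling still passes, which is exactly why the SCED method layers additional independent checks on top of gamma: gamma's tolerance ellipse deliberately forgives this class of deviation.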

  16. Linear and nonlinear response of a rotating tokamak plasma to a resonant error-field

    NASA Astrophysics Data System (ADS)

    Fitzpatrick, Richard

    2014-09-01

    An in-depth investigation of the effect of a resonant error-field on a rotating, quasi-cylindrical, tokamak plasma is performed within the context of constant-ψ, resistive-magnetohydrodynamical theory. General expressions for the response of the plasma at the rational surface to the error-field are derived in both the linear and nonlinear regimes, and the extents of these regimes mapped out in parameter space. Torque-balance equations are also obtained in both regimes. These equations are used to determine the steady-state plasma rotation at the rational surface in the presence of the error-field. It is found that, provided the intrinsic plasma rotation is sufficiently large, the torque-balance equations possess dynamically stable low-rotation and high-rotation solution branches, separated by a forbidden band of dynamically unstable solutions. Moreover, bifurcations between the two stable solution branches are triggered as the amplitude of the error-field is varied. A low- to high-rotation bifurcation is invariably associated with a significant reduction in the width of the magnetic island chain driven at the rational surface, and vice versa. General expressions for the bifurcation thresholds are derived and their domains of validity mapped out in parameter space.

  17. SU-F-T-471: Simulated External Beam Delivery Errors Detection with a Large Area Ion Chamber Transmission Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, D; Dyer, B; Kumaran Nair, C

    Purpose: The Integral Quality Monitor (IQM), developed by iRT Systems GmbH (Koblenz, Germany), is a large-area, linac-mounted ion chamber used to monitor photon fluence during patient treatment. Our previous work evaluated the ion chamber's response to deviations from static 1×1 cm² and 10×10 cm² photon beams, along with other characteristics relevant to its use in external beam error detection. The aim of this work is to simulate two external beam radiation delivery errors, quantify their detection, and evaluate the reduction in patient harm resulting from detection. Methods: Two well-documented radiation oncology delivery errors were selected for simulation. The first error was recreated by modifying a wedged whole-breast treatment, removing the physical wedge and calculating the planned dose with the Pinnacle TPS (Philips Radiation Oncology Systems, Fitchburg, WI). The second error was recreated by modifying a static-gantry IMRT pharyngeal tonsil plan to be delivered in 3 unmodulated fractions. A radiation oncologist evaluated the dose for the simulated errors and predicted morbidity and mortality commensurate with the originally reported toxicity, indicating that the reported errors were adequately approximated. The ion chamber signal of the unmodified treatments was compared to the simulated error signal and evaluated in the Pinnacle TPS, again with radiation oncologist prediction of simulated patient harm. Results: Previous work established that transmission detector system measurements are stable within 0.5% standard deviation (SD). Errors causing a signal change greater than 20 SD (10%) were considered detected. The whole-breast and pharyngeal tonsil IMRT simulated errors increased the signal by 215% and 969%, respectively, indicating error detection after the first fraction and first IMRT segment, respectively.
    Conclusion: The transmission detector system demonstrated utility in detecting clinically significant errors and reducing patient toxicity/harm in simulated external beam delivery. Future work will evaluate detection of other, smaller-magnitude delivery errors.
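
    The detection rule described above reduces to a simple threshold test: with measurement stability of 0.5% SD, a 20-SD signal change corresponds to a 10% relative change. A minimal sketch (function and parameter names are illustrative, not taken from the IQM software):

```python
def detect_error(reference_signal, measured_signal,
                 sd_fraction=0.005, threshold_sds=20):
    """Flag a delivery error when the relative signal change exceeds
    threshold_sds standard deviations (0.5% SD x 20 = 10% change)."""
    relative_change = abs(measured_signal - reference_signal) / reference_signal
    return relative_change > threshold_sds * sd_fraction

# The simulated errors raised the signal by 215% and 969%, far past 10%:
print(detect_error(100.0, 315.0), detect_error(100.0, 1069.0))   # -> True True
```

    Both simulated errors clear the threshold by more than an order of magnitude, which is why detection occurs on the first fraction or segment.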

  18. Deformation Invariant Attribute Vector for Deformable Registration of Longitudinal Brain MR Images

    PubMed Central

    Li, Gang; Guo, Lei; Liu, Tianming

    2009-01-01

    This paper presents a novel approach to defining a deformation invariant attribute vector (DIAV) for each voxel in a 3D brain image for the purpose of anatomic correspondence detection. The DIAV method is validated using synthesized deformations of 3D brain MR images. Both theoretical analysis and experimental studies demonstrate that the proposed DIAV is invariant to general nonlinear deformation. Moreover, our experimental results show that the DIAV captures rich anatomic information around each voxel and exhibits strong discriminative ability. The DIAV has been integrated into a deformable registration algorithm for longitudinal brain MR images, and results on both simulated and real brain images demonstrate the good performance of the proposed registration algorithm based on matching of DIAVs. PMID:19369031

  19. Multi-sensor image registration based on algebraic projective invariants.

    PubMed

    Li, Bin; Wang, Wei; Ye, Hao

    2013-04-22

    A new automatic feature-based registration algorithm is presented for multi-sensor images with projective deformation. Contours are first extracted from both reference and sensed images as basic features. Since it is difficult to design a projective-invariant descriptor from the contour information directly, a new feature named Five Sequential Corners (FSC) is constructed from the corners detected on the extracted contours. By introducing algebraic projective invariants, we design a descriptor for each FSC that is robust against projective deformation. Further, no gray-scale information is required to compute the descriptor, so it is also robust against the gray-scale discrepancy between multi-sensor image pairs. Experimental results on real image pairs demonstrate the merits of the proposed registration method.
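
    A standard pair of algebraic projective invariants for five coplanar points (no three collinear) can be built from ratios of 3×3 determinants of their homogeneous coordinates: each point appears equally often in numerator and denominator, so the unknown homography factors cancel. A hedged sketch of this general construction; the exact determinant arrangement used by the FSC descriptor may differ:

```python
def det3(a, b, c):
    # determinant of the 3x3 matrix with columns a, b, c (homogeneous points)
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

def proj_invariants(p1, p2, p3, p4, p5):
    """Two independent projective invariants of five coplanar points."""
    i1 = (det3(p4, p3, p1) * det3(p5, p2, p1)) / (det3(p4, p2, p1) * det3(p5, p3, p1))
    i2 = (det3(p4, p2, p1) * det3(p5, p3, p2)) / (det3(p4, p3, p2) * det3(p5, p2, p1))
    return i1, i2

def apply_h(h, p):
    # apply a 3x3 homography h to a homogeneous point p
    return tuple(sum(h[r][c] * p[c] for c in range(3)) for r in range(3))
```

    Applying any non-singular homography to all five points leaves both ratios unchanged, which is what makes such quantities usable as matching descriptors.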

  20. On Functional Module Detection in Metabolic Networks

    PubMed Central

    Koch, Ina; Ackermann, Jörg

    2013-01-01

    Functional modules of metabolic networks are essential for understanding the metabolism of an organism as a whole. With the vast amount of experimental data and the construction of complex and large-scale, often genome-wide, models, the computer-aided identification of functional modules becomes more and more important. Since steady states play a key role in biology, many methods have been developed in that context, for example, elementary flux modes, extreme pathways, transition invariants and place invariants. Metabolic networks can be studied also from the point of view of graph theory, and algorithms for graph decomposition have been applied for the identification of functional modules. A prominent and currently intensively discussed field of methods in graph theory addresses the Q-modularity. In this paper, we recall known concepts of module detection based on the steady-state assumption, focusing on transition-invariants (elementary modes) and their computation as minimal solutions of systems of Diophantine equations. We present the Fourier-Motzkin algorithm in detail. Afterwards, we introduce the Q-modularity as an example for a useful non-steady-state method and its application to metabolic networks. To illustrate and discuss the concepts of invariants and Q-modularity, we apply a part of the central carbon metabolism in potato tubers (Solanum tuberosum) as running example. The intention of the paper is to give a compact presentation of known steady-state concepts from a graph-theoretical viewpoint in the context of network decomposition and reduction and to introduce the application of Q-modularity to metabolic Petri net models. PMID:24958145
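
    A T-invariant (elementary mode) of a Petri net is a nonnegative integer vector x ≠ 0 with C·x = 0, where C is the place-by-transition incidence matrix. The following toy sketch enumerates bounded solutions by brute force; it is a stand-in for the Fourier-Motzkin elimination discussed in the paper, suitable only for very small nets:

```python
from itertools import product

def t_invariants(C, bound=3):
    """Brute-force enumeration of support-minimal T-invariants:
    nonnegative integer vectors x != 0 with C @ x = 0, entries <= bound."""
    n_places, n_trans = len(C), len(C[0])
    sols = [x for x in product(range(bound + 1), repeat=n_trans)
            if any(x) and all(sum(C[p][t] * x[t] for t in range(n_trans)) == 0
                              for p in range(n_places))]
    # keep only solutions not dominated componentwise by another solution
    return [x for x in sols
            if not any(y != x and all(a <= b for a, b in zip(y, x)) for y in sols)]

# Reversible reaction p1 <-> p2: rows = places, columns = transitions t1, t2
C = [[-1, 1],
     [ 1, -1]]
print(t_invariants(C))   # -> [(1, 1)]
```

    Firing t1 and t2 once each reproduces the initial marking, which is exactly what the single T-invariant (1, 1) expresses.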

  1. Effects of Contextual Sight-Singing and Aural Skills Training on Error-Detection Abilities.

    ERIC Educational Resources Information Center

    Sheldon, Deborah A.

    1998-01-01

    Examines the effects of contextual sight-singing and ear training on pitch and rhythm error detection abilities among undergraduate instrumental music education majors. Shows that additional training produced better error detection, particularly with rhythm errors and in one-part examples. Maintains that differences attributable to texture were…

  2. Co-Registration Between Multisource Remote-Sensing Images

    NASA Astrophysics Data System (ADS)

    Wu, J.; Chang, C.; Tsai, H.-Y.; Liu, M.-C.

    2012-07-01

    Image registration is essential for geospatial information systems analysis, which usually involves integrating multitemporal and multispectral datasets from remote optical and radar sensors. An algorithm covering feature extraction, keypoint matching, outlier detection and image warping is evaluated in this study. The methods currently available in the literature rely on techniques such as the scale-invariant feature transform, between-edge cost minimization, normalized cross-correlation, least-squares image matching, random sample consensus, iterated data snooping and thin-plate splines. Their basics are highlighted and encoded into a computer program. The test images are excerpts from digital files created by the multispectral SPOT-5 and Formosat-2 sensors, and by the panchromatic IKONOS and QuickBird sensors. Suburban areas, housing rooftops, the countryside and hilly plantations are studied. The co-registered images are displayed with block subimages in a criss-cross pattern. Besides the imagery, the registration accuracy is expressed by the root mean square error. Toward the end, this paper also includes a few opinions on issues believed to hinder a correct correspondence between diverse images.

  3. [When shape-invariant recognition ('A' = 'a') fails. A case study of pure alexia and kinesthetic facilitation].

    PubMed

    Diesfeldt, H F A

    2011-06-01

    A right-handed patient, aged 72, manifested alexia without agraphia, a right homonymous hemianopia and an impaired ability to identify visually presented objects. He was completely unable to read words aloud and severely deficient in naming visually presented letters. He responded to orthographic familiarity in the lexical decision tasks of the Psycholinguistic Assessments of Language Processing in Aphasia (PALPA) rather than to the lexicality of the letter strings. He was impaired at deciding whether two letters of different case (e.g., A, a) are the same, though he could detect real letters from made-up ones or from their mirror image. Consequently, his core deficit in reading was posited at the level of the abstract letter identifiers. When asked to trace a letter with his right index finger, kinesthetic facilitation enabled him to read letters and words aloud. Though he could use intact motor representations of letters in order to facilitate recognition and reading, the slow, sequential and error-prone process of reading letter by letter made him abandon further training.

  4. Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity and Efficient Estimators

    DTIC Science & Technology

    2012-09-27

    particular, we require no entangling gates or ancillary systems for the procedure. In contrast with [19], our method is not restricted to processes that are...of states, such as those recently developed for use with permutation-invariant states [60], matrix product states [61] or multi-scale entangled states...process tomography: first prepare the Jamiołkowski state ρE (by adjoining an ancilla, preparing the maximally entangled state |ψ0, and applying E); then

  5. Wideband FM Demodulation and Multirate Frequency Transformations

    DTIC Science & Technology

    2016-12-15

    FM signals. 2.2.1 Adaptive Linear Predictive IF Tracking For a pure FM signal, the IF demodulation approach employing adaptive filters was proposed...desired signal. As summarized in [5], the prediction error filter is given by: E (z) = 1− L∑ l=1 goptl z −l, (8) 2 Approved for public release...assumption and the further assumption that the message signal remains es- sentially invariant over the sampling range of the linear prediction filter , we end

  6. Update on parts SEE susceptibility from heavy ions. [Single Event Effects]

    NASA Technical Reports Server (NTRS)

    Nichols, D. K.; Smith, L. S.; Schwartz, H. R.; Soli, G.; Watson, K.; Koga, R.; Crain, W. R.; Crawford, K. B.; Hansel, S. J.; Lau, D. D.

    1991-01-01

    JPL and the Aerospace Corporation have collected a fourth set of heavy ion single event effects (SEE) test data. Trends in SEE susceptibility (including soft errors and latchup) for state-of-the-art parts are displayed. All data are conveniently divided into two tables: one for MOS devices, and one for a shorter list of recently tested bipolar devices. In addition, a new table of data for latchup tests only (invariably CMOS processes) is given.

  7. Errors detected in pediatric oral liquid medication doses prepared in an automated workflow management system.

    PubMed

    Bledsoe, Sarah; Van Buskirk, Alex; Falconer, R James; Hollon, Andrew; Hoebing, Wendy; Jokic, Sladan

    2018-02-01

    To evaluate the effectiveness of barcode-assisted medication preparation (BCMP) technology in detecting oral liquid dose preparation errors. From June 1, 2013, through May 31, 2014, a total of 178,344 oral doses were processed at Children's Mercy, a 301-bed pediatric hospital, through an automated workflow management system. Doses containing errors detected by the system's barcode scanning or rejected by a pharmacist were further reviewed. Errors intercepted by the barcode scanning system were classified as (1) expired product, (2) incorrect drug, (3) incorrect concentration, and (4) technological error. Pharmacist-rejected doses were categorized into 6 categories based on the root cause of the preparation error: (1) expired product, (2) incorrect concentration, (3) incorrect drug, (4) incorrect volume, (5) preparation error, and (6) other. Of the 178,344 doses examined, 3,812 (2.1%) contained errors detected by either the barcode-assisted scanning system (1.8%, n = 3,291) or a pharmacist (0.3%, n = 521). The 3,291 errors prevented by the barcode-assisted system were most commonly classified as technological error and incorrect drug, followed by incorrect concentration and expired product. The 521 errors detected by pharmacists were most often classified as incorrect volume, preparation error, expired product, other, incorrect drug, and incorrect concentration. BCMP technology detected errors in 1.8% of pediatric oral liquid medication doses prepared in an automated workflow management system, with errors most commonly attributed to technological problems or incorrect drugs. Pharmacists rejected an additional 0.3% of studied doses.
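
    The reported percentages follow directly from the counts in the abstract; a quick check:

```python
total_doses = 178_344
barcode_errors = 3_291      # intercepted by barcode scanning
pharmacist_rejected = 521   # rejected on pharmacist review

barcode_rate = barcode_errors / total_doses
pharmacist_rate = pharmacist_rejected / total_doses
overall_rate = (barcode_errors + pharmacist_rejected) / total_doses

print(f"{barcode_rate:.1%}, {pharmacist_rate:.1%}, {overall_rate:.1%}")
# -> 1.8%, 0.3%, 2.1%
```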

  8. Probability of Detection of Genotyping Errors and Mutations as Inheritance Inconsistencies in Nuclear-Family Data

    PubMed Central

    Douglas, Julie A.; Skol, Andrew D.; Boehnke, Michael

    2002-01-01

    Gene-mapping studies routinely rely on checking for Mendelian transmission of marker alleles in a pedigree, as a means of screening for genotyping errors and mutations, with the implicit assumption that, if a pedigree is consistent with Mendel’s laws of inheritance, then there are no genotyping errors. However, the occurrence of inheritance inconsistencies alone is an inadequate measure of the number of genotyping errors, since the rate of occurrence depends on the number and relationships of genotyped pedigree members, the type of errors, and the distribution of marker-allele frequencies. In this article, we calculate the expected probability of detection of a genotyping error or mutation as an inheritance inconsistency in nuclear-family data, as a function of both the number of genotyped parents and offspring and the marker-allele frequency distribution. Through computer simulation, we explore the sensitivity of our analytic calculations to the underlying error model. Under a random-allele–error model, we find that detection rates are 51%–77% for multiallelic markers and 13%–75% for biallelic markers; detection rates are generally lower when the error occurs in a parent than in an offspring, unless a large number of offspring are genotyped. Errors are especially difficult to detect for biallelic markers with equally frequent alleles, even when both parents are genotyped; in this case, the maximum detection rate is 34% for four-person nuclear families. Error detection in families in which parents are not genotyped is limited, even with multiallelic markers. Given these results, we recommend that additional error checking (e.g., on the basis of multipoint analysis) be performed, beyond routine checking for Mendelian consistency. 
Furthermore, our results permit assessment of the plausibility of an observed number of inheritance inconsistencies for a family, allowing the detection of likely pedigree—rather than genotyping—errors in the early stages of a genome scan. Such early assessments are valuable in either the targeting of families for resampling or discontinued genotyping. PMID:11791214
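
    The Mendelian-consistency check underlying these detection rates is straightforward for a fully genotyped trio: the child's genotype must be explainable by one allele from each parent. A minimal sketch (the helper name is illustrative); note the third case below, which shows why errors at biallelic markers with heterozygous parents often go undetected:

```python
def trio_consistent(child, mother, father):
    """True if the child's biallelic genotype (tuple of two allele labels)
    can be explained by one allele from each genotyped parent."""
    a, b = child
    return (a in mother and b in father) or (b in mother and a in father)

print(trio_consistent((1, 2), (1, 1), (2, 2)))  # consistent transmission
print(trio_consistent((1, 2), (1, 1), (1, 1)))  # allele 2 absent in parents: detected
print(trio_consistent((1, 1), (1, 2), (1, 2)))  # an error here could hide: consistent
```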

  9. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1985-01-01

    A concatenated coding scheme for error control in data communications is analyzed. The inner code is used for both error correction and detection, whereas the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to produce a successful decoding or the outer code decoder detects the presence of errors after inner decoding. The probability of undetected error of the proposed scheme is derived, and an efficient method for computing this probability is presented. The throughput efficiency of the proposed error control scheme, incorporated with a selective-repeat ARQ retransmission strategy, is analyzed.
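
    The accept/retransmit logic can be illustrated with a tiny stand-in: an extended Hamming(8,4) inner code (corrects single-bit errors, detects double-bit errors) and a detect-only XOR checksum as the outer code. This is an illustrative sketch, not the specific codes analyzed in the report:

```python
def encode(d):
    """Extended Hamming(8,4): 4 data bits -> 8-bit codeword (positions 1..8)."""
    b = [0] * 9
    b[3], b[5], b[6], b[7] = d
    b[1] = b[3] ^ b[5] ^ b[7]
    b[2] = b[3] ^ b[6] ^ b[7]
    b[4] = b[5] ^ b[6] ^ b[7]
    b[8] = b[1] ^ b[2] ^ b[3] ^ b[4] ^ b[5] ^ b[6] ^ b[7]  # overall parity
    return b[1:]

def inner_decode(word):
    """Correct any single-bit error; detect (but not correct) double errors."""
    b = [0] + list(word)
    s = 0
    for i in range(1, 8):
        if b[i]:
            s ^= i                      # Hamming syndrome
    p = 0
    for i in range(1, 9):
        p ^= b[i]                       # overall parity
    if s and p == 0:
        return None, False              # double error: inner decoding fails
    if p:
        b[s if s else 8] ^= 1           # single error: flip the indicated bit
    return [b[3], b[5], b[6], b[7]], True

def outer_check(nibbles):
    """Detect-only outer code: last nibble must equal XOR of the data nibbles."""
    x = [0, 0, 0, 0]
    for n in nibbles[:-1]:
        x = [a ^ b for a, b in zip(x, n)]
    return x == nibbles[-1]

def receive(words):
    """Selective-repeat ARQ decision for one block of inner codewords."""
    block = []
    for w in words:
        data, ok = inner_decode(w)
        if not ok:
            return "retransmit"         # inner decoder failure
        block.append(data)
    return "accept" if outer_check(block) else "retransmit"

d1, d2 = [1, 0, 1, 1], [0, 1, 1, 0]
chk = [a ^ b for a, b in zip(d1, d2)]
print(receive([encode(d1), encode(d2), encode(chk)]))   # -> accept
```

    A single flipped bit in any codeword is silently corrected; two flipped bits in one codeword trigger the inner detection path and a retransmission request, mirroring the scheme's decision structure.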

  10. Towards limb position invariant myoelectric pattern recognition using time-dependent spectral features.

    PubMed

    Khushaba, Rami N; Takruri, Maen; Miro, Jaime Valls; Kodagoda, Sarath

    2014-07-01

    Recent studies in Electromyogram (EMG) pattern recognition reveal a gap between research findings and a viable clinical implementation of myoelectric control strategies. One of the important factors contributing to the limited performance of such controllers in practice is the variation in the limb position associated with normal use as it results in different EMG patterns for the same movements when carried out at different positions. However, the end goal of the myoelectric control scheme is to allow amputees to control their prosthetics in an intuitive and accurate manner regardless of the limb position at which the movement is initiated. In an attempt to reduce the impact of limb position on EMG pattern recognition, this paper proposes a new feature extraction method that extracts a set of power spectrum characteristics directly from the time-domain. The end goal is to form a set of features invariant to limb position. Specifically, the proposed method estimates the spectral moments, spectral sparsity, spectral flux, irregularity factor, and signals power spectrum correlation. This is achieved through using Fourier transform properties to form invariants to amplification, translation and signal scaling, providing an efficient and accurate representation of the underlying EMG activity. Additionally, due to the inherent temporal structure of the EMG signal, the proposed method is applied on the global segments of EMG data as well as the sliced segments using multiple overlapped windows. The performance of the proposed features is tested on EMG data collected from eleven subjects, while implementing eight classes of movements, each at five different limb positions. Practical results indicate that the proposed feature set can achieve significant reduction in classification error rates, in comparison to other methods, with ≈8% error on average across all subjects and limb positions. 
    A real-time implementation and demonstration are also provided and made available as a video supplement (see Appendix A).
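
    The amplitude invariance exploited above can be seen in a toy version of time-domain spectral moments (Hjorth-style, computed from successive differences): every moment scales with the square of the signal gain, so suitably normalized ratios such as the irregularity factor cancel it. A sketch of the idea, not the paper's exact feature set:

```python
import math

def spectral_moments(x):
    """Zeroth, second and fourth spectral moments computed directly in the
    time domain from successive differences (Parseval-style identities)."""
    dx = [b - a for a, b in zip(x, x[1:])]
    ddx = [b - a for a, b in zip(dx, dx[1:])]
    return (sum(v * v for v in x),
            sum(v * v for v in dx),
            sum(v * v for v in ddx))

def irregularity_factor(x):
    """m2 / sqrt(m0 * m4): each moment scales as gain^2, so the ratio is
    invariant to amplitude scaling (e.g., electrode gain changes)."""
    m0, m2, m4 = spectral_moments(x)
    return m2 / math.sqrt(m0 * m4)
```

    Multiplying the signal by any constant gain leaves the irregularity factor unchanged, which is the kind of invariance that helps across limb positions.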

  11. Form Overrides Meaning When Bilinguals Monitor for Errors

    PubMed Central

    Ivanova, Iva; Ferreira, Victor S.; Gollan, Tamar H.

    2016-01-01

    Bilinguals rarely produce unintended language switches, which may in part be because switches are detected and corrected by an internal monitor. But are language switches easier or harder to detect than within-language semantic errors? To approximate internal monitoring, bilinguals listened (Experiment 1) or read aloud (Experiment 2) stories, and detected language switches (translation equivalents or semantically unrelated to expected words) and within-language errors (semantically related or unrelated to expected words). Bilinguals detected semantically related within-language errors most slowly and least accurately, language switches more quickly and accurately than within-language errors, and (in Experiment 2), translation equivalents as quickly and accurately as unrelated language switches. These results suggest that internal monitoring of form (which can detect mismatches in language membership) completes earlier than, and is independent of, monitoring of meaning. However, analysis of reading times prior to error detection revealed meaning violations to be more disruptive for processing than language violations. PMID:28649169

  12. Confirmatory Factor Analysis of Sizing Me Up: Validation of an Obesity-Specific Health-Related Quality of Life Measure in Latino Youth.

    PubMed

    Tripicchio, Gina L; Borner, Kelsey B; Odar Stough, Cathleen; Poppert Cordts, Katrina; Dreyer Gillette, Meredith; Davis, Ann M

    2017-05-01

    This study aims to validate an obesity-specific health-related quality of life (HRQOL) measure, Sizing Me Up (SMU), in treatment-seeking Latino youth. Pediatric obesity has been associated with reduced HRQOL; therefore, valid measures are important for use in diverse populations that may be at increased risk for obesity and related comorbidities. Structural equation modeling tested the fit of the 5-subscale, 22-item SMU measure in Latino youth, 5-13 years of age, with obesity (N = 204). Invariance testing was conducted to examine equivalence between Latino and non-Latino groups (N = 250). SMU achieved acceptable fit in a Latino population [χ² = 428.33, df = 199, p < .001, Root Mean Squared Error of Approximation = 0.072 (0.062-0.082), Comparative Fit Index = 0.915, Tucker-Lewis Index = 0.901, Weighted Root Mean Square Residual = 1.2230]. Additionally, factor structure and factor loadings were invariant across Latino and non-Latino groups, but thresholds were not invariant. SMU is a valid measure of obesity-specific HRQOL in treatment-seeking Latino youth with obesity.
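
    For readers unfamiliar with the fit indices: under the common maximum-likelihood formula, the RMSEA point estimate is sqrt(max(χ² − df, 0) / (df · (N − 1))). Plugging in the values reported above gives about 0.075, in the same range as the reported 0.072; estimators such as WLSMV apply their own corrections, so exact agreement is not expected. A sketch:

```python
import math

def rmsea(chi2, df, n):
    """RMSEA point estimate under the standard ML formula
    (estimator-specific corrections omitted)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(round(rmsea(428.33, 199, 204), 3))   # -> 0.075
```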

  13. Computer simulation of the mathematical modeling involved in constitutive equation development: Via symbolic computations

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Tan, H. Q.; Dong, X.

    1989-01-01

    Development of new material models describing the high-temperature constitutive behavior of real materials represents an important area of research in engineering disciplines. Derivation of the mathematical expressions (constitutive equations) that describe this behavior can be time consuming, involved and error prone; thus intelligent application of symbolic systems to facilitate this tedious process can be of significant benefit. A computerized procedure (SDICE) capable of efficiently deriving potential-based constitutive models in analytical form is presented. This package, running under MACSYMA, has the following features: partial differentiation, tensor computations, automatic grouping and labeling of common factors, expression substitution and simplification, back substitution of invariant and tensorial relations, and a relational database. Limited aspects of invariant theory were also incorporated into SDICE, owing to the use of potentials as a starting point and the desire for these potentials to be frame invariant (objective). Finally, not only were flow and/or evolutionary laws calculated, but history-independent nonphysical coefficients were also determined in terms of physically measurable parameters, e.g., Young's modulus. The uniqueness of SDICE resides in its ability to manipulate expressions in a general yet predefined order and to simplify expressions so as to limit expression growth. Results are displayed, when applicable, using index notation.

  14. 3D reconstruction of laser projective point with projection invariant generated from five points on 2D target.

    PubMed

    Xu, Guan; Yuan, Jing; Li, Xiaotao; Su, Jian

    2017-08-01

    Vision measurement based on structured light plays a significant role in optical inspection research. A 2D target fixed with a line laser projector is designed to realize the transformations among the world, camera and image coordinate systems. The laser projective point and five non-collinear points randomly selected from the target are adopted to construct a projection invariant. The closed-form solutions of the 3D laser points are obtained from the homogeneous linear equations generated by the projection invariants. An optimization function is created from the parameterized re-projection errors of the laser points and the target points in the image coordinate system. The nonlinear optimization solutions for the world coordinates of the projection points, the camera parameters and the lens distortion coefficients are then obtained by minimizing this function. The accuracy of the 3D reconstruction is evaluated by comparing the displacements of the reconstructed laser points with the actual displacements. The effects of image quantity, lens distortion and noise are investigated in the experiments, which demonstrate that the reconstruction approach supports accurate measurement.

  15. Factorial Validity and Invariance Assessment of a Short Version of the Recalled Childhood Gender Identity/Role Questionnaire.

    PubMed

    Veale, Jaimie F

    2016-04-01

    Recalled childhood gender role/identity is a construct related to sexual orientation, abuse, and psychological health. The purpose of this study was to assess the factorial validity of a short version of Zucker et al.'s (2006) Recalled Childhood Gender Identity/Gender Role Questionnaire using confirmatory factor analysis and to test the stability of the factor structure across groups (measurement invariance). Six items of the questionnaire were completed online by 1,929 participants from a variety of gender identity and sexual orientation groups. Models with the six items loading onto one factor fit the data poorly. Items were removed for having a large proportion of error variance. Among birth-assigned females, a five-item model fit the data well, but there was evidence of differences in the scale's factor structure across gender identity, age, level of education, and country groups. Among birth-assigned males, the resulting four-item model did not account for all of the relationships between variables, and modeling for this resulted in a model that was almost saturated. This model also showed evidence of measurement variance across gender identity and sexual orientation groups. The models had good reliability and factor score determinacy. These findings suggest that results of previous studies assessing recalled childhood gender role/identity may have been susceptible to construct bias due to measurement variance across these groups. Future studies should assess measurement invariance between the groups they compare; where it is not found, the issue can be addressed by removing variant indicators and/or applying a partial invariance model.

  16. Factorial invariance of child self-report across age subgroups: a confirmatory factor analysis of ages 5 to 16 years utilizing the PedsQL 4.0 Generic Core Scales.

    PubMed

    Limbers, Christine A; Newman, Daniel A; Varni, James W

    2008-01-01

    The utilization of health-related quality of life (HRQOL) measurement in an effort to improve pediatric health and well-being and determine the value of health care services has grown dramatically over the past decade. The paradigm shift toward patient-reported outcomes (PROs) in clinical trials has provided the opportunity to emphasize the value and essential need for pediatric patient self-report. In order for HRQOL/PRO comparisons to be meaningful for subgroup analyses, it is essential to demonstrate factorial invariance. This study examined age subgroup factorial invariance of child self-report for ages 5 to 16 years on more than 8,500 children utilizing the PedsQL 4.0 Generic Core Scales. Multigroup Confirmatory Factor Analysis (MGCFA) was performed specifying a five-factor model. Two multigroup structural equation models, one with constrained parameters and the other with unconstrained parameters, were proposed to compare the factor loadings across the age subgroups. Metric invariance (i.e., equal factor loadings) across the age subgroups was demonstrated based on stability of the Comparative Fit Index between the two models, and several additional indices of practical fit including the Root Mean Squared Error of Approximation, the Non-Normed Fit Index, and the Parsimony Normed Fit Index. The findings support an equivalent five-factor structure across the age subgroups. Based on these data, it can be concluded that children across the age subgroups in this study interpreted items on the PedsQL 4.0 Generic Core Scales in a similar manner regardless of their age.

  17. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system

    PubMed Central

    Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.

    2015-01-01

    Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3,291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7,451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12,567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2,043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches, including data mining of electronic clinical information systems, are required to support more effective medication error detection and mitigation.
PMID:25583702
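
    The reported staff-detection rate and its 95% confidence interval are consistent with a normal-approximation interval per 1,000, taking 539 − 421 = 118 clinically important prescribing errors as detected. A sketch reproducing those figures:

```python
import math

def rate_per_1000(k, n, z=1.96):
    """Proportion k/n expressed per 1,000 with a normal-approximation 95% CI."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return 1000 * p, 1000 * (p - half), 1000 * (p + half)

rate, lo, hi = rate_per_1000(539 - 421, 539)   # detected clinically important errors
print(round(rate, 1), round(lo, 1), round(hi, 1))   # -> 218.9 184.0 253.8
```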

  18. False match elimination for face recognition based on SIFT algorithm

    NASA Astrophysics Data System (ADS)

    Gu, Xuyuan; Shi, Ping; Shao, Meide

    2011-06-01

    The SIFT (Scale Invariant Feature Transform) is a well-known algorithm used to detect and describe local features in images. It is invariant to image scale and rotation, and robust to noise and illumination changes. In this paper, a novel method for face recognition based on SIFT is proposed, which combines optimization of SIFT, mutual matching and Progressive Sample Consensus (PROSAC), and can effectively eliminate false matches in face recognition. Experiments on the ORL face database show that many false matches are eliminated and a better recognition rate is achieved.

  19. Local concurrent error detection and correction in data structures using virtual backpointers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, C.C.J.; Chen, P.P.; Fuchs, W.K.

    1989-11-01

    A new technique, based on virtual backpointers, is presented for local concurrent error detection and correction in linked data structures. Two new data structures utilizing virtual backpointers, the Virtual Double-Linked List and the B-Tree with Virtual Backpointers, are described. For these structures, double errors within a fixed-size checking window can be detected in constant time, and single errors detected during forward moves can be corrected in constant time.
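
    The flavor of a constant-time local check can be conveyed with an ordinary doubly linked list, where the forward and backward pointers around each node must agree; the paper's virtual backpointers generalize this so the redundant pointer need not be stored explicitly. A simplified analogue of the checking-window idea, not the VDLL itself:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None
        self.prev = None

def check_window(node):
    """O(1) local check of the double-link invariant around one node:
    a corrupted forward or backward pointer shows up as a mismatch."""
    ok = True
    if node.next is not None:
        ok &= node.next.prev is node
    if node.prev is not None:
        ok &= node.prev.next is node
    return ok
```

    Because the check only touches a node and its immediate neighbors, it can run concurrently with normal traversal, which is the essence of local concurrent error detection.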

  20. Single Versus Multiple Events Error Potential Detection in a BCI-Controlled Car Game With Continuous and Discrete Feedback.

    PubMed

    Kreilinger, Alex; Hiebel, Hannah; Müller-Putz, Gernot R

    2016-03-01

    This work aimed to find and evaluate a new method for detecting errors in continuous brain-computer interface (BCI) applications. Instead of classifying errors on a single-trial basis, the new method was based on multiple events (MEs) analysis to increase the accuracy of error detection. In a BCI-driven car game, based on motor imagery (MI), discrete events were triggered whenever subjects collided with coins and/or barriers. Coins counted as correct events, whereas barriers were errors. This new method, termed ME method, combined and averaged the classification results of single events (SEs) and determined the correctness of MI trials, which consisted of event sequences instead of SEs. The benefit of this method was evaluated in an offline simulation. In an online experiment, the new method was used to detect erroneous MI trials. Such MI trials were discarded and could be repeated by the users. We found that, even with low SE error potential (ErrP) detection rates, feasible accuracies can be achieved when combining MEs to distinguish erroneous from correct MI trials. Online, all subjects reached higher scores with error detection than without, at the cost of longer times needed for completing the game. Findings suggest that ErrP detection may become a reliable tool for monitoring continuous states in BCI applications when combining MEs. This paper demonstrates a novel technique for detecting errors in online continuous BCI applications, which yields promising results even with low single-trial detection rates.
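
    The benefit of combining multiple events can be illustrated with a majority vote over independent single-event classifications: even a modest single-trial ErrP detection rate compounds into a much more reliable per-trial decision. A sketch assuming independence, which real EEG events only approximate (the paper averages classifier outputs rather than voting):

```python
from math import comb

def majority_accuracy(p, n):
    """Probability that a majority vote over n independent single-event
    classifications (each correct with probability p) is correct (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(round(majority_accuracy(0.65, 1), 3),
      round(majority_accuracy(0.65, 5), 3))   # -> 0.65 0.765
```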

  1. Groups of adjacent contour segments for object detection.

    PubMed

    Ferrari, V; Fevrier, L; Jurie, F; Schmid, C

    2008-01-01

    We present a family of scale-invariant local shape features formed by chains of k connected, roughly straight contour segments (kAS), and their use for object class detection. kAS are able to cleanly encode pure fragments of an object boundary, without including nearby clutter. Moreover, they offer an attractive compromise between information content and repeatability, and encompass a wide variety of local shape structures. We also define a translation and scale invariant descriptor encoding the geometric configuration of the segments within a kAS, making kAS easy to reuse in other frameworks, for example as a replacement or addition to interest points. Software for detecting and describing kAS is released on lear.inrialpes.fr/software. We demonstrate the high performance of kAS within a simple but powerful sliding-window object detection scheme. Through extensive evaluations, involving eight diverse object classes and more than 1400 images, we 1) study the evolution of performance as the degree of feature complexity k varies and determine the best degree; 2) show that kAS substantially outperform interest points for detecting shape-based classes; 3) compare our object detector to the recent, state-of-the-art system by Dalal and Triggs [4].

  2. Convolution Comparison Pattern: An Efficient Local Image Descriptor for Fingerprint Liveness Detection

    PubMed Central

    Gottschlich, Carsten

    2016-01-01

    We present a new type of local image descriptor which yields binary patterns from small image patches. For the application to fingerprint liveness detection, we achieve rotation invariant image patches by taking the fingerprint segmentation and orientation field into account. We compute the discrete cosine transform (DCT) for these rotation invariant patches and attain binary patterns by comparing pairs of two DCT coefficients. These patterns are summarized into one or more histograms per image. Each histogram comprises the relative frequencies of pattern occurrences. Multiple histograms are concatenated and the resulting feature vector is used for image classification. We name this novel type of descriptor convolution comparison pattern (CCP). Experimental results show the usefulness of the proposed CCP descriptor for fingerprint liveness detection. CCP outperforms other local image descriptors such as LBP, LPQ and WLD on the LivDet 2013 benchmark. The CCP descriptor is a general type of local image descriptor which we expect to prove useful in areas beyond fingerprint liveness detection such as biological and medical image processing, texture recognition, face recognition and iris recognition, liveness detection for face and iris images, and machine vision for surface inspection and material classification. PMID:26844544
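The pattern-extraction step, comparing pairs of DCT coefficients to obtain one binary code per patch, can be sketched as follows. The patch size, the unnormalized DCT, and the particular coefficient pairs are illustrative assumptions, not the descriptor's published configuration:

```python
import numpy as np

def dct2(patch):
    """Unnormalized 2-D DCT-II of a square patch via an explicit
    cosine basis (NumPy only, to keep the sketch self-contained)."""
    n = patch.shape[0]
    k = np.arange(n)[:, None]
    basis = np.cos(np.pi * (np.arange(n) + 0.5) * k / n)
    return basis @ patch @ basis.T

def ccp_pattern(patch, pairs):
    """Binary pattern from pairwise comparisons of DCT coefficients,
    in the spirit of the CCP descriptor; the coefficient pairs chosen
    here are invented for illustration."""
    c = dct2(patch)
    bits = [int(c[i1, j1] > c[i2, j2]) for (i1, j1), (i2, j2) in pairs]
    return sum(b << k for k, b in enumerate(bits))

rng = np.random.default_rng(0)
patch = rng.random((8, 8))
pairs = [((0, 1), (1, 0)), ((0, 2), (2, 0)), ((1, 1), (2, 2))]
code = ccp_pattern(patch, pairs)
print(code)  # an integer in [0, 8) for three comparison pairs
```

A histogram of such codes over all patches of an image would then form the feature vector described in the abstract.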

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passarge, M; Fix, M K; Manser, P

    Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real-time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm); and pixel intensity deviation checks including and excluding high-dose-gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real-time and indicate the error source. J. V. Siebers receives funding support from Varian Medical Systems.
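The gamma evaluation stage mentioned in the sequence can be illustrated in one dimension. The profiles, 1 mm spacing, and global-maximum dose normalization below are simplifying assumptions; clinical gamma analysis works on 2-D images with interpolation:

```python
import math

def gamma_pass_rate(ref, meas, spacing=1.0, dta=3.0, dd=0.03):
    """1-D gamma evaluation (3%/3 mm by default): a measured point
    passes if some reference point is simultaneously close in position
    and in dose. A simplified sketch of one stage of the QA sequence."""
    passes = 0
    for i, m in enumerate(meas):
        gamma = min(
            math.sqrt(((i - j) * spacing / dta) ** 2 +
                      ((m - r) / (dd * max(ref))) ** 2)
            for j, r in enumerate(ref))
        passes += gamma <= 1.0
    return passes / len(meas)

ref = [0, 10, 50, 100, 50, 10, 0]
shifted = [0, 0, 10, 50, 100, 50, 10]   # 1 mm shift: within 3 mm DTA
scaled = [v * 1.2 for v in ref]         # 20% dose error: gross failure
print(gamma_pass_rate(ref, ref))        # 1.0
print(gamma_pass_rate(ref, shifted))    # 1.0
print(round(gamma_pass_rate(ref, scaled), 2))  # 0.57
```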

  4. WE-A-17A-03: Catheter Digitization in High-Dose-Rate Brachytherapy with the Assistance of An Electromagnetic (EM) Tracking System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, AL; Bhagwat, MS; Buzurovic, I

    Purpose: To investigate the use of a system using EM tracking, postprocessing and error-detection algorithms for measuring brachytherapy catheter locations and for detecting errors and resolving uncertainties in treatment-planning catheter digitization. Methods: An EM tracker was used to localize 13 catheters in a clinical surface applicator (A) and 15 catheters inserted into a phantom (B). Two pairs of catheters in (B) crossed paths at a distance <2 mm, producing an undistinguishable catheter artifact in that location. EM data was post-processed for noise reduction and reformatted to provide the dwell location configuration. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT). EM dwell digitization error was characterized in terms of the average and maximum distance between corresponding EM and CT dwells per catheter. The error detection rate (detected errors / all errors) was calculated for 3 types of errors: swap of two catheter numbers; incorrect catheter number identification superior to the closest position between two catheters (mix); and catheter-tip shift. Results: The averages ± 1 standard deviation of the average and maximum registration error per catheter were 1.9±0.7 mm and 3.0±1.1 mm for (A) and 1.6±0.6 mm and 2.7±0.8 mm for (B). The error detection rate was 100% (A and B) for swap errors, mix errors, and shift >4.5 mm (A) and >5.5 mm (B); errors were detected for shifts on average >2.0 mm (A) and >2.4 mm (B). Both mix errors associated with undistinguishable catheter artifacts were detected and at least one of the involved catheters was identified. Conclusion: We demonstrated the use of an EM tracking system for localization of brachytherapy catheters, detection of digitization errors and resolution of undistinguishable catheter artifacts. Automatic digitization may be possible with a registration between the imaging and the EM frame of reference. Research funded by the Kaye Family Award 2012.
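The per-catheter registration metric, the average and maximum distance between corresponding EM and CT dwell positions, can be sketched directly; the coordinates below are invented for illustration:

```python
import math

def dwell_registration_error(em_dwells, ct_dwells):
    """Average and maximum Euclidean distance between corresponding
    EM-tracked and CT-digitized dwell positions of one catheter,
    a sketch of the error metric described in the abstract."""
    dists = [math.dist(a, b) for a, b in zip(em_dwells, ct_dwells)]
    return sum(dists) / len(dists), max(dists)

# Hypothetical dwell positions (mm) for a single catheter.
em = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
ct = [(0.5, 0.0, 0.0), (5.0, 1.0, 0.0), (10.0, 0.0, 2.0)]
avg, mx = dwell_registration_error(em, ct)
print(round(avg, 2), round(mx, 2))  # 1.17 2.0
```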

  5. Detection of invariant natural killer T cells in ejaculates from infertile patients with chronic inflammation of genital tract.

    PubMed

    Duan, Yong-Gang; Chen, Shujian; Haidl, Gerhard; Allam, Jean-Pierre

    2017-08-01

    Chronic inflammation of the genital tract is thought to play a major role in male fertility disorders. Natural killer (NK) T cells are a heterogeneous group of T cells that share properties of both T cells and NK cells and display immunoregulatory properties. However, little is known regarding the presence and function of NK T cells in ejaculates from patients with chronic inflammation of the genital tract. Invariant NK T (iNK T) cells were detected by the invariant (Vα24-JαQ) TCR chain in ejaculates from patients suffering from chronic inflammation of the genital tract (CIGT) using flow cytometry and double-staining immunofluorescence (n=40). Inflammatory cytokines interleukin (IL)-6, IL-17, and IFN-γ were detected in cell-free seminal plasma using an enzyme-linked immunosorbent assay (ELISA). The correlation between the percentage of iNK T cells and spermatozoa count, motility, vitality, and seminal IL-6, IL-17, and IFN-γ was investigated. Percentages of iNK T cells above 10% were detected in 50% of patients (the CIGT-NKT+ group). A negative correlation was detected between the percentage of iNK T cells and spermatozoa count (r=-.5957, P=.0056), motility (r=-.6163, P=.0038), and vitality (r=-.8032, P=.0019) in the CIGT-NKT+ group (n=20). Interestingly, a significant correlation of iNK T cells with seminal IL-6 (r=.7083, P=.0005) and IFN-γ (r=.9578, P<.0001) was detected, whereas no correlation was found between iNK T cells and IL-17 (r=-.1557, P=.5122) in the CIGT-NKT+ group. The proliferative response of iNK T cells could accompany an inflammatory response to spermatozoa and consequently influence sperm quality through secretion of IFN-γ, but not IL-17, under chronic inflammatory conditions. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  6. Pathological brain detection based on wavelet entropy and Hu moment invariants.

    PubMed

    Zhang, Yudong; Wang, Shuihua; Sun, Ping; Phillips, Preetha

    2015-01-01

    With the aim of developing an accurate pathological brain detection system, we proposed a novel automatic computer-aided diagnosis (CAD) to detect pathological brains from normal brains obtained by magnetic resonance imaging (MRI) scanning. The problem still remained a challenge for technicians and clinicians, since MR imaging generated an exceptionally large information dataset. A new two-step approach was proposed in this study. We used wavelet entropy (WE) and Hu moment invariants (HMI) for feature extraction, and the generalized eigenvalue proximal support vector machine (GEPSVM) for classification. To further enhance classification accuracy, the popular radial basis function (RBF) kernel was employed. The 10 runs of k-fold stratified cross validation result showed that the proposed "WE + HMI + GEPSVM + RBF" method was superior to existing methods w.r.t. classification accuracy. It obtained the average classification accuracies of 100%, 100%, and 99.45% over Dataset-66, Dataset-160, and Dataset-255, respectively. The proposed method is effective and can be applied to realistic use.
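Hu moment invariants, one half of the feature extraction above, derive from normalized central moments and are invariant to translation (and, for the full set of seven, to rotation and scale). A NumPy-only sketch of the first two invariants, with an invented test image; production pipelines typically use cv2.HuMoments instead:

```python
import numpy as np

def hu_first_two(img):
    """First two Hu moment invariants from normalized central moments.
    Only a sketch: the full descriptor uses all seven invariants."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):   # central moment (translation invariant)
        return ((x - xc) ** p * (y - yc) ** q * img).sum()
    def eta(p, q):  # normalized central moment (adds scale invariance)
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2

# Invariance check: shifting the blob leaves the moments unchanged.
img = np.zeros((32, 32)); img[8:16, 8:20] = 1.0
shifted = np.zeros((32, 32)); shifted[12:20, 6:18] = 1.0
print(hu_first_two(img))
print(hu_first_two(shifted))  # same values: translation invariant
```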

  7. Computation of Quasiperiodic Normally Hyperbolic Invariant Tori: Rigorous Results

    NASA Astrophysics Data System (ADS)

    Canadell, Marta; Haro, Àlex

    2017-12-01

    The development of efficient methods for detecting quasiperiodic oscillations and computing the corresponding invariant tori is a subject of great importance in dynamical systems and their applications in science and engineering. In this paper, we prove the convergence of a new Newton-like method for computing quasiperiodic normally hyperbolic invariant tori carrying quasiperiodic motion in smooth families of real-analytic dynamical systems. The main result is stated as an a posteriori KAM-like theorem that allows controlling the inner dynamics on the torus with appropriate detuning parameters, in order to obtain a prescribed quasiperiodic motion. The Newton-like method leads to several fast and efficient computational algorithms, which are discussed and tested in a companion paper (Canadell and Haro in J Nonlinear Sci, 2017. doi: 10.1007/s00332-017-9388-z), in which new mechanisms of breakdown are presented.

  8. Identification of black hole horizons using scalar curvature invariants

    NASA Astrophysics Data System (ADS)

    Coley, Alan; McNutt, David

    2018-01-01

    We introduce the concept of a geometric horizon, which is a surface distinguished by the vanishing of certain curvature invariants which characterize its special algebraic character. We motivate its use for the detection of the event horizon of a stationary black hole by providing a set of appropriate scalar polynomial curvature invariants that vanish on this surface. We extend this result by proving that a non-expanding horizon, which generalizes a Killing horizon, coincides with the geometric horizon. Finally, we consider the imploding spherically symmetric metrics and show that the geometric horizon identifies a unique quasi-local surface corresponding to the unique spherically symmetric marginally trapped tube, implying that the spherically symmetric dynamical black holes admit a geometric horizon. Based on these results, we propose a suite of conjectures concerning the application of geometric horizons to more general dynamical black hole scenarios.

  9. Measurement Invariance of the Internet Addiction Test Among Hong Kong, Japanese, and Malaysian Adolescents.

    PubMed

    Lai, Ching-Man; Mak, Kwok-Kei; Cheng, Cecilia; Watanabe, Hiroko; Nomachi, Shinobu; Bahar, Norharlina; Young, Kimberly S; Ko, Huei-Chen; Kim, Dongil; Griffiths, Mark D

    2015-10-01

    There has been increased research examining the psychometric properties on the Internet Addiction Test (IAT) in different populations. This population-based study examined the psychometric properties and measurement invariance of the IAT in adolescents from three Asian countries. In the Asian Adolescent Risk Behavior Survey (AARBS), 2,535 secondary school students (55.9% girls) aged 12-18 years from Hong Kong (n=844), Japan (n=744), and Malaysia (n=947) completed a survey in 2012-2013 school year. A nested hierarchy of hypotheses concerning the IAT cross-country invariance was tested using multigroup confirmatory factor analyses. Replicating past findings in Hong Kong adolescents, the construct of the IAT is best represented by a second-order three-factor structure in Malaysian and Japanese adolescents. Configural, metric, scalar, and partial strict factorial invariance was established across the three samples. No cross-country differences on Internet addiction were detected at the latent mean level. This study provided empirical support for the IAT as a reliable and factorially stable instrument, and valid to be used across Asian adolescent populations.

  10. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Unmodeled observation error induces bias when inferring patterns and dynamics of species occurrence via aural detections

    USGS Publications Warehouse

    McClintock, Brett T.; Bailey, Larissa L.; Pollock, Kenneth H.; Simons, Theodore R.

    2010-01-01

    The recent surge in the development and application of species occurrence models has been associated with an acknowledgment among ecologists that species are detected imperfectly due to observation error. Standard models now allow unbiased estimation of occupancy probability when false negative detections occur, but this is conditional on no false positive detections and sufficient incorporation of explanatory variables for the false negative detection process. These assumptions are likely reasonable in many circumstances, but there is mounting evidence that false positive errors and detection probability heterogeneity may be much more prevalent in studies relying on auditory cues for species detection (e.g., songbird or calling amphibian surveys). We used field survey data from a simulated calling anuran system of known occupancy state to investigate the biases induced by these errors in dynamic models of species occurrence. Despite the participation of expert observers in simplified field conditions, both false positive errors and site detection probability heterogeneity were extensive for most species in the survey. We found that even low levels of false positive errors, constituting as little as 1% of all detections, can cause severe overestimation of site occupancy, colonization, and local extinction probabilities. Further, unmodeled detection probability heterogeneity induced substantial underestimation of occupancy and overestimation of colonization and local extinction probabilities. Completely spurious relationships between species occurrence and explanatory variables were also found. Such misleading inferences would likely have deleterious implications for conservation and management programs. 
We contend that all forms of observation error, including false positive errors and heterogeneous detection probabilities, must be incorporated into the estimation framework to facilitate reliable inferences about occupancy and its associated vital rate parameters.
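The core finding, that even rare false positives badly bias naive occupancy estimates, is easy to reproduce in simulation. A sketch with invented parameter values (true occupancy 0.30, per-survey detection probability 0.5, five surveys per site):

```python
import random

def naive_occupancy(psi=0.3, p=0.5, fp=0.02, surveys=5, sites=20000, seed=1):
    """Fraction of sites with at least one detection (the naive
    occupancy estimate) when unoccupied sites can produce false
    positives. All parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(sites):
        occupied = rng.random() < psi
        rate = p if occupied else fp
        detected += any(rng.random() < rate for _ in range(surveys))
    return detected / sites

# True occupancy is 0.30; with no false positives the naive estimate is
# close to 0.30 * (1 - 0.5**5) ~ 0.29, but a small per-survey
# false-positive rate inflates it well above the truth.
clean = naive_occupancy(fp=0.0)
biased = naive_occupancy(fp=0.02)
print(round(clean, 3), round(biased, 3))
```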

  12. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors to detect faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that flows an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
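The key insight, preferring address schemes that let an incurred error flow forward to a single loop-exit detector, can be illustrated behaviourally. This sketch simulates incremental address generation in plain code; it is not PRESAGE's actual compiler transformation:

```python
def walk_addresses(base, stride, n, flip_at=None):
    """Incrementally generated addresses (base + k*stride by repeated
    addition). A bit-flip injected into the running address keeps
    propagating, so one detector at the loop exit catches it. This is
    a behavioural sketch of the abstract's key insight only."""
    addr, visited = base, []
    for k in range(n):
        if k == flip_at:
            addr ^= 0x40          # simulated soft error (single bit-flip)
        visited.append(addr)
        addr += stride            # the error, once incurred, flows forward
    expected_final = base + n * stride
    return visited, addr == expected_final   # loop-exit detector

_, clean_ok = walk_addresses(base=0x1000, stride=8, n=100)
_, faulty_ok = walk_addresses(base=0x1000, stride=8, n=100, flip_at=42)
print(clean_ok, faulty_ok)  # True False
```

Had each address been recomputed independently per iteration, the flip would corrupt one access and vanish, leaving nothing for an end-of-loop check to detect.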

  14. Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity and Efficient Estimators (Open Access, Publisher’s Version)

    DTIC Science & Technology

    2012-09-27

    we require no entangling gates or ancillary systems for the procedure. In contrast with [19], our method is not restricted to processes that are...states, such as those recently developed for use with permutation-invariant states [60], matrix product states [61] or multi-scale entangled states [62...by adjoining an ancilla, preparing the maximally entangled state |ψ0〉, and applying E); then do compressed quantum state tomography on ρE ; see

  15. Asymptotic stability estimates near an equilibrium point

    NASA Astrophysics Data System (ADS)

    Dumas, H. Scott; Meyer, Kenneth R.; Palacián, Jesús F.; Yanguas, Patricia

    2017-07-01

    We use the error bounds for adiabatic invariants found in the work of Chartier, Murua and Sanz-Serna [3] to bound the solutions of a Hamiltonian system near an equilibrium over exponentially long times. Our estimates depend only on the linearized system and not on the higher order terms as in KAM theory, nor do we require any steepness or convexity conditions as in Nekhoroshev theory. We require that the equilibrium point where our estimate applies satisfy a type of formal stability called Lie stability.

  16. Invariant Tori in the Secular Motions of the Three-body Planetary Systems

    NASA Astrophysics Data System (ADS)

    Locatelli, Ugo; Giorgilli, Antonio

    We consider the applicability of the KAM theorem to a realistic three-body problem. In the framework of the dynamics averaged over the fast angles for the Sun-Jupiter-Saturn system, we can prove the perpetual stability of the orbit. The proof is based on semi-numerical algorithms requiring both explicit algebraic manipulation of series and analytical estimates. The proof is made rigorous by using interval arithmetic to control the numerical errors.

  17. A Dual Frequency Carrier Phase Error Difference Checking Algorithm for the GNSS Compass.

    PubMed

    Liu, Shuo; Zhang, Lei; Li, Jian

    2016-11-24

    The performance of the Global Navigation Satellite System (GNSS) compass is related to the quality of carrier phase measurement, so processing the carrier phase error properly is important for improving GNSS compass accuracy. In this work, we propose a dual frequency carrier phase error difference checking algorithm for the GNSS compass. The algorithm aims at eliminating large carrier phase errors in dual frequency double-differenced carrier phase measurements according to the error difference between the two frequencies. The advantage of the proposed algorithm is that it needs no additional environment information and performs well against multiple large errors compared with previous research. The core of the proposed algorithm is removing the geometric distance from the dual frequency carrier phase measurement, after which the carrier phase error is separated and detectable. We generate the Double-Differenced Geometry-Free (DDGF) measurement according to the fact that carrier phase measurements on different frequencies contain the same geometric distance. Then, we propose the DDGF detection to detect a large carrier phase error difference between the two frequencies. The theoretical performance of the proposed DDGF detection is analyzed. An open-sky test, a man-made multipath test and an urban vehicle test were carried out to evaluate the performance of the proposed algorithm. The results show that the proposed DDGF detection is able to detect large errors in dual frequency carrier phase measurement by checking the error difference between the two frequencies. After the DDGF detection, the accuracy of the baseline vector in the GNSS compass is improved.
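The geometry-removal idea can be sketched numerically: multiplying each double-differenced phase (in cycles) by its wavelength and subtracting cancels the common geometric distance, leaving a quantity in which a large phase error stands out. The wavelengths are approximate GPS values; the threshold and data are invented:

```python
L1_WAVELEN = 0.1903  # m, approximate GPS L1 wavelength
L2_WAVELEN = 0.2442  # m, approximate GPS L2 wavelength

def ddgf(dd_phi_l1, dd_phi_l2):
    """Double-Differenced Geometry-Free combination (metres): the
    geometric distance common to both frequencies cancels, leaving
    ambiguities plus any carrier phase error."""
    return L1_WAVELEN * dd_phi_l1 - L2_WAVELEN * dd_phi_l2

def detect_phase_error(dd_phi_l1, dd_phi_l2, reference, threshold=0.05):
    """Flag an epoch whose DDGF departs from its reference value by
    more than the threshold (threshold value is illustrative)."""
    return abs(ddgf(dd_phi_l1, dd_phi_l2) - reference) > threshold

rho = 12.34  # the same geometric distance enters both frequencies
ref = ddgf(rho / L1_WAVELEN, rho / L2_WAVELEN)   # error-free reference
print(detect_phase_error(rho / L1_WAVELEN, rho / L2_WAVELEN, ref))        # False
print(detect_phase_error(rho / L1_WAVELEN + 3.0, rho / L2_WAVELEN, ref))  # True (3-cycle L1 error)
```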

  18. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Kasami, T.; Fujiwara, T.; Lin, S.

    1986-01-01

    In this paper, a concatenated coding scheme for error control in data communications is presented and analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error (or decoding error) of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is also analyzed. Three specific examples are presented. One of the examples is proposed for error control in the NASA Telecommand System.
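The division of labour described above, inner code for correction and detection, outer code for detection only, with retransmission on any detected error, can be sketched with toy codes. A 3x repetition code and CRC-32 merely stand in for the paper's actual inner and outer codes:

```python
import zlib

def outer_encode(payload):
    """Outer code (detection only): append a CRC-32 over the payload."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def inner_encode(frame):
    """Inner code (correction and detection): 3x byte repetition."""
    return bytes(byte for byte in frame for _ in range(3))

def receive(coded):
    """Inner decode by majority vote, then outer CRC check; returning
    None means 'error detected -- request retransmission'."""
    frame = bytes(coded[i] if coded[i] in (coded[i + 1], coded[i + 2]) else coded[i + 1]
                  for i in range(0, len(coded), 3))
    payload, crc = frame[:-4], frame[-4:]
    return payload if zlib.crc32(payload).to_bytes(4, "big") == crc else None

msg = b"telecommand"
coded = bytearray(inner_encode(outer_encode(msg)))
coded[5] ^= 0xFF                    # one corrupted copy: inner code corrects it
print(receive(bytes(coded)))        # b'telecommand'
coded[6] ^= 0xFF; coded[7] ^= 0xFF  # two copies corrupted: outer code detects it
print(receive(bytes(coded)))        # None -> retransmit
```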

  19. Self-checking self-repairing computer nodes using the mirror processor

    NASA Technical Reports Server (NTRS)

    Tamir, Yuval

    1992-01-01

    Circuitry added to fault-tolerant systems for concurrent error detection usually reduces performance. Using a technique called micro rollback, it is possible to eliminate most of the performance penalty of concurrent error detection. Error detection is performed in parallel with intermodule communication, and erroneous state changes are later undone. The author reports on the design and implementation of a VLSI RISC microprocessor, called the Mirror Processor (MP), which is capable of micro rollback. In order to achieve concurrent error detection, two MP chips operate in lockstep, comparing external signals and a signature of internal signals every clock cycle. If a mismatch is detected, both processors roll back to the beginning of the cycle in which the error occurred. In some cases the erroneous state is corrected by copying a value from the fault-free processor to the faulty processor. The architecture, microarchitecture, and VLSI implementation of the MP are described, emphasizing its error-detection, error-recovery, and self-diagnosis capabilities.
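The lockstep-compare-and-rollback cycle can be sketched behaviourally. This toy model checkpoints one cycle of state, compares "external signals" every cycle, and re-executes after a mismatch; for illustration it assumes the faulty core is known (the real MP uses self-diagnosis to identify it):

```python
class Core:
    """Toy single-register 'processor' for illustrating lockstep
    comparison with micro rollback; all names are illustrative."""
    def __init__(self):
        self.acc = 0
        self.saved = 0                      # one-cycle state checkpoint
    def step(self, value, fault=False):
        self.saved = self.acc               # checkpoint before the cycle
        self.acc += value
        if fault:
            self.acc ^= 0x04                # injected soft error
        return self.acc                     # 'external signal' to compare
    def rollback(self):
        self.acc = self.saved

def lockstep_run(inputs, fault_cycle=None):
    a, b = Core(), Core()
    for cycle, v in enumerate(inputs):
        out_a = a.step(v, fault=(cycle == fault_cycle))
        out_b = b.step(v)
        if out_a != out_b:                  # mismatch: micro rollback
            a.rollback(); b.rollback()
            a.acc = b.acc                   # copy state from the core assumed fault-free
            a.step(v); b.step(v)            # re-execute the failed cycle
    return a.acc, b.acc

print(lockstep_run([1, 2, 3, 4], fault_cycle=2))  # (10, 10): error undone
```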

  20. Differential detection in quadrature-quadrature phase shift keying (Q2PSK) systems

    NASA Astrophysics Data System (ADS)

    El-Ghandour, Osama M.; Saha, Debabrata

    1991-05-01

    A generalized quadrature-quadrature phase shift keying (Q2PSK) signaling format is considered for differential encoding and differential detection. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. Symbol error rate is found to be approximately twice the symbol error rate in a quaternary DPSK system operating at the same Eb/N0. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK. When the error is due to AWGN, the ratio of double error rate to single error rate can be very high, and the ratio may approach zero at high SNR. To improve error rate, differential detection through maximum-likelihood decoding based on multiple or N symbol observations is considered. If N and SNR are large this decoding gives a 3-dB advantage in error rate over conventional N = 2 differential detection, fully recovering the energy loss (as compared to coherent detection) if the observation is extended to a large number of symbol durations.

  1. Neural evidence for description dependent reward processing in the framing effect.

    PubMed

    Yu, Rongjun; Zhang, Ping

    2014-01-01

    Human decision making can be influenced by emotionally valenced contexts, known as the framing effect. We used event-related brain potentials to investigate how framing influences the encoding of reward. We found that the feedback related negativity (FRN), which indexes the "worse than expected" negative prediction error in the anterior cingulate cortex (ACC), was more negative for the negative frame than for the positive frame in the win domain. Consistent with previous findings that the FRN is not sensitive to "better than expected" positive prediction error, the FRN did not differentiate the positive and negative frame in the loss domain. Our results provide neural evidence that the description invariance principle which states that reward representation and decision making are not influenced by how options are presented is violated in the framing effect.

  2. Verifiable Adaptive Control with Analytical Stability Margins by Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2010-01-01

    This paper presents a verifiable model-reference adaptive control method based on an optimal control formulation for linear uncertain systems. A predictor model is formulated to enable a parameter estimation of the system parametric uncertainty. The adaptation is based on both the tracking error and predictor error. Using a singular perturbation argument, it can be shown that the closed-loop system tends to a linear time invariant model asymptotically under an assumption of fast adaptation. A stability margin analysis is given to estimate a lower bound of the time delay margin using a matrix measure method. Using this analytical method, the free design parameter n of the optimal control modification adaptive law can be determined to meet a specification of stability margin for verification purposes.

  3. A study of attitude control concepts for precision-pointing non-rigid spacecraft

    NASA Technical Reports Server (NTRS)

    Likins, P. W.

    1975-01-01

    Attitude control concepts for use onboard structurally nonrigid spacecraft that must be pointed with great precision are examined. The task of determining the eigenproperties of a system of linear time-invariant equations (in terms of hybrid coordinates) representing the attitude motion of a flexible spacecraft is discussed. Literal characteristics are developed for the associated eigenvalues and eigenvectors of the system. A method is presented for determining the poles and zeros of the transfer function describing the attitude dynamics of a flexible spacecraft characterized by hybrid coordinate equations. Alterations are made to linear regulator and observer theory to accommodate modeling errors. The results show that a model error vector, which evolves from an error system, can be added to a reduced system model, estimated by an observer, and used by the control law to render the system less sensitive to uncertain magnitudes and phase relations of truncated modes and to external disturbance effects. A hybrid coordinate formulation using assumed mode shapes, rather than the usual finite element approach, is also provided.

  4. Automatic co-registration of 3D multi-sensor point clouds

    NASA Astrophysics Data System (ADS)

    Persad, Ravi Ancil; Armenakis, Costas

    2017-08-01

    We propose an approach for the automatic coarse alignment of 3D point clouds which have been acquired from various platforms. The method is based on 2D keypoint matching performed on height map images of the point clouds. Initially, a multi-scale wavelet keypoint detector is applied, followed by adaptive non-maxima suppression. A scale, rotation and translation-invariant descriptor is then computed for all keypoints. The descriptor is built using the log-polar mapping of Gabor filter derivatives in combination with the so-called Rapid Transform. In the final step, source and target height map keypoint correspondences are determined using a bi-directional nearest neighbour similarity check, together with a threshold-free modified-RANSAC. Experiments with urban and non-urban scenes are presented and results show scale errors ranging from 0.01 to 0.03, 3D rotation errors in the order of 0.2° to 0.3° and 3D translation errors from 0.09 m to 1.1 m.
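The bi-directional nearest-neighbour similarity check used in the final matching step can be sketched as follows; the toy 2D descriptors and function names are illustrative assumptions, not the paper's implementation:

```python
def nearest(desc, pool):
    """Index of the descriptor in `pool` closest (squared Euclidean) to `desc`."""
    dists = [sum((a - b) ** 2 for a, b in zip(desc, p)) for p in pool]
    return min(range(len(pool)), key=lambda i: dists[i])

def mutual_matches(source, target):
    """Keep only correspondences where each keypoint is the other's
    nearest neighbour (the bi-directional check)."""
    matches = []
    for i, s in enumerate(source):
        j = nearest(s, target)
        if nearest(target[j], source) == i:
            matches.append((i, j))
    return matches

# Toy descriptors: the third source keypoint has no mutual partner.
src = [(0.0, 0.0), (5.0, 5.0), (9.0, 1.0)]
tgt = [(5.1, 4.9), (0.2, 0.1), (4.0, 4.0)]
print(mutual_matches(src, tgt))  # → [(0, 1), (1, 0)]
```

The surviving correspondences would then be handed to the RANSAC-style outlier rejection stage.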

  5. Local concurrent error detection and correction in data structures using virtual backpointers

    NASA Technical Reports Server (NTRS)

    Li, C. C.; Chen, P. P.; Fuchs, W. K.

    1987-01-01

    A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List, and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
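The idea of an O(1) local check during a forward move can be illustrated on an ordinary doubly linked list; this is a simplification, since the paper's structures use virtual backpointers rather than the stored ones shown here:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.prev = None
        self.next = None

def link(a, b):
    a.next, b.prev = b, a

def check_forward_move(node):
    """O(1) local check during a forward move: the successor's
    backpointer must point back at the current node."""
    nxt = node.next
    return nxt is None or nxt.prev is node

# Build a small list a <-> b <-> c, then corrupt one backpointer.
a, b, c = Node('a'), Node('b'), Node('c')
link(a, b); link(b, c)
ok_before = check_forward_move(a) and check_forward_move(b)
c.prev = a                     # simulated pointer error
ok_after = check_forward_move(b)
print(ok_before, ok_after)     # → True False
```

Because each check touches only the current node and its successor, detection cost stays constant regardless of list length.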

  6. Local concurrent error detection and correction in data structures using virtual backpointers

    NASA Technical Reports Server (NTRS)

    Li, Chung-Chi Jim; Chen, Paul Peichuan; Fuchs, W. Kent

    1989-01-01

    A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List, and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.

  7. SU-E-T-310: Targeting Safety Improvements Through Analysis of Near-Miss Error Detection Points in An Incident Learning Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novak, A; Nyflot, M; Sponseller, P

    2014-06-01

    Purpose: Radiation treatment planning involves a complex workflow that can make safety improvement efforts challenging. This study utilizes an incident reporting system to identify detection points of near-miss errors, in order to guide our departmental safety improvement efforts. Previous studies have examined where errors arise, but not where they are detected or their patterns. Methods: 1377 incidents were analyzed from a departmental near-miss error reporting system from 3/2012–10/2013. All incidents were prospectively reviewed weekly by a multi-disciplinary team, and assigned a near-miss severity score ranging from 0–4 reflecting potential harm (no harm to critical). A 98-step consensus workflow was used to determine origination and detection points of near-miss errors, categorized into 7 major steps (patient assessment/orders, simulation, contouring/treatment planning, pre-treatment plan checks, therapist/on-treatment review, post-treatment checks, and equipment issues). Categories were compared using ANOVA. Results: In the 7-step workflow, 23% of near-miss errors were detected within the same step in the workflow, while an additional 37% were detected by the next step in the workflow, and 23% were detected two steps downstream. Errors detected further from origination were more severe (p<.001; Figure 1). The most common source of near-miss errors was treatment planning/contouring, with 476 near misses (35%). Of those 476, only 72 (15%) were found before leaving treatment planning, 213 (45%) were found at physics plan checks, and 191 (40%) were caught at the therapist pre-treatment chart review or on portal imaging. Errors that passed through physics plan checks and were detected by therapists were more severe than other errors originating in contouring/treatment planning (1.81 vs 1.33, p<0.001).
Conclusion: Errors caught by radiation treatment therapists tend to be more severe than errors caught earlier in the workflow, highlighting the importance of safety checks in dosimetry and physics. We are utilizing our findings to improve manual and automated checklists for dosimetry and physics.

  8. Error-Related Psychophysiology and Negative Affect

    ERIC Educational Resources Information Center

    Hajcak, G.; McDonald, N.; Simons, R.F.

    2004-01-01

    The error-related negativity (ERN/Ne) and error positivity (Pe) have been associated with error detection and response monitoring. More recently, heart rate (HR) and skin conductance (SC) have also been shown to be sensitive to the internal detection of errors. An enhanced ERN has consistently been observed in anxious subjects and there is some…

  9. Principal visual word discovery for automatic license plate detection.

    PubMed

    Zhou, Wengang; Li, Houqiang; Lu, Yijuan; Tian, Qi

    2012-09-01

    License plate detection is widely considered a solved problem, with many systems already in operation. However, the existing algorithms or systems work well only under some controlled conditions. There are still many challenges for license plate detection in an open environment, such as various observation angles, background clutter, scale changes, multiple plates, uneven illumination, and so on. In this paper, we propose a novel scheme to automatically locate license plates by principal visual word (PVW) discovery and local feature matching. Observing that characters in different license plates are duplicates of each other, we bring in the idea of using the bag-of-words (BoW) model popularly applied in partial-duplicate image search. Unlike the classic BoW model, for each plate character, we automatically discover the PVW characterized with geometric context. Given a new image, the license plates are extracted by matching local features with PVW. Besides license plate detection, our approach can also be extended to the detection of logos and trademarks. Due to the invariance properties of scale-invariant feature transform (SIFT) features, our method can adaptively deal with various changes in the license plates, such as rotation, scaling, illumination, etc. Promising results of the proposed approach are demonstrated with an experimental study in license plate detection.

  10. Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael

    2009-01-01

    Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
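The ABFT check for matrix multiplication rests on a checksum identity: the column sums of C = A·B must equal the product of A's column-sum row with B. A minimal pure-Python sketch of this Huang-Abraham-style checksum idea (an illustration, not the flight implementation):

```python
def matmul(A, B):
    """Plain nested-loop matrix product on lists of lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def colsums(M):
    return [sum(row[j] for row in M) for j in range(len(M[0]))]

def abft_check(A, B, C):
    """Column sums of C must equal (column-sum row of A) times B."""
    expected = matmul([colsums(A)], B)[0]
    return colsums(C) == expected

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = matmul(A, B)
print(abft_check(A, B, C))   # consistent product → True
C[0][0] += 1                 # inject a bit-flip-like error into C
print(abft_check(A, B, C))   # checksum mismatch detected → False
```

The check costs one extra row-vector/matrix product, far less than recomputing C, which is what makes ABFT attractive for radiation-induced error detection.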

  11. An integrated image matching algorithm and its application in the production of lunar map based on Chang'E-2 images

    NASA Astrophysics Data System (ADS)

    Wang, F.; Ren, X.; Liu, J.; Li, C.

    2012-12-01

    An accurate topographic map is a requisite for nearly every phase of research on the lunar surface, as well as an essential tool for spacecraft mission planning and operations. Automatic image matching is a key component in this process that can ensure both quality and efficiency in the production of a digital topographic map covering the whole Moon. It also provides the basis for lunar photogrammetric block adjustment. Image matching is relatively easy when image texture conditions are good. However, on lunar images with characteristics such as constantly changing lighting conditions, large rotation angles, sparse or homogeneous texture and low image contrast, it becomes a difficult and challenging task. Thus, we require a robust algorithm that is capable of dealing with lighting effects and image deformation to fulfill this task. In order to obtain a comprehensive review of currently dominant feature point extraction operators and test whether they are suitable for lunar images, we applied several operators, such as Harris, Forstner, Moravec and SIFT, to images from the Chang'E-2 spacecraft. We found that SIFT (Scale-Invariant Feature Transform) is a scale-invariant interest point detector that provides robustness against errors caused by image distortions from scale, orientation or illumination condition changes. Meanwhile, its capability in detecting blob-like interest points suits the image characteristics of Chang'E-2. However, its unevenly distributed and low-accuracy matching results cannot meet the practical requirements of lunar photogrammetry. In contrast, some high-precision corner detectors, such as Harris, Forstner and Moravec, are limited by their sensitivity to geometric rotation. Therefore, this paper proposes a least squares matching algorithm that combines the advantages of both local feature detectors and corner detectors. We tested this novel method at several sites.
The accuracy assessment shows that the overall matching error is within 0.3 pixel and the matching reliability reaches 98%, which proves its robustness. This method has been successfully applied to over 700 scenes of lunar images that cover the entire Moon, finding corresponding pixels in pairs of images from adjacent tracks and aiding automatic lunar image mosaicking. The completion of the 7-meter-resolution lunar map shows the promise of this least squares matching algorithm in applications with a large quantity of images to be processed.

  12. Fully invariant wavelet enhanced minimum average correlation energy filter for object recognition in cluttered and occluded environments

    NASA Astrophysics Data System (ADS)

    Tehsin, Sara; Rehman, Saad; Riaz, Farhan; Saeed, Omer; Hassan, Ali; Khan, Muazzam; Alam, Muhammad S.

    2017-05-01

    A fully invariant system helps resolve difficulties in object detection when the camera or object orientation and position are unknown. In this paper, the proposed correlation-filter-based mechanism provides the capability to suppress noise, clutter and occlusion. The Minimum Average Correlation Energy (MACE) filter yields sharp correlation peaks while constraining the correlation peak value. A Difference of Gaussian (DOG) wavelet has been added at the preprocessing stage in the proposed filter design, which facilitates target detection in orientation-variant cluttered environments. A logarithmic transformation is combined with the DOG composite minimum average correlation energy filter (WMACE), capable of producing sharp correlation peaks despite geometric distortions of the target object. The proposed filter has shown improved performance over several other correlation filter variants, which are discussed in the results section.

  13. Subtype Coastline Determination in Urban Coast Based on Multiscale Features: a Case Study in Tianjin, China

    NASA Astrophysics Data System (ADS)

    Song, Y.; Ai, Y.; Zhu, H.

    2018-04-01

    On urban coasts, the coastline directly reflects human activities. It is of crucial importance to the understanding of urban growth, resource development and the ecological environment. Due to the complexity and uncertainty of this type of coast, it is difficult to detect the accurate coastline position and determine the subtypes of the coastline. In this paper, we present a multiscale feature-based subtype coastline determination (MFBSCD) method to extract the coastline and determine its subtypes. In this method, an uncertainty-considering coastline detection (UCCD) method is proposed to separate water and land for more accurate coastline positions. The MFBSCD method integrates scale-invariant features of the coastline in geometry and spatial structure to determine the coastline at subtype scale, and allows subtypes to be cross-verified during processing to ensure the accuracy of the final results. It was applied to Landsat Thematic Mapper (TM) and Operational Land Imager (OLI) images of Tianjin, China, and the accuracy of the extracted coastlines was assessed against a manually delineated coastline. The mean ME (misclassification error) and mean LM (Line Matching) are 0.0012 and 24.54 m respectively. The method provides an inexpensive and automated means of coastline mapping at subtype scale in coastal city sectors with intense human interference, which can be significant for coastal resource management and evaluation of urban development.

  14. An advanced SEU tolerant latch based on error detection

    NASA Astrophysics Data System (ADS)

    Xu, Hui; Zhu, Jianwei; Lu, Xiaoping; Li, Jingzhao

    2018-05-01

    This paper proposes a latch that can mitigate SEUs via an error detection circuit. The error detection circuit is hardened by a C-element and a stacked PMOS. In the hold state, a particle strike on the latch or on the error detection circuit may cause a faulty logic state of the circuit. The error detection circuit can detect the upset node in the latch, and the faulty output will be corrected. An upset node in the error detection circuit itself is corrected by the C-element. The power dissipation and propagation delay of the proposed latch are analyzed by HSPICE simulations. The proposed latch consumes about 77.5% less energy and has 33.1% less propagation delay than the triple modular redundancy (TMR) latch. Simulation results demonstrate that the proposed latch can mitigate SEUs effectively. Project supported by the National Natural Science Foundation of China (Nos. 61404001, 61306046), the Anhui Province University Natural Science Research Major Project (No. KJ2014ZD12), the Huainan Science and Technology Program (No. 2013A4011), and the National Natural Science Foundation of China (No. 61371025).
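The C-element at the heart of the hardened error detection circuit can be described behaviorally: its output follows the inputs only when they agree, and holds otherwise, which filters a transient upset on a single node. A minimal behavioral sketch (not a transistor-level model):

```python
def c_element(a, b, prev):
    """Muller C-element: output follows the inputs when they agree,
    otherwise it holds its previous value."""
    return a if a == b else prev

out = c_element(1, 1, 0)     # both inputs high -> output goes high
held = c_element(1, 0, out)  # one node upset, inputs disagree -> output held
print(out, held)             # → 1 1
```

Because a single-node upset makes the inputs disagree, the C-element simply holds its last valid output until the struck node recovers.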

  15. [Detection and classification of medication errors at Joan XXIII University Hospital].

    PubMed

    Jornet Montaña, S; Canadell Vilarrasa, L; Calabuig Muñoz, M; Riera Sendra, G; Vuelta Arce, M; Bardají Ruiz, A; Gallart Mora, M J

    2004-01-01

    Medication errors are multifactorial and multidisciplinary, and may originate in processes such as drug prescription, transcription, dispensation, preparation and administration. The goal of this work was to measure the incidence of detectable medication errors that arise within a unit dose drug distribution and control system, from drug prescription to drug administration, by means of an observational method confined to the Pharmacy Department, as well as a voluntary, anonymous report system. The acceptance of this voluntary report system's implementation was also assessed. A prospective descriptive study was conducted. Data collection was performed at the Pharmacy Department from a review of prescribed medical orders, a review of pharmaceutical transcriptions, a review of dispensed medication and a review of medication returned in unit dose medication carts. A voluntary, anonymous report system centralized in the Pharmacy Department was also set up to detect medication errors. Prescription errors were the most frequent (1.12%), closely followed by dispensation errors (1.04%). Transcription errors (0.42%) and administration errors (0.69%) had the lowest overall incidence. Voluntary report involved only 4.25% of all detected errors, whereas unit dose medication cart review contributed the most to error detection. Recognizing the incidence and types of medication errors that occur in a health-care setting allows us to analyze their causes and effect changes in different stages of the process in order to ensure maximal patient safety.

  16. Fault-tolerant quantum error detection.

    PubMed

    Linke, Norbert M; Gutierrez, Mauricio; Landsman, Kevin A; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R; Monroe, Christopher

    2017-10-01

    Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors.

  17. Design of the Detector II: A CMOS Gate Array for the Study of Concurrent Error Detection Techniques.

    DTIC Science & Technology

    1987-07-01

    detection schemes and temporary failures. The circuit consists of six different adders with concurrent error detection schemes. The error detection schemes are simple duplication, duplication with functional dual implementation, duplication with different implementations, two-rail encoding…

  18. Improvement of the Error-detection Mechanism in Adults with Dyslexia Following Reading Acceleration Training.

    PubMed

    Horowitz-Kraus, Tzipi

    2016-05-01

    The error-detection mechanism aids in preventing error repetition during a given task. Electroencephalography demonstrates that error detection involves two event-related potential components: error-related and correct-response negativities (ERN and CRN, respectively). Dyslexia is characterized by slow, inaccurate reading. In particular, individuals with dyslexia have a less active error-detection mechanism during reading than typical readers. In the current study, we examined whether a reading training programme could improve the ability to recognize words automatically (lexical representations) in adults with dyslexia, thereby resulting in more efficient error detection during reading. Behavioural and electrophysiological measures were obtained using a lexical decision task before and after participants trained with the reading acceleration programme. ERN amplitudes were smaller in individuals with dyslexia than in typical readers before training but increased following training, as did behavioural reading scores. Differences between the pre-training and post-training ERN and CRN components were larger in individuals with dyslexia than in typical readers. Also, the error-detection mechanism as represented by the ERN/CRN complex might serve as a biomarker for dyslexia and be used to evaluate the effectiveness of reading intervention programmes. Copyright © 2016 John Wiley & Sons, Ltd.

  19. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
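The patented approach (run a heat-generating algorithm, then compare its output against a known-good result so that an error anywhere during the run surfaces at the end) can be sketched as follows; the stand-in workload and all names are illustrative assumptions:

```python
def stress_algorithm(flip_bit=False):
    """Stand-in for a compute-heavy loop designed to heat the processor;
    its final value is deterministic, so any fault perturbs the output."""
    acc = 0
    for i in range(1000):
        acc = (acc + i * i) % 65521
    if flip_bit:
        acc ^= 1  # simulated thermally/radiation-induced bit error
    return acc

def run_with_comparison(algorithm, reference_output):
    """Run the stress algorithm once and compare its output against a
    known-good reference computed earlier."""
    return algorithm() == reference_output

golden = stress_algorithm()
print(run_with_comparison(stress_algorithm, golden))                      # → True
print(run_with_comparison(lambda: stress_algorithm(flip_bit=True), golden))  # → False
```

The single end-of-run comparison is the point: it catches an error that occurred at any iteration, without per-step checking.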

  20. Linear and Nonlinear Response of a Rotating Tokamak Plasma to a Resonant Error-Field

    NASA Astrophysics Data System (ADS)

    Fitzpatrick, Richard

    2014-10-01

    An in-depth investigation of the effect of a resonant error-field on a rotating, quasi-cylindrical tokamak plasma is performed within the context of resistive-MHD theory. General expressions for the response of the plasma at the rational surface to the error-field are derived in both the linear and nonlinear regimes, and the extents of these regimes mapped out in parameter space. Torque-balance equations are also obtained in both regimes. These equations are used to determine the steady-state plasma rotation at the rational surface in the presence of the error-field. It is found that, provided the intrinsic plasma rotation is sufficiently large, the torque-balance equations possess dynamically stable low-rotation and high-rotation solution branches, separated by a forbidden band of dynamically unstable solutions. Moreover, bifurcations between the two stable solution branches are triggered as the amplitude of the error-field is varied. A low- to high-rotation bifurcation is invariably associated with a significant reduction in the width of the magnetic island chain driven at the rational surface, and vice versa. General expressions for the bifurcation thresholds are derived, and their domains of validity mapped out in parameter space. This research was funded by the U.S. Department of Energy under Contract DE-FG02-04ER-54742.

  1. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system.

    PubMed

    Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O

    2015-02-01

    To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. © The Author 2015.
Published by Oxford University Press in association with the International Society for Quality in Health Care.

  2. Double ErrP Detection for Automatic Error Correction in an ERP-Based BCI Speller.

    PubMed

    Cruz, Aniana; Pires, Gabriel; Nunes, Urbano J

    2018-01-01

    Brain-computer interface (BCI) is a useful device for people with severe motor disabilities. However, due to its low speed and low reliability, BCI still has very limited application in daily real-world tasks. This paper proposes a P300-based BCI speller combined with double error-related potential (ErrP) detection to automatically correct erroneous decisions. This novel approach introduces a second detection step to infer whether an erroneous automatic correction also elicits a second ErrP. Thus, two single-trial responses, instead of one, contribute to the final selection, improving the reliability of error detection. Moreover, to increase error detection, the evoked potential detected as target by the P300 classifier is combined with the evoked error potential at the feature level. Discriminable error and positive potentials (responses to correct feedback) were clearly identified. The proposed approach was tested on nine healthy participants and one tetraplegic participant. The average online accuracies for the first and second ErrPs were 88.4% and 84.8%, respectively. With automatic correction, we achieved an improvement of around 5%, reaching 89.9% spelling accuracy at an effective 2.92 symbols/min. The proposed approach revealed that double ErrP detection can improve the reliability and speed of BCI systems.

  3. Experimental investigation of observation error in anuran call surveys

    USGS Publications Warehouse

    McClintock, B.T.; Bailey, L.L.; Pollock, K.H.; Simons, T.R.

    2010-01-01

    Occupancy models that account for imperfect detection are often used to monitor anuran and songbird species occurrence. However, presence-absence data arising from auditory detections may be more prone to observation error (e.g., false-positive detections) than are sampling approaches utilizing physical captures or sightings of individuals. We conducted realistic, replicated field experiments using a remote broadcasting system to simulate simple anuran call surveys and to investigate potential factors affecting observation error in these studies. Distance, time, ambient noise, and observer abilities were the most important factors explaining false-negative detections. Distance and observer ability were the best overall predictors of false-positive errors, but ambient noise and competing species also affected error rates for some species. False-positive errors made up 5% of all positive detections, with individual observers exhibiting false-positive rates between 0.5% and 14%. Previous research suggests false-positive errors of these magnitudes would induce substantial positive biases in standard estimators of species occurrence, and we recommend practices to mitigate false positives when developing occupancy monitoring protocols that rely on auditory detections. These recommendations include additional observer training, limiting the number of target species, and establishing distance and ambient noise thresholds during surveys. © 2010 The Wildlife Society.
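The positive bias induced by false-positive detections can be seen with a small probability calculation. The sketch below (all parameter values illustrative) computes the probability that a site registers at least one detection across repeated surveys, with and without false positives:

```python
def apparent_occupancy(psi, p_true, p_false, n_surveys):
    """Probability a site yields at least one detection over n surveys,
    when occupied sites (fraction psi) are detected with p_true per survey
    and empty sites produce false positives with p_false per survey."""
    occupied_detected = 1 - (1 - p_true) ** n_surveys
    empty_detected = 1 - (1 - p_false) ** n_surveys
    return psi * occupied_detected + (1 - psi) * empty_detected

# Illustrative values: 40% true occupancy, 3 surveys per site.
naive = apparent_occupancy(psi=0.4, p_true=0.5, p_false=0.05, n_surveys=3)
ideal = apparent_occupancy(psi=0.4, p_true=0.5, p_false=0.0, n_surveys=3)
print(round(naive, 3), round(ideal, 3))
```

Even a 5% per-survey false-positive rate inflates the apparent detection fraction well above the false-positive-free case, illustrating the bias the authors warn about.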

  4. A system to use electromagnetic tracking for the quality assurance of brachytherapy catheter digitization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, Antonio L., E-mail: adamato@lroc.harvard.edu; Viswanathan, Akila N.; Don, Sarah M.

    2014-10-15

    Purpose: To investigate the use of a system using electromagnetic tracking (EMT), post-processing and an error-detection algorithm for detecting errors and resolving uncertainties in high-dose-rate brachytherapy catheter digitization for treatment planning. Methods: EMT was used to localize 15 catheters inserted into a phantom using a stepwise acquisition technique. Five distinct acquisition experiments were performed. Noise associated with the acquisition was calculated. The dwell location configuration was extracted from the EMT data. A CT scan of the phantom was performed, and five distinct catheter digitization sessions were performed. No a priori registration of the CT scan coordinate system with the EMT coordinate system was performed. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT), and rigid registration was performed between EMT and CT dwell positions. EMT registration error was characterized in terms of the mean and maximum distance between corresponding EMT and CT dwell positions per catheter. An algorithm for error detection and identification was presented. Three types of errors were systematically simulated: swap of two catheter numbers, partial swap of catheter number identification for parts of the catheters (mix), and catheter-tip shift. Error-detection sensitivity (number of simulated scenarios correctly identified as containing an error/number of simulated scenarios containing an error) and specificity (number of scenarios correctly identified as not containing errors/number of correct scenarios) were calculated. Catheter identification sensitivity (number of catheters correctly identified as erroneous across all scenarios/number of erroneous catheters across all scenarios) and specificity (number of catheters correctly identified as correct across all scenarios/number of correct catheters across all scenarios) were calculated. The mean detected and identified shift was calculated.
Results: The maximum noise ±1 standard deviation associated with the EMT acquisitions was 1.0 ± 0.1 mm, and the mean noise was 0.6 ± 0.1 mm. Registration of all the EMT and CT dwell positions was associated with a mean catheter error of 0.6 ± 0.2 mm, a maximum catheter error of 0.9 ± 0.4 mm, a mean dwell error of 1.0 ± 0.3 mm, and a maximum dwell error of 1.3 ± 0.7 mm. Error detection and catheter identification sensitivity and specificity of 100% were observed for swap, mix and shift (≥2.6 mm for error detection; ≥2.7 mm for catheter identification) errors. A mean detected shift of 1.8 ± 0.4 mm and a mean identified shift of 1.9 ± 0.4 mm were observed. Conclusions: Registration of the EMT dwell positions to the CT dwell positions was possible with a residual mean error per catheter of 0.6 ± 0.2 mm and a maximum error for any dwell of 1.3 ± 0.7 mm. These low residual registration errors show that quality assurance of the general characteristics of the catheters and of possible errors affecting one specific dwell position is possible. The sensitivity and specificity of the catheter digitization verification algorithm was 100% for swap and mix errors and for shifts ≥2.6 mm. On average, shifts ≥1.8 mm were detected, and shifts ≥1.9 mm were detected and identified.

  5. Register file soft error recovery

    DOEpatents

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery is provided by a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
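
A minimal software sketch of the mirrored-register-file idea described in the patent abstract, assuming a simple parity check stands in for the error-detection circuitry (all names here are hypothetical, not from the patent):

```python
def parity(word: int) -> int:
    """Even-parity bit of an integer word."""
    return bin(word).count("1") & 1

class MirroredRegisterFile:
    """Toy model: reads are parity-checked; on a mismatch the entry is
    recovered from the mirror file, emulating the error recovery step."""

    def __init__(self, size: int):
        self.primary = [0] * size
        self.mirror = [0] * size
        self.parity_bits = [0] * size

    def write(self, idx: int, value: int):
        self.primary[idx] = value
        self.mirror[idx] = value
        self.parity_bits[idx] = parity(value)

    def read(self, idx: int) -> int:
        value = self.primary[idx]
        if parity(value) != self.parity_bits[idx]:  # soft error detected
            value = self.mirror[idx]                # recover from mirror
            self.primary[idx] = value               # scrub corrupted entry
        return value

rf = MirroredRegisterFile(4)
rf.write(2, 0b1011)
rf.primary[2] ^= 0b0100          # inject a single-bit upset
restored = rf.read(2)            # recovery restores the written value
```

Note that a single parity bit only detects odd numbers of flipped bits; the recovery path, not the code, supplies the correction here.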

  6. Diffraction analysis and evaluation of several focus- and track-error detection schemes for magneto-optical disk systems

    NASA Technical Reports Server (NTRS)

    Bernacki, Bruce E.; Mansuripur, M.

    1992-01-01

    A commonly used tracking method on pre-grooved magneto-optical (MO) media is the push-pull technique, and the astigmatic method is a popular focus-error detection approach. These two methods are analyzed using DIFFRACT, a general-purpose scalar diffraction modeling program, to observe the effects on the error signals due to focusing lens misalignment, Seidel aberrations, and optical crosstalk (feedthrough) between the focusing and tracking servos. Using the results of the astigmatic/push-pull system as a basis for comparison, a novel focus/track-error detection technique that utilizes a ring toric lens is evaluated as well as the obscuration method (focus error detection only).

  7. Error detection and correction unit with built-in self-test capability for spacecraft applications

    NASA Technical Reports Server (NTRS)

    Timoc, Constantin

    1990-01-01

    The objective of this project was to research and develop a 32-bit single chip Error Detection and Correction unit capable of correcting all single bit errors and detecting all double bit errors in the memory systems of a spacecraft. We designed the 32-bit EDAC (Error Detection and Correction unit) based on a modified Hamming code and according to the design specifications and performance requirements. We constructed a laboratory prototype (breadboard) which was converted into a fault simulator. The correctness of the design was verified on the breadboard using an exhaustive set of test cases. A logic diagram of the EDAC was delivered to JPL Section 514 on 4 Oct. 1988.
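
The EDAC described above is based on a modified Hamming code; a toy SEC-DED (single-error-correcting, double-error-detecting) sketch on 4 data bits illustrates the same syndrome logic, though the actual chip operates on 32-bit words and its exact code is not given here:

```python
def encode(d):
    """Hamming(7,4) plus an overall parity bit -> 8-bit SEC-DED codeword."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    code = [p1, p2, d[0], p3, d[1], d[2], d[3]]   # bit positions 1..7
    overall = 0
    for b in code:
        overall ^= b
    return code + [overall]                        # extended parity bit

def decode(code):
    """Return (data, status) with status in {'ok', 'corrected', 'double'}."""
    c = list(code[:7])
    syndrome = 0
    for pos in range(1, 8):            # XOR of 1-indexed positions of set bits
        if c[pos - 1]:
            syndrome ^= pos
    overall = 0
    for b in code:
        overall ^= b
    if syndrome == 0 and overall == 0:
        status = 'ok'
    elif overall == 1:                 # odd number of flips -> single error
        if syndrome:
            c[syndrome - 1] ^= 1       # syndrome points at the flipped bit
        status = 'corrected'
    else:                              # even flips with nonzero syndrome
        status = 'double'
    return [c[2], c[4], c[5], c[6]], status

word = encode([1, 0, 1, 1])
word[4] ^= 1                           # inject a single-bit upset
data, status = decode(word)            # -> ([1, 0, 1, 1], 'corrected')
```

The 32-bit unit works identically in principle: the syndrome locates any single flipped bit, while the extended parity distinguishes single from double errors.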

  8. Array invariant-based ranging of a source of opportunity.

    PubMed

    Byun, Gihoon; Kim, J S; Cho, Chomgun; Song, H C; Byun, Sung-Hoon

    2017-09-01

    The feasibility of tracking a ship radiating random and anisotropic noise is investigated using ray-based blind deconvolution (RBD) and array invariant (AI) with a vertical array in shallow water. This work is motivated by a recent report [Byun, Verlinden, and Sabra, J. Acoust. Soc. Am. 141, 797-807 (2017)] that RBD can be applied to ships of opportunity to estimate the Green's function. Subsequently, the AI developed for robust source-range estimation in shallow water can be applied to the estimated Green's function via RBD, exploiting multipath arrivals separated in beam angle and travel time. In this letter, a combination of the RBD and AI is demonstrated to localize and track a ship of opportunity (200-900 Hz) to within a 5% standard deviation of the relative range error along a track at ranges of 1.8-3.4 km, using a 16-element, 56-m long vertical array in approximately 100-m deep shallow water.

  9. Fast spacecraft adaptive attitude tracking control through immersion and invariance design

    NASA Astrophysics Data System (ADS)

    Wen, Haowei; Yue, Xiaokui; Li, Peng; Yuan, Jianping

    2017-10-01

    This paper presents a novel non-certainty-equivalence adaptive control method for the attitude tracking control problem of spacecraft with inertia uncertainties. The proposed immersion and invariance (I&I) based adaptation law provides a more direct and flexible approach to circumvent the limitations of the basic I&I method without employing any filter signal. By virtue of the adaptation high-gain equivalence property derived from the proposed adaptive method, the closed-loop adaptive system with a low adaptation gain can recover the high adaptation gain performance of the filter-based I&I method, and the resulting control torque demands during the initial transient have been significantly reduced. A special feature of this method is that the convergence of the parameter estimation error has been observably improved by utilizing an adaptation gain matrix instead of a single adaptation gain value. Numerical simulations are presented to highlight the various benefits of the proposed method compared with the certainty-equivalence-based control method and filter-based I&I control schemes.

  10. Evidence for B → Kη′γ decays at Belle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wedd, R.; Barberio, E.; Limosani, A.

    2010-06-01

    We present the results of a search for the radiative decay B → Kη′γ and find evidence for B⁺ → K⁺η′γ decays at the 3.3 standard deviation level with a partial branching fraction of (3.6 ± 1.2 ± 0.4) × 10⁻⁶, where the first error is statistical and the second systematic. This measurement is restricted to the region of combined Kη′ invariant mass less than 3.4 GeV/c². A 90% confidence level upper limit of 6.4 × 10⁻⁶ is obtained for the partial branching fraction of the decay B⁰ → K⁰η′γ in the same Kη′ invariant mass region. These results are obtained from a 605 fb⁻¹ data sample containing 657 × 10⁶ BB̄ pairs collected at the Υ(4S) resonance with the Belle detector at the KEKB asymmetric-energy e⁺e⁻ collider.

  11. Asymptotic Analysis Of The Total Least Squares ESPRIT Algorithm

    NASA Astrophysics Data System (ADS)

    Ottersten, B. E.; Viberg, M.; Kailath, T.

    1989-11-01

    This paper considers the problem of estimating the parameters of multiple narrowband signals arriving at an array of sensors. Modern approaches to this problem often involve costly procedures for calculating the estimates. The ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm was recently proposed as a means for obtaining accurate estimates without requiring a costly search of the parameter space. This method utilizes an array invariance to arrive at a computationally efficient multidimensional estimation procedure. Herein, the asymptotic distribution of the estimation error is derived for the Total Least Squares (TLS) version of ESPRIT. The Cramer-Rao Bound (CRB) for the ESPRIT problem formulation is also derived and found to coincide with the variance of the asymptotic distribution through numerical examples. The method is also compared to least squares ESPRIT and MUSIC as well as to the CRB for a calibrated array. Simulations indicate that the theoretic expressions can be used to accurately predict the performance of the algorithm.
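
The TLS-ESPRIT procedure summarized above (signal subspace from the sample covariance, rotational invariance between two shifted subarrays, total-least-squares solve) can be illustrated on a simulated half-wavelength uniform linear array; the scenario values below are arbitrary, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, N = 8, 2, 2000                     # sensors, sources, snapshots
theta_true = np.deg2rad([-10.0, 25.0])   # arrival angles

# Half-wavelength ULA steering matrix and noisy snapshots.
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(theta_true)))
S = (rng.standard_normal((d, N)) + 1j * rng.standard_normal((d, N))) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise

# Signal subspace: d dominant eigenvectors of the sample covariance.
R = X @ X.conj().T / N
_, V = np.linalg.eigh(R)                 # eigenvalues ascending
Es = V[:, -d:]

# TLS solution of Es1 @ Psi = Es2 over the two shifted subarrays.
E = np.hstack([Es[:-1], Es[1:]])         # (M-1) x 2d
_, _, Vh = np.linalg.svd(E)
Vm = Vh.conj().T
V12, V22 = Vm[:d, d:], Vm[d:, d:]
Psi = -V12 @ np.linalg.inv(V22)

# Eigenvalues of Psi encode exp(j*pi*sin(theta)).
lam = np.linalg.eigvals(Psi)
theta_est = np.sort(np.rad2deg(np.arcsin(np.angle(lam) / np.pi)))
```

No search over the parameter space is needed: the angle estimates fall out of a single eigendecomposition, which is the computational advantage the abstract refers to.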

  12. Hamiltonian approach to second order gauge invariant cosmological perturbations

    NASA Astrophysics Data System (ADS)

    Domènech, Guillem; Sasaki, Misao

    2018-01-01

    In view of growing interest in tensor modes and their possible detection, we clarify the definition of tensor modes up to 2nd order in perturbation theory within the Hamiltonian formalism. As in gauge theory, in cosmology the Hamiltonian formalism is a suitable and consistent approach for reducing the gauge degrees of freedom. In this paper we employ the Faddeev-Jackiw method of Hamiltonian reduction. An appropriate set of gauge invariant variables that describe the dynamical degrees of freedom may be obtained by suitable canonical transformations in the phase space. We derive a set of gauge invariant variables up to 2nd order in perturbation expansion and for the first time we reduce the 3rd order action without adding gauge fixing terms. In particular, we are able to show the relation between the uniform-ϕ and Newtonian slicings, and study the difference in the definition of tensor modes in these two slicings.

  13. Online gesture spotting from visual hull data.

    PubMed

    Peng, Bo; Qian, Gang

    2011-06-01

    This paper presents a robust framework for online full-body gesture spotting from visual hull data. Using view-invariant pose features as observations, hidden Markov models (HMMs) are trained for gesture spotting from continuous movement data streams. Two major contributions of this paper are 1) view-invariant pose feature extraction from visual hulls, and 2) a systematic approach to automatically detecting and modeling specific nongesture movement patterns and using their HMMs for outlier rejection in gesture spotting. The experimental results have shown the view-invariance property of the proposed pose features for both training poses and new poses unseen in training, as well as the efficacy of using specific nongesture models for outlier rejection. Using the IXMAS gesture data set, the proposed framework has been extensively tested and the gesture spotting results are superior to those reported on the same data set obtained using existing state-of-the-art gesture spotting methods.

  14. Evaluation of scale invariance in physiological signals by means of balanced estimation of diffusion entropy.

    PubMed

    Zhang, Wenqing; Qiu, Lu; Xiao, Qin; Yang, Huijie; Zhang, Qingjun; Wang, Jianyong

    2012-11-01

    By means of the concept of the balanced estimation of diffusion entropy, we evaluate the reliable scale invariance embedded in different sleep stages and stride records. Segments corresponding to waking, light sleep, rapid eye movement (REM) sleep, and deep sleep stages are extracted from long-term electroencephalogram signals. For each stage the scaling exponent value is distributed over a considerably wide range, which tells us that the scaling behavior is subject- and sleep-cycle dependent. The average of the scaling exponent values for waking segments is almost the same as that for REM segments (∼0.8). The waking and REM stages have a significantly higher value of the average scaling exponent than that for light sleep stages (∼0.7). For the stride series, the original diffusion entropy (DE) and the balanced estimation of diffusion entropy (BEDE) give almost the same results for detrended series. The evolutions of local scaling invariance show that the physiological states change abruptly, although in the experiments great efforts have been made to keep conditions unchanged. The global behavior of a single physiological signal may lose rich information on physiological states. Methodologically, the BEDE can evaluate with considerable precision the scale invariance in very short time series (∼10²), while the original DE method sometimes may underestimate scale-invariance exponents or even fail in detecting scale-invariant behavior. The BEDE method is sensitive to trends in time series. The existence of trends may lead to an unreasonably high value of the scaling exponent and consequent mistaken conclusions.
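
The diffusion entropy method referred to above fits S(t) = A + δ ln t to the entropy of the diffusion process built from the series, with slope δ the scaling exponent; a plain-histogram sketch (not the paper's balanced estimator) looks like:

```python
import numpy as np

def diffusion_entropy_exponent(xi, times):
    """Estimate the scaling exponent delta from S(t) = A + delta*ln(t).

    xi: 1-D array of increments; times: window lengths t to evaluate.
    Uses a plain histogram estimate of p(x, t); the BEDE refinement in the
    paper replaces this with a balanced estimator for short series.
    """
    S = []
    for t in times:
        # Overlapping diffusion trajectories: sliding sums of length t.
        x = np.convolve(xi, np.ones(t), mode="valid")
        counts, edges = np.histogram(x, bins=50)
        p = counts / counts.sum()
        width = edges[1] - edges[0]
        p = p[p > 0]
        # Shannon entropy with the bin-width correction for a density.
        S.append(-(p * np.log(p)).sum() + np.log(width))
    delta, _ = np.polyfit(np.log(times), S, 1)
    return delta

rng = np.random.default_rng(1)
delta = diffusion_entropy_exponent(rng.standard_normal(20000),
                                   [4, 8, 16, 32, 64])
# White-noise increments give ordinary diffusion, delta near 0.5.
```

For correlated (fractal) series the slope departs from 0.5, which is the scale-invariance signature the sleep and stride analyses quantify.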

  15. Evaluation of scale invariance in physiological signals by means of balanced estimation of diffusion entropy

    NASA Astrophysics Data System (ADS)

    Zhang, Wenqing; Qiu, Lu; Xiao, Qin; Yang, Huijie; Zhang, Qingjun; Wang, Jianyong

    2012-11-01

    By means of the concept of the balanced estimation of diffusion entropy, we evaluate the reliable scale invariance embedded in different sleep stages and stride records. Segments corresponding to waking, light sleep, rapid eye movement (REM) sleep, and deep sleep stages are extracted from long-term electroencephalogram signals. For each stage the scaling exponent value is distributed over a considerably wide range, which tells us that the scaling behavior is subject- and sleep-cycle dependent. The average of the scaling exponent values for waking segments is almost the same as that for REM segments (˜0.8). The waking and REM stages have a significantly higher value of the average scaling exponent than that for light sleep stages (˜0.7). For the stride series, the original diffusion entropy (DE) and the balanced estimation of diffusion entropy (BEDE) give almost the same results for detrended series. The evolutions of local scaling invariance show that the physiological states change abruptly, although in the experiments great efforts have been made to keep conditions unchanged. The global behavior of a single physiological signal may lose rich information on physiological states. Methodologically, the BEDE can evaluate with considerable precision the scale invariance in very short time series (˜10²), while the original DE method sometimes may underestimate scale-invariance exponents or even fail in detecting scale-invariant behavior. The BEDE method is sensitive to trends in time series. The existence of trends may lead to an unreasonably high value of the scaling exponent and consequent mistaken conclusions.

  16. The Effect of Error Correction vs. Error Detection on Iranian Pre-Intermediate EFL Learners' Writing Achievement

    ERIC Educational Resources Information Center

    Abedi, Razie; Latifi, Mehdi; Moinzadeh, Ahmad

    2010-01-01

    This study tries to answer some ever-existent questions in writing fields regarding approaching the most effective ways to give feedback to students' errors in writing by comparing the effect of error correction and error detection on the improvement of students' writing ability. In order to achieve this goal, 60 pre-intermediate English learners…

  17. Stellar interferometers and hypertelescopes: new insights on an angular spatial frequency approach to their non-invariant imaging

    NASA Astrophysics Data System (ADS)

    Dettwiller, L.; Lépine, T.

    2017-12-01

    A general and pure wave theory of image formation for all types of stellar interferometers, including hypertelescopes, is developed within the framework of Fresnel's paraxial approximations of diffraction. For a hypertelescope, we show that the severe lack of translation invariance leads to multiple and strong spatial frequency heterodyning, which codes the very high frequencies detected by the hypertelescope into medium spatial frequencies and introduces a moiré-type ambiguity for extended objects. This explains mathematically the disappointing appearance of poor resolution observed in some image simulations for hypertelescopes.

  18. Fault-tolerant quantum error detection

    PubMed Central

    Linke, Norbert M.; Gutierrez, Mauricio; Landsman, Kevin A.; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R.; Monroe, Christopher

    2017-01-01

    Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors. PMID:29062889

  19. A Fluorescence Correlation Spectroscopy Study of the Cryoprotective Mechanism of Glucose on Hemocyanin

    NASA Astrophysics Data System (ADS)

    Hauger, Eric J.

    Cryopreservation is the method of preserving biomaterials by cooling and storing them at very low temperatures. In order to prevent the damaging effects of cooling, cryoprotectants are used to inhibit ice formation. Common cryoprotectants used today include ethylene glycol, propylene glycol, dimethyl sulfoxide, glycerol, and sugars. However, the mechanism responsible for the effectiveness of these cryoprotectants is poorly understood on the molecular level. The water replacement model predicts that water molecules around the surfaces of proteins are replaced with sugar molecules, forming a protective layer against the denaturing ice formation. Under this scheme, one would expect an increase in the hydrodynamic radius with increasing sugar concentration. In order to test this hypothesis, two-photon fluorescence correlation spectroscopy (FCS) was used to measure the hydrodynamic radius of hemocyanin (Hc), an oxygen-carrying protein found in arthropods, in glucose solutions up to 20 wt%. FCS found that the hydrodynamic radius was invariant with increasing glucose concentration. Dynamic light scattering (DLS) results verified the hydrodynamic radius of hemocyanin in the absence of glucose. Although this invariant trend seems to indicate that the water replacement hypothesis is invalid, the expected glucose layer around the Hc is smaller than the error in the hydrodynamic radius measurements for FCS. The expected change in the hydrodynamic radius with an additional layer of glucose is 1 nm; however, the FCS standard error is ±3.61 nm. Therefore, the water replacement model can be neither confirmed nor refuted as a possible explanation for the cryoprotective effects of glucose on Hc.

  20. Geometrically robust image watermarking by sector-shaped partitioning of geometric-invariant regions.

    PubMed

    Tian, Huawei; Zhao, Yao; Ni, Rongrong; Cao, Gang

    2009-11-23

    In a feature-based geometrically robust watermarking system, it is a challenging task to detect geometric-invariant regions (GIRs) which can survive a broad range of image processing operations. Instead of the commonly used Harris detector or Mexican hat wavelet method, a more robust corner detector named multi-scale curvature product (MSCP) is adopted to extract salient features in this paper. Based on such features, disk-like GIRs are found in three steps. First, robust edge contours are extracted. Then, MSCP is utilized to detect the centers of GIRs. Third, characteristic scale selection is performed to calculate the radius of each GIR. A novel sector-shaped partitioning method for the GIRs is designed, which can divide a GIR into several sector discs with the help of the most important corner (MIC). The watermark message is then embedded bit by bit in each sector by using Quantization Index Modulation (QIM). The GIRs and the divided sector discs are invariant to geometric transforms, so the watermarking method inherently has high robustness against geometric attacks. Experimental results show that the scheme has better robustness against various image processing operations including common processing attacks, affine transforms, cropping, and random bending attack (RBA) than previous approaches.
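
The QIM step mentioned above embeds each bit by quantizing a host coefficient onto one of two interleaved lattices selected by the bit; a scalar sketch with a hypothetical step size (the paper's actual coefficients and step are not given here):

```python
def qim_embed(x, bit, delta=8.0):
    """Embed one bit by snapping x to the lattice selected by the bit:
    multiples of delta for bit 0, shifted by delta/2 for bit 1."""
    offset = 0.0 if bit == 0 else delta / 2.0
    return round((x - offset) / delta) * delta + offset

def qim_extract(y, delta=8.0):
    """Decode by choosing whichever lattice lies nearer to y."""
    d0 = abs(y - qim_embed(y, 0, delta))
    d1 = abs(y - qim_embed(y, 1, delta))
    return 0 if d0 <= d1 else 1

marked = qim_embed(37.3, 1)        # -> 36.0, on the bit-1 lattice
bit = qim_extract(marked)          # -> 1
```

Decoding survives any perturbation smaller than delta/4, which is why QIM tolerates moderate processing of the sector discs.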

  1. A comprehensive profile of DNA copy number variations in a Korean population: identification of copy number invariant regions among Koreans.

    PubMed

    Jeon, Jae Pil; Shim, Sung Mi; Jung, Jong Sun; Nam, Hye Young; Lee, Hye Jin; Oh, Berm Seok; Kim, Kuchan; Kim, Hyung Lae; Han, Bok Ghee

    2009-09-30

    To examine copy number variations among the Korean population, we compared individual genomes with the Korean reference genome assembly using the publicly available Korean HapMap SNP 50K chip data from 90 individuals. Korean individuals exhibited 123 copy number variation regions (CNVRs) covering 27.2 Mb, equivalent to 1.0% of the genome, in the copy number variation (CNV) analysis using the combined criteria of P value (P < 0.01) and standard deviation of copy numbers (SD ≥ 0.25) among study subjects. In contrast, when compared to the Affymetrix reference genome assembly from multiple ethnic groups, considerably more CNVRs (n = 643) were detected in larger proportions (5.0%) of the genome covering 135.1 Mb even by more stringent criteria (P < 0.001 and SD ≥ 0.25), reflecting ethnic diversity of structural variations between Korean and other populations. Some CNVRs were validated by the quantitative multiplex PCR of short fluorescent fragments (QMPSF) method, and copy number invariant regions were then detected among the study subjects. These copy number invariant regions can be used as good internal controls for further CNV studies. Lastly, we demonstrated that the CNV information could stratify even a single ethnic population with a proper reference genome assembly from multiple heterogeneous populations.
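
The combined CNVR call criteria above (P < 0.01 and copy-number SD ≥ 0.25 among subjects) amount to a simple per-region filter; a sketch with hypothetical region names and values:

```python
import statistics

def is_cnvr(p_value, copy_numbers, p_cut=0.01, sd_cut=0.25):
    """Call a region a CNVR when both criteria hold across study subjects."""
    return p_value < p_cut and statistics.stdev(copy_numbers) >= sd_cut

# Hypothetical per-region (P value, per-subject copy numbers) records:
regions = {
    "chr1:1.2-1.4Mb": (0.004, [1.8, 2.0, 2.6, 1.4, 2.2]),
    "chr7:55.0-55.1Mb": (0.200, [2.0, 2.0, 2.1, 1.9, 2.0]),  # invariant
}
cnvrs = [name for name, (p, cn) in regions.items() if is_cnvr(p, cn)]
```

Regions failing both criteria in every subject are the copy number invariant regions proposed as internal controls.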

  2. Error-Analysis for Correctness, Effectiveness, and Composing Procedure.

    ERIC Educational Resources Information Center

    Ewald, Helen Rothschild

    The assumptions underpinning grammatical mistakes can often be detected by looking for patterns of errors in a student's work. Assumptions that negatively influence rhetorical effectiveness can similarly be detected through error analysis. On a smaller scale, error analysis can also reveal assumptions affecting rhetorical choice. Snags in the…

  3. An approximate Kalman filter for ocean data assimilation: An example with an idealized Gulf Stream model

    NASA Technical Reports Server (NTRS)

    Fukumori, Ichiro; Malanotte-Rizzoli, Paola

    1995-01-01

    A practical method of data assimilation for use with large, nonlinear, ocean general circulation models is explored. A Kalman filter based on approximation of the state error covariance matrix is presented, employing a reduction of the effective model dimension, the error's asymptotic steady state limit, and a time-invariant linearization of the dynamic model for the error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. We examine the utility of the approximate filter in assimilating different measurement types using a twin experiment of an idealized Gulf Stream. A nonlinear primitive equation model of an unstable east-west jet is studied with a state dimension exceeding 170,000 elements. Assimilation of various pseudomeasurements is examined, including velocity, density, and volume transport at localized arrays and realistic distributions of satellite altimetry and acoustic tomography observations. Results are compared in terms of their effects on the accuracies of the estimation. The approximate filter is shown to outperform an empirical nudging scheme used in a previous study. The examples demonstrate that useful approximate estimation errors can be computed in a practical manner for general circulation models.
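
The asymptotic steady-state approximation described above amounts to iterating the Riccati recursion of a time-invariant linear model until the error covariance, and hence the Kalman gain, stops changing; a toy two-state sketch (the model matrices here are arbitrary, not the ocean model's):

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.95]])   # time-invariant linearized dynamics
H = np.array([[1.0, 0.0]])                # observe the first state only
Q = 0.01 * np.eye(2)                      # model error covariance
R = np.array([[0.1]])                     # measurement error covariance

# Iterate the discrete Riccati recursion to its steady state.
P = np.eye(2)
for _ in range(500):
    P_pred = A @ P @ A.T + Q
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    P = (np.eye(2) - K @ H) @ P_pred

# K is now frozen; each assimilation step is just a matrix-vector update,
# avoiding any further covariance propagation.
def assimilate(x, y):
    x_pred = A @ x
    return x_pred + K @ (y - H @ x_pred)
```

With the gain precomputed, the per-step cost no longer scales with propagating a full state-error covariance, which is the source of the "dramatic computational savings" for a 170,000-element state.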

  4. An approximate Kalman filter for ocean data assimilation: An example with an idealized Gulf Stream model

    NASA Astrophysics Data System (ADS)

    Fukumori, Ichiro; Malanotte-Rizzoli, Paola

    1995-04-01

    A practical method of data assimilation for use with large, nonlinear, ocean general circulation models is explored. A Kalman filter based on approximations of the state error covariance matrix is presented, employing a reduction of the effective model dimension, the error's asymptotic steady state limit, and a time-invariant linearization of the dynamic model for the error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. We examine the utility of the approximate filter in assimilating different measurement types using a twin experiment of an idealized Gulf Stream. A nonlinear primitive equation model of an unstable east-west jet is studied with a state dimension exceeding 170,000 elements. Assimilation of various pseudomeasurements is examined, including velocity, density, and volume transport at localized arrays and realistic distributions of satellite altimetry and acoustic tomography observations. Results are compared in terms of their effects on the accuracies of the estimation. The approximate filter is shown to outperform an empirical nudging scheme used in a previous study. The examples demonstrate that useful approximate estimation errors can be computed in a practical manner for general circulation models.

  5. Psychometric validation of the Persian Bergen Social Media Addiction Scale using classic test theory and Rasch models.

    PubMed

    Lin, Chung-Ying; Broström, Anders; Nilsen, Per; Griffiths, Mark D; Pakpour, Amir H

    2017-12-01

    Background and aims The Bergen Social Media Addiction Scale (BSMAS) is a brief and effective six-item self-report instrument for assessing at-risk social media addiction on the Internet. However, its psychometric properties in Persian have never been examined, and no studies have applied Rasch analysis for the psychometric testing. This study aimed to verify the construct validity of the Persian BSMAS using confirmatory factor analysis (CFA) and Rasch models among 2,676 Iranian adolescents. Methods In addition to construct validity, measurement invariance in CFA and differential item functioning (DIF) in Rasch analysis across gender were tested for in the Persian BSMAS. Results Both CFA [comparative fit index (CFI) = 0.993; Tucker-Lewis index (TLI) = 0.989; root mean square error of approximation (RMSEA) = 0.057; standardized root mean square residual (SRMR) = 0.039] and Rasch (infit MnSq = 0.88-1.28; outfit MnSq = 0.86-1.22) confirmed the unidimensionality of the BSMAS. Moreover, measurement invariance was supported in multigroup CFA, including metric invariance (ΔCFI = -0.001; ΔSRMR = 0.003; ΔRMSEA = -0.005) and scalar invariance (ΔCFI = -0.002; ΔSRMR = 0.005; ΔRMSEA = 0.001), across gender. No item displayed DIF (DIF contrast = -0.48 to 0.24) in Rasch across gender. Conclusions Given that the Persian BSMAS was unidimensional, it is concluded that the instrument can be used to assess how addicted an adolescent is to social media on the Internet. Moreover, users of the instrument may comfortably compare the sum scores of the BSMAS across gender.

  6. The Eating Motivation Survey: results from the USA, India and Germany.

    PubMed

    Sproesser, Gudrun; Ruby, Matthew B; Arbit, Naomi; Rozin, Paul; Schupp, Harald T; Renner, Britta

    2018-02-01

    Research has shown that there is a large variety of different motives underlying why people eat what they eat, which can be assessed with The Eating Motivation Survey (TEMS). The present study investigates the consistency and measurement invariance of the fifteen basic motives included in TEMS in countries with greatly differing eating environments. The fifteen-factor structure of TEMS (brief version: forty-six items) was tested in confirmatory factor analyses. An online survey was conducted. US-American, Indian and German adults (total N = 749) took part. Despite the complexity of the model, fit indices indicated a reasonable model fit (for the total sample: χ²/df = 4.03; standardized root-mean-squared residual (SRMR) = 0.063; root-mean-square error of approximation (RMSEA) = 0.064 (95% CI 0.062, 0.066)). Only the comparative fit index (CFI) was below the recommended threshold (for the total sample: CFI = 0.84). Altogether, 181 out of 184 item loadings were above the recommended threshold of 0.30. Furthermore, the factorial structure of TEMS was invariant across countries with respect to factor configuration and factor loadings (configural v. metric invariance model: ΔCFI = 0.009; ΔRMSEA = 0.001; ΔSRMR = 0.001). Moreover, forty-three out of forty-six items showed invariant intercepts across countries. The fifteen-factor structure of TEMS was, in general, confirmed across countries despite marked differences in eating environments. Moreover, latent means of fourteen out of fifteen motive factors can be compared across countries in future studies. This is a first step towards determining the generalizability of the fifteen basic eating motives of TEMS across eating environments.

  7. A study of hyperelastic models for predicting the mechanical behavior of extensor apparatus.

    PubMed

    Elyasi, Nahid; Taheri, Kimia Karimi; Narooei, Keivan; Taheri, Ali Karimi

    2017-06-01

    In this research, the nonlinear elastic behavior of the human extensor apparatus was investigated. To this end, the best material parameters of hyperelastic strain energy density functions consisting of the Mooney-Rivlin, Ogden, invariants, and general exponential models were first derived for the simple tension experimental data. Because the stress response of nonlinear models in other deformation modes is significant, the calculated parameters were used to study the pure shear and balanced biaxial tension behavior of the extensor apparatus. The results indicated that the Mooney-Rivlin model predicts an unstable behavior in the balanced biaxial deformation of the extensor apparatus, while the Ogden order 1 model represents a stable behavior, although the fit between the experimental data and the theoretical model was not satisfactory. However, the Ogden order 6 model was unstable in the simple tension mode, and the Ogden order 5 and general exponential models presented accurate and stable results. In order to reduce the material parameters, the invariants model with four material parameters was investigated, and this model presented the minimum error and stable behavior in all deformation modes. The ABAQUS Explicit solver was coupled with the VUMAT subroutine code of the invariants model to simulate the mechanical behavior of the central and terminal slips of the extensor apparatus during passive finger flexion, which is important in the prediction of boutonniere deformity and chronic mallet finger injuries, respectively. Also, to evaluate the adequacy of the constitutive models in simulations, the results of the Ogden order 5 model were presented. The difference between the predictions was attributed to the better fit of the invariants model compared with the Ogden model.

  8. TU-G-BRD-08: In-Vivo EPID Dosimetry: Quantifying the Detectability of Four Classes of Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ford, E; Phillips, M; Bojechko, C

    Purpose: EPID dosimetry is an emerging method for treatment verification and QA. Given that the in-vivo EPID technique is in clinical use at some centers, we investigate the sensitivity and specificity for detecting different classes of errors. We assess the impact of these errors using dose volume histogram endpoints. Though data exist for EPID dosimetry performed pre-treatment, this is the first study quantifying its effectiveness when used during patient treatment (in-vivo). Methods: We analyzed 17 patients; EPID images of the exit dose were acquired and used to reconstruct the planar dose at isocenter. This dose was compared to the TPS dose using a 3%/3 mm gamma criterion. To simulate errors, modifications were made to treatment plans using four possible classes of error: 1) patient misalignment, 2) changes in patient body habitus, 3) machine output changes and 4) MLC misalignments. Each error was applied with varying magnitudes. To assess the detectability of the error, the area under a ROC curve (AUC) was analyzed. The AUC was compared to changes in D99 of the PTV introduced by the simulated error. Results: For systematic changes in the MLC leaves, changes in the machine output and patient habitus, the AUC varied from 0.78-0.97, scaling with the magnitude of the error. The optimal gamma threshold as determined by the ROC curve varied between 84-92%. There was little diagnostic power in detecting random MLC leaf errors and patient shifts (AUC 0.52-0.74). Some errors with weak detectability had large changes in D99. Conclusion: These data demonstrate the ability of EPID-based in-vivo dosimetry in detecting variations in patient habitus and errors related to machine parameters such as systematic MLC misalignments and machine output changes. There was no correlation found between the detectability of the error using the gamma pass rate, ROC analysis and the impact on the dose volume histogram. Funded by grant R18HS022244 from AHRQ.
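
The ROC analysis above scores each simulated scenario by its gamma pass rate; the AUC can be computed directly as a Mann-Whitney rank statistic (the pass rates below are hypothetical, not the study's data):

```python
def auc_from_scores(error_scores, clean_scores):
    """AUC as the probability that a randomly chosen error case has a
    LOWER gamma pass rate than a randomly chosen error-free case
    (ties count half), i.e. the Mann-Whitney U statistic normalized."""
    wins = 0.0
    for e in error_scores:
        for c in clean_scores:
            if e < c:
                wins += 1.0
            elif e == c:
                wins += 0.5
    return wins / (len(error_scores) * len(clean_scores))

# Hypothetical gamma pass rates (%): errors tend to lower the pass rate.
clean = [96, 94, 97, 93]
shifted = [85, 88, 95, 80]
auc = auc_from_scores(shifted, clean)   # -> 0.875
```

An AUC near 0.5 (as reported for random MLC errors and patient shifts) means the pass-rate distributions overlap almost completely, so no gamma threshold separates them.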

  9. Multi-class geospatial object detection and geographic image classification based on collection of part detectors

    NASA Astrophysics Data System (ADS)

    Cheng, Gong; Han, Junwei; Zhou, Peicheng; Guo, Lei

    2014-12-01

    The rapid development of remote sensing technology has made it possible to acquire remote sensing images of ever higher spatial resolution, but automatically understanding the image contents remains a major challenge. In this paper, we develop a practical and rotation-invariant framework for multi-class geospatial object detection and geographic image classification based on a collection of part detectors (COPD). The COPD is composed of a set of representative and discriminative part detectors, where each part detector is a linear support vector machine (SVM) classifier used for the detection of objects or recurring spatial patterns within a certain range of orientation. Specifically, when performing multi-class geospatial object detection, we learn a set of seed-based part detectors where each part detector corresponds to a particular viewpoint of an object class, so the collection of them provides a solution for rotation-invariant detection of multi-class objects. When performing geographic image classification, we utilize a large number of pre-trained part detectors to discover distinctive visual parts from images and use them as attributes to represent the images. Comprehensive evaluations on two remote sensing image databases and comparisons with some state-of-the-art approaches demonstrate the effectiveness and superiority of the developed framework.
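    The collection-of-detectors idea, taking the maximum response over orientation-specific linear classifiers, can be sketched as below; the 3-D features, weights and threshold are hypothetical stand-ins for the learned SVM part detectors.

    ```python
    def linear_score(w, b, x):
        # Decision value of a linear SVM: w . x + b
        return sum(wi * xi for wi, xi in zip(w, x)) + b

    def rotation_invariant_detect(detectors, x, threshold=0.0):
        """Fire if ANY orientation-specific detector responds above threshold.
        `detectors` is a list of (w, b) pairs, one per orientation bin --
        a toy stand-in for the paper's seed-based part detectors."""
        best = max(linear_score(w, b, x) for w, b in detectors)
        return best > threshold, best

    # Toy 3-D features and two orientation-specific "detectors" (assumed weights).
    detectors = [([1.0, 0.0, 0.0], -0.5), ([0.0, 1.0, 0.0], -0.5)]
    hit, score = rotation_invariant_detect(detectors, [0.9, 0.1, 0.0])
    ```

    Because the maximum is taken over detectors covering all orientation bins, rotating the object (shifting which detector fires) leaves the detection decision unchanged.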

  10. Phase-Space Detection of Cyber Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez Jimenez, Jarilyn M; Ferber, Aaron E; Prowell, Stacy J

    Energy Delivery Systems (EDS) are a network of processes that produce, transfer and distribute energy. EDS are increasingly dependent on networked computing assets, as are many Industrial Control Systems. Consequently, cyber-attacks pose a real and pertinent threat, as evidenced by Stuxnet, Shamoon and Dragonfly. Hence, there is a critical need for novel methods to detect, prevent, and mitigate effects of such attacks. To detect cyber-attacks in EDS, we developed a framework for gathering and analyzing timing data that involves establishing a baseline execution profile and then capturing the effect of perturbations in the state from injecting various malware. The data analysis was based on nonlinear dynamics and graph theory to improve detection of anomalous events in cyber applications. The goal was the extraction of changing dynamics or anomalous activity in the underlying computer system. Takens' theorem in nonlinear dynamics allows reconstruction of topologically invariant, time-delay-embedding states from the computer data in a sufficiently high-dimensional space. The resultant dynamical states were nodes, and the state-to-state transitions were links in a mathematical graph. Alternatively, sequential tabulation of executing instructions provides the nodes with corresponding instruction-to-instruction links. Graph theorems guarantee graph-invariant measures to quantify the dynamical changes in the running applications. Results showed a successful detection of cyber events.
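    The Takens-style reconstruction described above, delay-embedding a scalar timing series and building a graph of state-to-state transitions, can be sketched as follows; the quantized toy series stands in for the actual EDS timing measurements.

    ```python
    def delay_embed(series, dim, tau):
        """Time-delay embedding (Takens): map a scalar series to points
        (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
        n = len(series) - (dim - 1) * tau
        return [tuple(series[t + k * tau] for k in range(dim)) for t in range(n)]

    def transition_graph(states):
        """Nodes are (quantized) embedded states; edges are the observed
        state-to-state transitions."""
        edges = set()
        for a, b in zip(states, states[1:]):
            edges.add((a, b))
        return set(states), edges

    series = [0, 1, 0, 1, 0, 1, 0, 1]   # a toy periodic "timing" signal
    states = delay_embed(series, dim=2, tau=1)
    nodes, edges = transition_graph(states)
    ```

    Malware-induced perturbations would show up as new nodes or edges, which graph-invariant measures (node count, degree distribution, etc.) can quantify.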

  11. Fast traffic sign recognition with a rotation invariant binary pattern based feature.

    PubMed

    Yin, Shouyi; Ouyang, Peng; Liu, Leibo; Guo, Yike; Wei, Shaojun

    2015-01-19

    Robust and fast traffic sign recognition is very important but difficult for safe driving assistance systems. This study addresses fast and robust traffic sign recognition to enhance driving safety. The proposed method includes three stages. First, a typical Hough transformation is adopted to perform coarse-grained localization of candidate traffic sign regions. Second, a RIBP (Rotation Invariant Binary Pattern) based feature in the affine and Gaussian space is proposed to reduce the time of traffic sign detection and achieve robust traffic sign detection in terms of scale, rotation, and illumination. Third, the techniques of ANN (Artificial Neural Network) based feature dimension reduction and classification are designed to reduce the traffic sign recognition time. Compared with the current work, the experimental results in the public datasets show that this work achieves robustness in traffic sign recognition with comparable recognition accuracy and faster processing speed, including training speed and recognition speed.
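    The rotation-invariance trick behind binary-pattern features can be illustrated with the classic LBP canonicalization: map each circular bit pattern to its minimum value over all cyclic rotations. This is a simplification; the paper's RIBP additionally operates in an affine and Gaussian space.

    ```python
    def ri_code(bits):
        """Map a circular binary pattern to a rotation-invariant canonical
        form: the minimum value over all cyclic rotations of the bit ring."""
        n = len(bits)
        value = lambda bs: sum(b << i for i, b in enumerate(bs))
        return min(value(bits[r:] + bits[:r]) for r in range(n))

    a = ri_code([1, 0, 0, 0, 1, 0, 0, 0])
    b = ri_code([0, 0, 1, 0, 0, 0, 1, 0])  # same pattern rotated by two
    ```

    Both calls yield the same code, so a rotated sign patch produces the same feature value.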

  12. Fast Traffic Sign Recognition with a Rotation Invariant Binary Pattern Based Feature

    PubMed Central

    Yin, Shouyi; Ouyang, Peng; Liu, Leibo; Guo, Yike; Wei, Shaojun

    2015-01-01

    Robust and fast traffic sign recognition is very important but difficult for safe driving assistance systems. This study addresses fast and robust traffic sign recognition to enhance driving safety. The proposed method includes three stages. First, a typical Hough transformation is adopted to perform coarse-grained localization of candidate traffic sign regions. Second, a RIBP (Rotation Invariant Binary Pattern) based feature in the affine and Gaussian space is proposed to reduce the time of traffic sign detection and achieve robust traffic sign detection in terms of scale, rotation, and illumination. Third, the techniques of ANN (Artificial Neural Network) based feature dimension reduction and classification are designed to reduce the traffic sign recognition time. Compared with the current work, the experimental results in the public datasets show that this work achieves robustness in traffic sign recognition with comparable recognition accuracy and faster processing speed, including training speed and recognition speed. PMID:25608217

  13. Factorial invariance of child self-report across healthy and chronic health condition groups: a confirmatory factor analysis utilizing the PedsQLTM 4.0 Generic Core Scales.

    PubMed

    Limbers, Christine A; Newman, Daniel A; Varni, James W

    2008-07-01

    The objective of the present study was to examine the factorial invariance of the PedsQL 4.0 Generic Core Scales for child self-report across 11,433 children ages 5-18 with chronic health conditions and healthy children. Multigroup Confirmatory Factor Analysis was performed specifying a five-factor model. Two multigroup structural equation models, one with constrained parameters and the other with unconstrained parameters, were proposed in order to compare the factor loadings across children with chronic health conditions and healthy children. Metric invariance (i.e., equal factor loadings) was demonstrated based on stability of the Comparative Fit Index (CFI) between the two models, and several additional indices of practical fit including the root mean squared error of approximation, the Non-normed Fit Index, and the Parsimony Normed Fit Index. The findings support an equivalent five-factor structure on the PedsQL 4.0 Generic Core Scales across healthy and chronic health condition groups. These findings suggest that when differences are found across chronic health condition and healthy groups when utilizing the PedsQL, these differences are more likely real differences in self-perceived health-related quality of life, rather than differences in interpretation of the PedsQL items as a function of health status.

  14. Neural evidence for description dependent reward processing in the framing effect

    PubMed Central

    Yu, Rongjun; Zhang, Ping

    2014-01-01

    Human decision making can be influenced by emotionally valenced contexts, a phenomenon known as the framing effect. We used event-related brain potentials to investigate how framing influences the encoding of reward. We found that the feedback related negativity (FRN), which indexes the “worse than expected” negative prediction error in the anterior cingulate cortex (ACC), was more negative for the negative frame than for the positive frame in the win domain. Consistent with previous findings that the FRN is not sensitive to “better than expected” positive prediction error, the FRN did not differentiate the positive and negative frame in the loss domain. Our results provide neural evidence that the description invariance principle, which states that reward representation and decision making are not influenced by how options are presented, is violated in the framing effect. PMID:24733998

  15. Are extreme events (statistically) special? (Invited)

    NASA Astrophysics Data System (ADS)

    Main, I. G.; Naylor, M.; Greenhough, J.; Touati, S.; Bell, A. F.; McCloskey, J.

    2009-12-01

    We address the generic problem of testing for scale-invariance in extreme events, i.e. are the biggest events in a population simply a scaled model of those of smaller size, or are they in some way different? Are large earthquakes for example ‘characteristic’, do they ‘know’ how big they will be before the event nucleates, or is the size of the event determined only in the avalanche-like process of rupture? In either case what are the implications for estimates of time-dependent seismic hazard? One way of testing for departures from scale invariance is to examine the frequency-size statistics, commonly used as a benchmark in a number of applications in Earth and Environmental sciences. Using frequency data however introduces a number of problems in data analysis. The inevitably small number of data points for extreme events and, more generally, the non-Gaussian statistical properties strongly affect the validity of prior assumptions about the nature of uncertainties in the data. The simple use of traditional least squares (still common in the literature) introduces an inherent bias to the best-fit result. We show first that the sampled frequency in finite real and synthetic data sets (the latter based on the Epidemic-Type Aftershock Sequence model) converges to a central limit only very slowly due to temporal correlations in the data. A specific correction for temporal correlations enables an estimate of convergence properties to be mapped non-linearly on to a Gaussian one. Uncertainties closely follow a Poisson distribution of errors across the whole range of seismic moment for typical catalogue sizes. In this sense the confidence limits are scale-invariant. A systematic sample bias effect due to counting whole numbers in a finite catalogue makes a ‘characteristic’-looking extreme event distribution a likely outcome of an underlying scale-invariant probability distribution. This highlights the tendency of ‘eyeball’ fits to unconsciously (but wrongly in this case) assume Gaussian errors. We develop methods to correct for these effects, and show that the current best-fit maximum likelihood regression model for the global frequency-moment distribution in the digital era is a power law, i.e. mega-earthquakes continue to follow the Gutenberg-Richter trend of smaller earthquakes with no (as yet) observable cut-off or characteristic extreme event. The results may also have implications for the interpretation of other time-limited geophysical time series that exhibit power-law scaling.
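    The maximum likelihood fit mentioned above is, for a Gutenberg-Richter power law, usually Aki's estimator of the b-value. A sketch under the simplifying assumption of continuous (unbinned) magnitudes:

    ```python
    import math
    import random

    def b_value_mle(mags, m_min):
        """Aki's maximum-likelihood b-value for magnitudes >= m_min
        (continuous-magnitude form; real catalogues need a binning correction)."""
        above = [m for m in mags if m >= m_min]
        mean_m = sum(above) / len(above)
        return math.log10(math.e) / (mean_m - m_min)

    # Synthetic catalogue drawn from a Gutenberg-Richter law with b = 1:
    # magnitudes above completeness are exponential with rate b*ln(10).
    random.seed(0)
    mags = [4.0 + random.expovariate(math.log(10)) for _ in range(20000)]
    b = b_value_mle(mags, 4.0)
    ```

    Unlike least squares on binned frequencies, this estimator avoids the bias from correlated counts that the abstract warns about.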

  16. Latent error detection: A golden two hours for detection.

    PubMed

    Saward, Justin R E; Stanton, Neville A

    2017-03-01

    Undetected error in safety critical contexts generates a latent condition that can contribute to a future safety failure. The detection of latent errors post-task completion is observed in naval air engineers using a diary to record work-related latent error detection (LED) events. A systems view is combined with multi-process theories to explore sociotechnical factors associated with LED. Perception of cues in different environments facilitates successful LED, for which the deliberate review of past tasks within two hours of the error occurring and whilst remaining in the same or similar sociotechnical environment to that which the error occurred appears most effective. Identified ergonomic interventions offer potential mitigation for latent errors; particularly in simple everyday habitual tasks. It is thought safety critical organisations should look to engineer further resilience through the application of LED techniques that engage with system cues across the entire sociotechnical environment, rather than relying on consistent human performance. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  17. Error Detection/Correction in Collaborative Writing

    ERIC Educational Resources Information Center

    Pilotti, Maura; Chodorow, Martin

    2009-01-01

    In the present study, we examined error detection/correction during collaborative writing. Subjects were asked to identify and correct errors in two contexts: a passage written by the subject (familiar text) and a passage written by a person other than the subject (unfamiliar text). A computer program inserted errors in function words prior to the…

  18. A Corpus-Based System of Error Detection and Revision Suggestion for Spanish Learners in Taiwan: A Case Study

    ERIC Educational Resources Information Center

    Lu, Hui-Chuan; Chu, Yu-Hsin; Chang, Cheng-Yu

    2013-01-01

    Compared with English learners, Spanish learners have fewer resources for automatic error detection and revision and following the current integrative Computer Assisted Language Learning (CALL), we combined corpus-based approach and CALL to create the System of Error Detection and Revision Suggestion (SEDRS) for learning Spanish. Through…

  19. Computer-Assisted Detection of 90% of EFL Student Errors

    ERIC Educational Resources Information Center

    Harvey-Scholes, Calum

    2018-01-01

    Software can facilitate English as a Foreign Language (EFL) students' self-correction of their free-form writing by detecting errors; this article examines the proportion of errors which software can detect. A corpus of 13,644 words of written English was created, comprising 90 compositions written by Spanish-speaking students at levels A2-B2…

  20. Detection and avoidance of errors in computer software

    NASA Technical Reports Server (NTRS)

    Kinsler, Les

    1989-01-01

    The acceptance test errors of a computer software project were analyzed to determine whether the errors could have been detected or avoided in earlier phases of development. GROAGSS (Gamma Ray Observatory Attitude Ground Support System) was selected as the software project to be examined. The development of the software followed the standard Flight Dynamics Software Development methods. GROAGSS was developed between August 1985 and April 1989. The project is approximately 250,000 lines of code, of which approximately 43,000 lines are reused from previous projects. GROAGSS had a total of 1715 Change Report Forms (CRFs) submitted during the entire development and testing. These changes contained 936 errors. Of these 936 errors, 374 were found during the acceptance testing. These acceptance test errors were first categorized by method of avoidance, including: more clearly written requirements; detailed review; code reading; structural unit testing; and functional system integration testing. The errors were later broken down in terms of effort to detect and correct, class of error, and probability that the prescribed detection method would be successful. These determinations were based on Software Engineering Laboratory (SEL) documents and interviews with the project programmers. A summary of the results of the categorizations is presented. The results suggest that the number of programming errors at the beginning of acceptance testing can be significantly reduced. The existing development methodology is examined for ways of improvement, and a basis is provided for the definition of a new development/testing paradigm. Monitoring of the new scheme will objectively determine its effectiveness at avoiding and detecting errors.

  1. Statistical characterisation of COSMO Sky-Med X-SAR retrieved precipitation fields by scale-invariance analysis

    NASA Astrophysics Data System (ADS)

    Deidda, Roberto; Mascaro, Giuseppe; Hellies, Matteo; Baldini, Luca; Roberto, Nicoletta

    2013-04-01

    COSMO Sky-Med (CSK) is an important programme of the Italian Space Agency aimed at supporting environmental monitoring and management of exogenous, endogenous and anthropogenic risks through X-band Synthetic Aperture Radar (X-SAR) on board four satellites forming a constellation. Most typical SAR applications are focused on land or ocean observation. However, X-band SAR can detect precipitation, which produces a specific signature caused by the combination of attenuation of surface returns induced by precipitation and enhancement of backscattering determined by the hydrometeors in the SAR resolution volume. Within the CSK programme, we conducted an intercomparison between the statistical properties of precipitation fields derived by CSK SARs and those derived by the CNR Polar 55C (C-band) ground-based weather radar located in Rome (Italy). This contribution presents the main results of this research, which was aimed at the robust characterisation of rainfall statistical properties across different scales by means of scale-invariance analysis and multifractal theory. The analysis was performed on a dataset of more than two years of precipitation observations collected by the CNR Polar 55C radar and on rainfall fields derived from available images collected by the CSK satellites during intense rainfall events. Scale-invariance laws and multifractal properties were detected in the most intense rainfall events derived from the CNR Polar 55C radar for spatial scales from 4 km to 64 km. The analysis of X-SAR retrieved rainfall fields, although based on few images, led to similar results and confirmed the existence of scale-invariance and multifractal properties for scales larger than 4 km. These outcomes encourage investigating SAR methodologies for future development of meteo-hydrological forecasting models based on multifractal theory.
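    A scale-invariance (moment-scaling) analysis of the kind described can be sketched by coarse-graining a 1-D field and checking that the log-moments vary linearly with log-scale. The uniform toy field below gives a flat (zero-slope) line; real rainfall fields yield non-trivial moment-scaling exponents K(q).

    ```python
    import math

    def moment_scaling(field, q, scales):
        """Coarse-grain a 1-D field by block averaging at each scale s and
        return (log s, log <field_s^q>) points; scale invariance appears as
        a straight line whose slope gives the moment-scaling exponent K(q)."""
        pts = []
        n = len(field)
        for s in scales:
            blocks = [sum(field[i:i + s]) / s for i in range(0, n - n % s, s)]
            moment = sum(b ** q for b in blocks) / len(blocks)
            pts.append((math.log(s), math.log(moment)))
        return pts

    field = [1.0] * 64                 # trivial uniform "rainfall" field
    pts = moment_scaling(field, q=2, scales=[1, 2, 4, 8])
    ```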

  2. Crista egregia: a geometrical model of the crista ampullaris, a sensory surface that detects head rotations.

    PubMed

    Marianelli, Prisca; Berthoz, Alain; Bennequin, Daniel

    2015-02-01

    The crista ampullaris is the epithelium at the end of the semicircular canals in the inner ear of vertebrates, which contains the sensory cells involved in the transduction of the rotational head movements into neuronal activity. The crista surface has the form of a saddle, or a pair of saddles separated by a crux, depending on the species and the canal considered. In birds, it was described as a catenoid by Landolt et al. (J Comp Neurol 159(2):257-287, doi: 10.1002/cne.901590207 , 1972). In the present work, we establish that this particular form results from principles of invariance maximization and energy minimization. The formulation of the invariance principle was inspired by Takumida (Biol Sci Space 15(4):356-358, 2001). More precisely, we suppose that in functional conditions, the equations of linear elasticity are valid, and we assume that in a certain domain of the cupula, in proximity of the crista surface, (1) the stress tensor of the deformed cupula is invariant under the gradient of the pressure, (2) the dissipation of energy is minimum. Then, we deduce that in this domain the crista surface is a minimal surface and that it must be either a planar, or helicoidal Scherk surface, or a piece of catenoid, which is the unique minimal surface of revolution. If we add the hypothesis that the direction of invariance of the stress tensor is unique and that a bilateral symmetry of the crista exists, only the catenoid subsists. This finding has important consequences for further functional modeling of the role of the vestibular system in head motion detection and spatial orientation.

  3. Magneto-optical tracking of flexible laparoscopic ultrasound: model-based online detection and correction of magnetic tracking errors.

    PubMed

    Feuerstein, Marco; Reichl, Tobias; Vogel, Jakob; Traub, Joerg; Navab, Nassir

    2009-06-01

    Electromagnetic tracking is currently one of the most promising means of localizing flexible endoscopic instruments such as flexible laparoscopic ultrasound transducers. However, electromagnetic tracking is also susceptible to interference from ferromagnetic material, which distorts the magnetic field and leads to tracking errors. This paper presents new methods for real-time online detection and reduction of dynamic electromagnetic tracking errors when localizing a flexible laparoscopic ultrasound transducer. We use a hybrid tracking setup to combine optical tracking of the transducer shaft and electromagnetic tracking of the flexible transducer tip. A novel approach of modeling the poses of the transducer tip in relation to the transducer shaft allows us to reliably detect and significantly reduce electromagnetic tracking errors. For detecting errors of more than 5 mm, we achieved a sensitivity and specificity of 91% and 93%, respectively. The initial 3-D RMS error of 6.91 mm was reduced to 3.15 mm.

  4. New double-byte error-correcting codes for memory systems

    NASA Technical Reports Server (NTRS)

    Feng, Gui-Liang; Wu, Xinen; Rao, T. R. N.

    1996-01-01

    Error-correcting or error-detecting codes have been used in the computer industry to increase reliability, reduce service costs, and maintain data integrity. The single-byte error-correcting and double-byte error-detecting (SbEC-DbED) codes have been successfully used in computer memory subsystems. There are many methods to construct double-byte error-correcting (DBEC) codes. In the present paper we construct a class of double-byte error-correcting codes, which are more efficient than those known to be optimum, and a decoding procedure for our codes is also considered.
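    The byte-oriented SbEC-DbED constructions are involved, but the underlying syndrome-decoding idea can be shown with the much simpler single-bit-error-correcting Hamming(7,4) code:

    ```python
    def hamming74_encode(d):
        """Encode 4 data bits as a 7-bit Hamming codeword (positions 1..7,
        parity bits at positions 1, 2, 4). Corrects any single bit error --
        a toy relative of the byte-oriented SbEC-DbED codes discussed above."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_correct(c):
        """Return the corrected codeword and the error position (0 = no error).
        The syndrome bits spell out the flipped position in binary."""
        c = list(c)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        pos = s1 + 2 * s2 + 4 * s3
        if pos:
            c[pos - 1] ^= 1
        return c, pos

    word = hamming74_encode([1, 0, 1, 1])
    corrupted = list(word)
    corrupted[4] ^= 1                  # flip one bit in transit
    fixed, pos = hamming74_correct(corrupted)
    ```

    The byte-oriented codes in the paper apply the same syndrome principle over a finite field GF(2^8) rather than over single bits.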

  5. Map-invariant spectral analysis for the identification of DNA periodicities

    PubMed Central

    2012-01-01

    Many signal processing based methods for finding hidden periodicities in DNA sequences have primarily focused on assigning numerical values to the symbolic DNA sequence and then applying spectral analysis tools such as the short-time discrete Fourier transform (ST-DFT) to locate these repeats. The key results pertaining to this approach are however obtained using a very specific symbolic to numerical map, namely the so-called Voss representation. An important research problem is to therefore quantify the sensitivity of these results to the choice of the symbolic to numerical map. In this article, a novel algebraic approach to the periodicity detection problem is presented and provides a natural framework for studying the role of the symbolic to numerical map in finding these repeats. More specifically, we derive a new matrix-based expression of the DNA spectrum that comprises most of the widely used mappings in the literature as special cases, show that the DNA spectrum is in fact invariant under all these mappings, and generate a necessary and sufficient condition for the invariance of the DNA spectrum to the symbolic to numerical map. Furthermore, the new algebraic framework decomposes the periodicity detection problem into several fundamental building blocks that are totally independent of each other. Sophisticated digital filters and/or alternate fast data transforms such as the discrete cosine and sine transforms can therefore always be incorporated in the periodicity detection scheme regardless of the choice of the symbolic to numerical map. Although the newly proposed framework is matrix based, identification of these periodicities can be achieved at a low computational cost. PMID:23067324
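    The Voss-mapped DFT periodicity measure discussed above can be sketched as follows; the peak at DFT bin N/3 is the classic period-3 signature used for exon detection.

    ```python
    import cmath

    def voss(seq):
        """Voss representation: one binary indicator sequence per nucleotide."""
        return {b: [1.0 if ch == b else 0.0 for ch in seq] for b in "ACGT"}

    def dna_power(seq, k):
        """Power of the DNA spectrum at DFT bin k, summed over the four
        Voss indicator sequences (the usual ST-DFT periodicity measure)."""
        n = len(seq)
        total = 0.0
        for u in voss(seq).values():
            x = sum(u[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            total += abs(x) ** 2
        return total

    seq = "ACGACGACGACG"           # toy sequence with exact period 3
    peak = dna_power(seq, len(seq) // 3)   # bin N/3 corresponds to period 3
    ```

    Per the paper's invariance result, many other symbolic-to-numerical maps would produce the same spectrum as this Voss-based computation.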

  6. Detecting and Characterizing Semantic Inconsistencies in Ported Code

    NASA Technical Reports Server (NTRS)

    Ray, Baishakhi; Kim, Miryung; Person, Suzette; Rungta, Neha

    2013-01-01

    Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.

  7. Detection and analysis of diamond fingerprinting feature and its application

    NASA Astrophysics Data System (ADS)

    Li, Xin; Huang, Guoliang; Li, Qiang; Chen, Shengyi

    2011-01-01

    Before becoming a jewelry diamonds need to be carved artistically with some special geometric features as the structure of the polyhedron. There are subtle differences in the structure of this polyhedron in each diamond. With the spatial frequency spectrum analysis of diamond surface structure, we can obtain the diamond fingerprint information which represents the "Diamond ID" and has good specificity. Based on the optical Fourier Transform spatial spectrum analysis, the fingerprinting identification of surface structure of diamond in spatial frequency domain was studied in this paper. We constructed both the completely coherent diamond fingerprinting detection system illuminated by laser and the partially coherent diamond fingerprinting detection system illuminated by led, and analyzed the effect of the coherence of light source to the diamond fingerprinting feature. We studied rotation invariance and translation invariance of the diamond fingerprinting and verified the feasibility of real-time and accurate identification of diamond fingerprint. With the profit of this work, we can provide customs, jewelers and consumers with a real-time and reliable diamonds identification instrument, which will curb diamond smuggling, theft and other crimes, and ensure the healthy development of the diamond industry.

  8. EEG-based driver fatigue detection using hybrid deep generic model.

    PubMed

    Phyo Phyo San; Sai Ho Ling; Rifai Chai; Tran, Yvonne; Craig, Ashley; Hung Nguyen

    2016-08-01

    Classification of electroencephalography (EEG)-based applications is an important process in biomedical engineering. Driver fatigue is a major cause of traffic accidents worldwide and has been considered a significant problem in recent decades. In this paper, a hybrid deep generic model (DGM)-based support vector machine is proposed for accurate detection of driver fatigue. Traditionally, a probabilistic DGM with deep architecture is quite good at learning invariant features, but it is not always optimal for classification because its trainable parameters lie in the middle layers. Alternatively, a Support Vector Machine (SVM) by itself is unable to learn complicated invariance, but produces a good decision surface when applied to well-behaved features. Consolidating unsupervised high-level feature extraction, DGM and SVM classification makes the integrated framework stronger, with feature extraction and classification enhancing each other. The experimental results showed that the proposed DGM-based driver fatigue monitoring system achieves a testing accuracy of 73.29% with 91.10% sensitivity and 55.48% specificity. In short, the proposed hybrid DGM-based SVM is an effective method for the detection of driver fatigue from EEG.
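    The reported sensitivity and specificity follow directly from confusion-matrix counts; a minimal sketch with made-up labels (label 1 = fatigued):

    ```python
    def sensitivity_specificity(y_true, y_pred):
        """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), with label 1
        meaning 'fatigued'. A sketch of the reported evaluation metrics."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        return tp / (tp + fn), tn / (tn + fp)

    # Illustrative labels only (not the study's data).
    y_true = [1, 1, 1, 1, 0, 0, 0, 0]
    y_pred = [1, 1, 1, 0, 0, 1, 0, 0]
    sens, spec = sensitivity_specificity(y_true, y_pred)
    ```

    The study's high sensitivity but low specificity means the detector rarely misses fatigue but raises many false alarms on alert drivers.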

  9. Stability Assessment and Tuning of an Adaptively Augmented Classical Controller for Launch Vehicle Flight Control

    NASA Technical Reports Server (NTRS)

    VanZwieten, Tannen; Zhu, J. Jim; Adami, Tony; Berry, Kyle; Grammar, Alex; Orr, Jeb S.; Best, Eric A.

    2014-01-01

    Recently, a robust and practical adaptive control scheme for launch vehicles [1] has been introduced. It augments a classical controller with a real-time loop-gain adaptation, and it is therefore called Adaptive Augmentation Control (AAC). The loop-gain will be increased from the nominal design when the tracking error between the (filtered) output and the (filtered) command trajectory is large, whereas it will be decreased when excitation of flex or sloshing modes is detected. There is a need to determine the range and rate of the loop-gain adaptation in order to retain (exponential) stability, which is critical in vehicle operation, and to develop some theoretically based heuristic tuning methods for the adaptive law gain parameters. Classical launch vehicle flight controller design techniques are based on gain-scheduling, whereby the launch vehicle dynamics model is linearized at selected operating points along the nominal tracking command trajectory, and Linear Time-Invariant (LTI) controller design techniques are employed to ensure asymptotic stability of the tracking error dynamics, typically by meeting some prescribed Gain Margin (GM) and Phase Margin (PM) specifications. The controller gains at the design points are then scheduled, tuned and sometimes interpolated to achieve good performance and stability robustness under external disturbances (e.g. winds) and structural perturbations (e.g. vehicle modeling errors). While the GM does give a bound for loop-gain variation without losing stability, it applies only to constant dispersions of the loop-gain, because the GM is based on frequency-domain analysis, which is applicable only to LTI systems. The real-time adaptive loop-gain variation of the AAC effectively renders the closed-loop system time-varying, for which it is well known that the LTI stability criterion is neither necessary nor sufficient when applied in a frozen-time fashion. Therefore, a generalized stability metric for time-varying loop-gain perturbations is needed for the AAC.

  10. The effects of non-stationary noise on electromagnetic response estimates

    NASA Astrophysics Data System (ADS)

    Banks, R. J.

    1998-11-01

    The noise in natural electromagnetic time series is typically non-stationary. Sections of data with high magnetic noise levels bias impedances and generate unreliable error estimates. Sections containing noise that is coherent between electric and magnetic channels also produce inappropriate impedances and errors. The answer is to compute response values for data sections which are as short as is feasible, i.e. which are compatible both with the chosen bandwidth and with the need to over-determine the least-squares estimation of the impedance and coherence. Only those values that are reliable are selected, and the best single measure of the reliability of Earth impedance estimates is their temporal invariance, which is tested by the coherence between the measured and predicted electric fields. Complex demodulation is the method used here to explore the temporal structure of electromagnetic fields in the period range 20-6000 s. For periods above 300 s, noisy sections are readily identified in time series of impedance values. The corresponding estimates deviate strongly from the normal value, are biased towards low impedance values, and are associated with low coherences. Plots of the impedance against coherence are particularly valuable diagnostic aids. For periods below 300 s, impedance bias increases systematically as the coherence falls, identifying input channel noise as the cause. By selecting sections with high coherence (equivalent to the impedance being invariant over the section) unbiased impedances and realistic errors can be determined. The scatter in impedance values among high-coherence sections is due to noise that is coherent between input and output channels, implying the presence of two or more systems for which a consistent response can be defined. 
Where the Earth and noise responses are significantly different, it may be possible to improve estimates of the former by rejecting sections that do not generate satisfactory values for all the response elements.
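
The coherence-based selection of data sections described above can be sketched numerically. This is an illustrative sketch, not the paper's code: the function name `coherence`, the scalar impedance `Z`, and the synthetic single-section setup are assumptions; real magnetotelluric processing works band-by-band on demodulated sections.

```python
import numpy as np

def coherence(a, b):
    """Squared coherence between measured and predicted complex series."""
    return np.abs(np.vdot(a, b))**2 / (np.vdot(a, a).real * np.vdot(b, b).real)

rng = np.random.default_rng(1)
n, Z = 256, 2.0 + 0.5j                    # hypothetical scalar impedance
B = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # magnetic field
E_pred = Z * B                            # electric field predicted by the impedance

def section(sigma):
    """Measured electric field for a data section with noise level sigma."""
    noise = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return E_pred + noise

quiet, noisy = section(0.05), section(2.0)
# Keep only sections whose measured/predicted coherence is high
selected = [name for name, E in [("quiet", quiet), ("noisy", noisy)]
            if coherence(E, E_pred) > 0.9]
```

With these noise levels, only the quiet section passes the coherence cut, mirroring the paper's observation that low coherence identifies noisy, biased sections.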

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuangrod, T; Simpson, J; Greer, P

    Purpose: A real-time patient treatment delivery verification system using EPID (Watchdog) has been developed as an advanced patient safety tool. In a pilot study, data were acquired for 119 prostate and head and neck (HN) IMRT patient deliveries to generate body-site-specific action limits using statistical process control. The purpose of this study is to determine the sensitivity of Watchdog to detect clinically significant errors during treatment delivery. Methods: Watchdog utilizes a physics-based model to generate a series of predicted transit cine EPID images as a reference data set, and compares these in real-time to measured transit cine EPID images acquired during treatment using chi comparison (4%, 4 mm criteria) after the initial 2 s of treatment to allow for dose ramp-up. Four study cases were used: dosimetric (monitor unit) errors in prostate (7 fields) and HN (9 fields) IMRT treatments of (5%, 7%, 10%) and positioning (systematic displacement) errors in the same treatments of (5 mm, 7 mm, 10 mm). These errors were introduced by modifying the patient CT scan and re-calculating the predicted EPID data set. The error-embedded predicted EPID data sets were compared to the measured EPID data acquired during patient treatment. The treatment delivery percentage (measured from 2 s) at which Watchdog detected the error was determined. Results: Watchdog detected all simulated errors for all fields during delivery. The dosimetric errors were detected at average treatment delivery percentages of (4%, 0%, 0%) and (7%, 0%, 0%) for prostate and HN respectively. For patient positional errors, the average treatment delivery percentages were (52%, 43%, 25%) and (39%, 16%, 6%). Conclusion: These results suggest that Watchdog can detect significant dosimetric and positioning errors in prostate and HN IMRT treatments in real-time, allowing for treatment interruption. Displacements of the patient take longer to detect; however, an incorrect body site or a very large geographic miss will be detected rapidly.

  12. The Watchdog Task: Concurrent error detection using assertions

    NASA Technical Reports Server (NTRS)

    Ersoz, A.; Andrews, D. M.; Mccluskey, E. J.

    1985-01-01

    The Watchdog Task, a software abstraction of the Watchdog-processor, is shown to be a powerful error detection tool with a great deal of flexibility and the advantages of watchdog techniques. A Watchdog Task system in Ada is presented; issues of recovery, latency, efficiency (communication) and preprocessing are discussed. Different applications, one of which is error detection on a single processor, are examined.
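
The original Watchdog Task is an Ada tasking construct; a minimal analogue of the idea, a concurrent task that evaluates assertions over shared state, can be sketched in Python. The class name, check period, and invariant below are illustrative, not from the paper.

```python
import threading
import time

class WatchdogTask(threading.Thread):
    """A concurrent task that repeatedly evaluates an assertion over
    shared state and latches an error flag as soon as it fails."""
    def __init__(self, assertion, period=0.01):
        super().__init__(daemon=True)
        self.assertion = assertion
        self.period = period
        self.error_detected = threading.Event()

    def run(self):
        while not self.error_detected.is_set():
            if not self.assertion():
                self.error_detected.set()   # error latched; recovery could start here
            time.sleep(self.period)

state = {"counter": 0}
wd = WatchdogTask(lambda: 0 <= state["counter"] <= 100)  # illustrative invariant
wd.start()
state["counter"] = 999                     # simulated fault in the monitored task
caught = wd.error_detected.wait(timeout=2.0)
```

The `Event` doubles as the communication channel discussed in the abstract: the monitored task (or a recovery routine) can wait on it, which keeps detection latency bounded by the check period.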

  13. A Review of Research on Error Detection. Technical Report No. 540.

    ERIC Educational Resources Information Center

    Meyer, Linda A.

    A review was conducted of the research on error detection studies completed with children, adolescents, and young adults to determine at what age children begin to detect errors in texts. The studies were grouped according to the subjects' ages. The focus of the review was on the following aspects of each study: the hypothesis that guided the…

  14. Evaluating suggestibility to additive and contradictory misinformation following explicit error detection in younger and older adults.

    PubMed

    Huff, Mark J; Umanath, Sharda

    2018-06-01

    In 2 experiments, we assessed age-related suggestibility to additive and contradictory misinformation (i.e., remembering of false details from an external source). After reading a fictional story, participants answered questions containing misleading details that were either additive (misleading details that supplemented an original event) or contradictory (errors that changed original details). On a final test, suggestibility was greater for additive than contradictory misinformation, and older adults endorsed fewer false contradictory details than younger adults. To mitigate suggestibility in Experiment 2, participants were warned about potential errors, instructed to detect errors, or instructed to detect errors after exposure to examples of additive and contradictory details. Again, suggestibility to additive misinformation was greater than to contradictory misinformation, and older adults endorsed less contradictory misinformation. Only after detection instructions with misinformation examples were younger adults able to reduce contradictory misinformation effects, and they reduced these effects to the level of older adults. Additive misinformation, however, was immune to all warning and detection instructions. Thus, older adults were less susceptible to contradictory misinformation errors, and younger adults could match this misinformation rate when warning/detection instructions were strong.

  15. Intensity Conserving Spectral Fitting

    NASA Technical Reports Server (NTRS)

    Klimchuk, J. A.; Patsourakos, S.; Tripathi, D.

    2015-01-01

    The detailed shapes of spectral line profiles provide valuable information about the emitting plasma, especially when the plasma contains an unresolved mixture of velocities, temperatures, and densities. As a result of finite spectral resolution, the intensity measured by a spectrometer is the average intensity across a wavelength bin of non-zero size. It is assigned to the wavelength position at the center of the bin. However, the actual intensity at that discrete position will be different if the profile is curved, as it invariably is. Standard fitting routines (spline, Gaussian, etc.) do not account for this difference, and this can result in significant errors when making sensitive measurements. Detection of asymmetries in solar coronal emission lines is one example. Removal of line blends is another. We have developed an iterative procedure that corrects for this effect. It can be used with any fitting function, but we employ a cubic spline in a new analysis routine called Intensity Conserving Spline Interpolation (ICSI). As the name implies, it conserves the observed intensity within each wavelength bin, which ordinary fits do not. Given the rapid convergence, speed of computation, and ease of use, we suggest that ICSI be made a standard component of the processing pipeline for spectroscopic data.
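
The iterative correction can be sketched as follows. This is a reconstruction of the idea from the abstract, not the published ICSI routine: the node-update rule, iteration count, and sampling density are assumptions, and a plain cubic spline stands in for the production fit.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def icsi(wavelengths, observed, bin_width, n_iter=20, oversample=32):
    """Iteratively adjust node intensities until the bin-averaged spline
    reproduces the observed (bin-averaged) intensities."""
    nodes = np.asarray(observed, float).copy()
    offs = (np.arange(oversample) + 0.5) / oversample - 0.5
    for _ in range(n_iter):
        spline = CubicSpline(wavelengths, nodes)
        samples = wavelengths[:, None] + offs[None, :] * bin_width
        bin_avg = spline(samples).mean(axis=1)  # what a spectrometer would record
        nodes += observed - bin_avg             # push bin averages toward the data
    return nodes

# Demo: a curved (Gaussian) profile measured in finite wavelength bins
w = np.linspace(-3.0, 3.0, 25)
bw = w[1] - w[0]
fine = (np.arange(200) + 0.5) / 200 - 0.5
observed = np.exp(-(w[:, None] + fine[None, :] * bw)**2 / 2).mean(axis=1)
corrected = icsi(w, observed, bw)
```

On this profile the corrected node values land closer to the true discrete intensities than the raw bin averages, illustrating the bin-center bias the abstract describes for curved profiles.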

  16. Vision-Based Autonomous Sensor-Tasking in Uncertain Adversarial Environments

    DTIC Science & Technology

    2015-01-02

    motion segmentation and change detection in crowd behavior. In particular, we investigated Finite Time Lyapunov Exponents, Perron Frobenius Operator and...deformation tensor [11]. On the other hand, eigenfunctions of the Perron Frobenius operator can be used to detect Almost Invariant Sets (AIS), which are... Perron Frobenius operator. Finally, Figure 1.12d shows the ergodic partitions (EP) obtained based on the eigenfunctions of the Koopman operator

  17. An Efficient Silent Data Corruption Detection Method with Error-Feedback Control and Even Sampling for HPC Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Sheng; Berrocal, Eduardo; Cappello, Franck

    The silent data corruption (SDC) problem is attracting increasing attention because it is expected to have a great impact on exascale HPC applications. SDC faults are hazardous in that they pass unnoticed by hardware and can lead to wrong computation results. In this work, we formulate SDC detection as a runtime one-step-ahead prediction method, leveraging multiple linear prediction methods in order to improve the detection results. The contributions are twofold: (1) we propose an error-feedback control model that can reduce the prediction errors of different linear prediction methods, and (2) we propose a spatial-data-based even-sampling method to minimize the detection overheads (including memory and computation cost). We implement our algorithms in the fault tolerance interface, a fault tolerance library with multiple checkpoint levels, such that users can conveniently protect their HPC applications against both SDC errors and fail-stop errors. We evaluate our approach by using large-scale traces from well-known, large-scale HPC applications, as well as by running those HPC applications in a real cluster environment. Experiments show that our error-feedback control model can improve detection sensitivity by 34-189% for bit-flip memory errors injected at bit positions in the range [20,30], without any degradation of detection accuracy. Furthermore, memory size can be reduced by 33% with our spatial-data even-sampling method, with only a slight and graceful degradation in detection sensitivity.
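
The one-step-ahead prediction with error-feedback control can be illustrated with a toy detector. The linear extrapolator, feedback gain `alpha`, and threshold `theta` below are illustrative choices, not the paper's tuned models or limits.

```python
import numpy as np

def detect_sdc(x, alpha=0.5, theta=0.05):
    """One-step-ahead linear extrapolation with error-feedback control.
    Samples deviating from the prediction by more than theta are flagged
    and replaced by the prediction so later forecasts stay clean."""
    x = np.asarray(x, float).copy()
    detected = []
    feedback = 0.0
    for t in range(2, len(x)):
        pred = 2.0 * x[t - 1] - x[t - 2] + alpha * feedback  # linear extrapolation
        err = x[t] - pred
        if abs(err) > theta:
            detected.append(t)
            x[t] = pred      # restore the corrupted value
            err = 0.0
        feedback = err       # feed the residual into the next prediction
    return detected, x

signal = np.sin(0.1 * np.arange(200))
signal[100] += 1.0           # injected bit-flip-like corruption
flagged, restored = detect_sdc(signal)
```

On this smooth signal the detector flags only the injected sample and restores it with the predicted value, which is the behavior the abstract's feedback model aims to sharpen across prediction methods.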

  18. Sources of errors in the simulation of south Asian summer monsoon in the CMIP5 GCMs

    DOE PAGES

    Ashfaq, Moetasim; Rastogi, Deeksha; Mei, Rui; ...

    2016-09-19

    Accurate simulation of the South Asian summer monsoon (SAM) is still an unresolved challenge. There has not been a benchmark effort to decipher the origin of undesired yet virtually invariable unsuccessfulness of general circulation models (GCMs) over this region. This study analyzes a large ensemble of CMIP5 GCMs to show that most of the simulation errors in the precipitation distribution and their driving mechanisms are systematic and of similar nature across the GCMs, with biases in meridional differential heating playing a critical role in determining the timing of monsoon onset over land, the magnitude of seasonal precipitation distribution and the trajectories of monsoon depressions. Errors in the pre-monsoon heat low over the lower latitudes and atmospheric latent heating over the slopes of Himalayas and Karakoram Range induce significant errors in the atmospheric circulations and meridional differential heating. Lack of timely precipitation further exacerbates such errors by limiting local moisture recycling and latent heating aloft from convection. Most of the summer monsoon errors and their sources are reproducible in the land–atmosphere configuration of a GCM when it is configured at horizontal grid spacing comparable to the CMIP5 GCMs. While an increase in resolution overcomes many modeling challenges, coarse resolution is not necessarily the primary driver in the exhibition of errors over South Asia. Ultimately, these results highlight the importance of previously less well known pre-monsoon mechanisms that critically influence the strength of SAM in the GCMs and highlight the importance of land–atmosphere interactions in the development and maintenance of SAM.

  19. Dominant Drivers of GCMs Errors in the Simulation of South Asian Summer Monsoon

    NASA Astrophysics Data System (ADS)

    Ashfaq, Moetasim

    2017-04-01

    Accurate simulation of the South Asian summer monsoon (SAM) is a longstanding unresolved problem in climate modeling science. There has not been a benchmark effort to decipher the origin of undesired yet virtually invariable unsuccessfulness of general circulation models (GCMs) over this region. This study analyzes a large ensemble of CMIP5 GCMs to demonstrate that most of the simulation errors in the summer season and their driving mechanisms are systematic and of similar nature across the GCMs, with biases in meridional differential heating playing a critical role in determining the timing of monsoon onset over land, the magnitude of seasonal precipitation distribution and the trajectories of monsoon depressions. Errors in the pre-monsoon heat low over the lower latitudes and atmospheric latent heating over the slopes of Himalayas and Karakoram Range induce significant errors in the atmospheric circulations and meridional differential heating. Lack of timely precipitation over land further exacerbates such errors by limiting local moisture recycling and latent heating aloft from convection. Most of the summer monsoon errors and their sources are reproducible in the land-atmosphere configuration of a GCM when it is configured at horizontal grid spacing comparable to the CMIP5 GCMs. While an increase in resolution overcomes many modeling challenges, coarse resolution is not necessarily the primary driver in the exhibition of errors over South Asia. These results highlight the importance of previously less well known pre-monsoon mechanisms that critically influence the strength of SAM in the GCMs and highlight the importance of land-atmosphere interactions in the development and maintenance of SAM.

  20. Sources of errors in the simulation of south Asian summer monsoon in the CMIP5 GCMs

    NASA Astrophysics Data System (ADS)

    Ashfaq, Moetasim; Rastogi, Deeksha; Mei, Rui; Touma, Danielle; Ruby Leung, L.

    2017-07-01

    Accurate simulation of the South Asian summer monsoon (SAM) is still an unresolved challenge. There has not been a benchmark effort to decipher the origin of undesired yet virtually invariable unsuccessfulness of general circulation models (GCMs) over this region. This study analyzes a large ensemble of CMIP5 GCMs to show that most of the simulation errors in the precipitation distribution and their driving mechanisms are systematic and of similar nature across the GCMs, with biases in meridional differential heating playing a critical role in determining the timing of monsoon onset over land, the magnitude of seasonal precipitation distribution and the trajectories of monsoon depressions. Errors in the pre-monsoon heat low over the lower latitudes and atmospheric latent heating over the slopes of Himalayas and Karakoram Range induce significant errors in the atmospheric circulations and meridional differential heating. Lack of timely precipitation further exacerbates such errors by limiting local moisture recycling and latent heating aloft from convection. Most of the summer monsoon errors and their sources are reproducible in the land-atmosphere configuration of a GCM when it is configured at horizontal grid spacing comparable to the CMIP5 GCMs. While an increase in resolution overcomes many modeling challenges, coarse resolution is not necessarily the primary driver in the exhibition of errors over South Asia. These results highlight the importance of previously less well known pre-monsoon mechanisms that critically influence the strength of SAM in the GCMs and highlight the importance of land-atmosphere interactions in the development and maintenance of SAM.

  1. Sources of errors in the simulation of south Asian summer monsoon in the CMIP5 GCMs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ashfaq, Moetasim; Rastogi, Deeksha; Mei, Rui

    2016-09-19

    Accurate simulation of the South Asian summer monsoon (SAM) is still an unresolved challenge. There has not been a benchmark effort to decipher the origin of undesired yet virtually invariable unsuccessfulness of general circulation models (GCMs) over this region. This study analyzes a large ensemble of CMIP5 GCMs to show that most of the simulation errors in the precipitation distribution and their driving mechanisms are systematic and of similar nature across the GCMs, with biases in meridional differential heating playing a critical role in determining the timing of monsoon onset over land, the magnitude of seasonal precipitation distribution and the trajectories of monsoon depressions. Errors in the pre-monsoon heat low over the lower latitudes and atmospheric latent heating over the slopes of Himalayas and Karakoram Range induce significant errors in the atmospheric circulations and meridional differential heating. Lack of timely precipitation further exacerbates such errors by limiting local moisture recycling and latent heating aloft from convection. Most of the summer monsoon errors and their sources are reproducible in the land–atmosphere configuration of a GCM when it is configured at horizontal grid spacing comparable to the CMIP5 GCMs. While an increase in resolution overcomes many modeling challenges, coarse resolution is not necessarily the primary driver in the exhibition of errors over South Asia. These results highlight the importance of previously less well known pre-monsoon mechanisms that critically influence the strength of SAM in the GCMs and highlight the importance of land–atmosphere interactions in the development and maintenance of SAM.

  2. Repeat-aware modeling and correction of short read errors.

    PubMed

    Yang, Xiao; Aluru, Srinivas; Dorman, Karin S

    2011-02-15

    High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under the GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id=redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors for genomes with high repeat content.
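
The baseline k-mer counting scheme that the abstract builds on can be sketched briefly. This is a generic illustration of threshold-based k-mer validation, not the REDEEM implementation; the function names and the threshold value are assumptions.

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count every k-mer occurrence across all reads."""
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def flag_suspect_reads(reads, k, threshold):
    """Flag reads containing any k-mer at or below the frequency threshold,
    the usual signature of a sequencing error."""
    counts = kmer_counts(reads, k)
    return [i for i, r in enumerate(reads)
            if any(counts[r[j:j + k]] <= threshold
                   for j in range(len(r) - k + 1))]

reads = ["ACGTACGT"] * 10 + ["ACGTACCT"]   # last read carries a G->C error
suspects = flag_suspect_reads(reads, k=4, threshold=1)
```

Here only the erroneous read is flagged, because its error-spanning k-mers appear once while the valid k-mers are well covered. The abstract's point is that in repeat-rich genomes this simple rule breaks down, motivating their statistical inference of genomic k-mer frequencies.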

  3. A Nonlinear Calibration Algorithm Based on Harmonic Decomposition for Two-Axis Fluxgate Sensors

    PubMed Central

    Liu, Shibin

    2018-01-01

    Nonlinearity is a prominent limitation to the calibration performance of two-axis fluxgate sensors. In this paper, a novel nonlinear calibration algorithm taking into account the nonlinearity of errors is proposed. In order to establish the nonlinear calibration model, the combined effect of all time-invariant errors is analyzed in detail, and then a harmonic decomposition method is utilized to estimate the compensation coefficients. Meanwhile, the proposed nonlinear calibration algorithm is validated and compared with a classical calibration algorithm by experiments. The experimental results show that, after the nonlinear calibration, the maximum deviation of magnetic field magnitude is decreased from 1302 nT to 30 nT, which is smaller than the 81 nT after the classical calibration. Furthermore, for the two-axis fluxgate sensor used as a magnetic compass, the maximum heading error is corrected from 1.86° to 0.07°, approximately 11% of the 0.62° error remaining after the classical calibration. The results suggest an effective way to improve the calibration performance of two-axis fluxgate sensors. PMID:29789448

  4. Metainference: A Bayesian inference method for heterogeneous systems.

    PubMed

    Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

    2016-01-01

    Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called "metainference," that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors.

  5. Environmentally Adaptive UXO Detection and Classification Systems

    DTIC Science & Technology

    2016-04-01

    probability of false alarm (Pfa), as well as Receiver Operating Characteristic (ROC) curve and confusion matrix characteristics. The results of these...techniques at a false alarm probability of Pfa = 1×10^-3. X̃ = g(X). In this case, the problem remains invariant to the group of transformations G = { g : g(X...and observed target responses as well as the probability of detection versus SNR for both detection techniques at Pfa = 1×10^-3. with N = 128 and M = 50

  6. Remote logo detection using angle-distance histograms

    NASA Astrophysics Data System (ADS)

    Youn, Sungwook; Ok, Jiheon; Baek, Sangwook; Woo, Seongyoun; Lee, Chulhee

    2016-05-01

    Among all the various computer vision applications, automatic logo recognition has drawn great interest from industry as well as various academic institutions. In this paper, we propose an angle-distance map, which we used to develop a robust logo detection algorithm. The proposed angle-distance histogram is invariant against scale and rotation. The proposed method first used shape information and color characteristics to find the candidate regions and then applied the angle-distance histogram. Experiments show that the proposed method detected logos of various sizes and orientations.
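
One way to build a scale- and rotation-invariant angle-distance histogram is sketched below. The centroid reference, the farthest-point orientation anchor, and the bin counts are assumptions for illustration, not the authors' exact construction.

```python
import numpy as np

def angle_distance_hist(points, n_angle=12, n_dist=8):
    """Joint histogram of point angles and distances about the centroid,
    normalized so the result is unchanged by scaling and rotation."""
    pts = np.asarray(points, float)
    v = pts - pts.mean(axis=0)
    d = np.hypot(v[:, 0], v[:, 1])
    ang = np.arctan2(v[:, 1], v[:, 0])
    ang = np.mod(ang - ang[np.argmax(d)], 2 * np.pi)  # farthest point anchors rotation
    d = d / d.mean()                                  # mean distance anchors scale
    hist, _, _ = np.histogram2d(ang, d, bins=[n_angle, n_dist],
                                range=[[0.0, 2 * np.pi], [0.0, d.max() + 1e-9]])
    return hist / hist.sum()

rng = np.random.default_rng(0)
shape = rng.random((50, 2))                 # stand-in for logo contour points
rot90 = np.array([[0.0, -1.0], [1.0, 0.0]])
transformed = 2.0 * shape @ rot90.T + np.array([5.0, -3.0])  # scaled, rotated, shifted
```

Because distances are normalized by their mean and angles are measured from a shape-intrinsic reference, the transformed point set produces the same histogram, which is the invariance property the abstract claims for matching logos at various sizes and orientations.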

  7. Using video recording to identify management errors in pediatric trauma resuscitation.

    PubMed

    Oakley, Ed; Stocker, Sergio; Staubli, Georg; Young, Simon

    2006-03-01

    To determine the ability of video recording to identify management errors in trauma resuscitation and to compare this method with medical record review. The resuscitation of children who presented to the emergency department of the Royal Children's Hospital between February 19, 2001, and August 18, 2002, for whom the trauma team was activated was video recorded. The tapes were analyzed, and management was compared with Advanced Trauma Life Support guidelines. Deviations from these guidelines were recorded as errors. Fifty video recordings were analyzed independently by 2 reviewers. Medical record review was undertaken for a cohort of the most seriously injured patients, and errors were identified. The errors detected with the 2 methods were compared. Ninety resuscitations were video recorded and analyzed. An average of 5.9 errors per resuscitation was identified with this method (range: 1-12 errors). Twenty-five children (28%) had an injury severity score of >11; there was an average of 2.16 errors per patient in this group. Only 10 (20%) of these errors were detected in the medical record review. Medical record review detected an additional 8 errors that were not evident on the video recordings. Concordance between independent reviewers was high, with 93% agreement. Video recording is more effective than medical record review in detecting management errors in pediatric trauma resuscitation. Management errors in pediatric trauma resuscitation are common and often involve basic resuscitation principles. Resuscitation of the most seriously injured children was associated with fewer errors. Video recording is a useful adjunct to trauma resuscitation auditing.

  8. Simultaneous message framing and error detection

    NASA Technical Reports Server (NTRS)

    Frey, A. H., Jr.

    1968-01-01

    Circuitry simultaneously inserts message framing information and detects noise errors in binary code data transmissions. Separate message groups are framed without requiring both framing bits and error-checking bits, and predetermined message sequence are separated from other message sequences without being hampered by intervening noise.

  9. Multi-bits error detection and fast recovery in RISC cores

    NASA Astrophysics Data System (ADS)

    Jing, Wang; Xing, Yang; Yuanfu, Zhao; Weigong, Zhang; Jiao, Shen; Keni, Qiu

    2015-11-01

    Particle-induced soft errors are a major threat to the reliability of microprocessors. Even worse, multi-bit upsets (MBUs) are increasingly common due to the rapidly shrinking feature sizes of ICs. Several architecture-level mechanisms have been proposed to protect microprocessors from soft errors, such as dual and triple modular redundancy (DMR and TMR). However, most of them are inefficient against the growing number of multi-bit errors or cannot balance critical-path delay, area, and power penalties. This paper proposes a novel architecture, self-recovery dual-pipeline (SRDP), to effectively provide soft error detection and recovery at low cost for general RISC structures. We focus on the following three aspects. First, an advanced DMR pipeline is devised to detect soft errors, especially MBUs. Second, SEU/MBU errors can be located by adding self-checking logic to the pipeline stage registers. Third, a recovery scheme is proposed with a recovery cost of 1 or 5 clock cycles. Our evaluation of a prototype implementation shows that the SRDP can detect up to 100% of particle-induced soft errors and recover from nearly 95% of them; the remaining 5% trigger a specific trap.

  10. A signature dissimilarity measure for trabecular bone texture in knee radiographs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woloszynski, T.; Podsiadlo, P.; Stachowiak, G. W.

    Purpose: The purpose of this study is to develop a dissimilarity measure for the classification of trabecular bone (TB) texture in knee radiographs. Problems associated with the traditional extraction and selection of texture features and with the invariance to imaging conditions such as image size, anisotropy, noise, blur, exposure, magnification, and projection angle were addressed. Methods: In the method developed, called a signature dissimilarity measure (SDM), a sum of earth mover's distances calculated for roughness and orientation signatures is used to quantify dissimilarities between textures. Scale-space theory was used to ensure scale and rotation invariance. The effects of image size, anisotropy, noise, and blur on the SDM developed were studied using computer generated fractal texture images. The invariance of the measure to image exposure, magnification, and projection angle was studied using x-ray images of human tibia head. For the studies, Mann-Whitney tests with significance level of 0.01 were used. A comparison study between the performances of an SDM based classification system and two other systems in the classification of Brodatz textures and the detection of knee osteoarthritis (OA) was conducted. The other systems are based on weighted neighbor distance using compound hierarchy of algorithms representing morphology (WND-CHARM) and local binary patterns (LBP). Results: Results obtained indicate that the SDM developed is invariant to image exposure (2.5-30 mA s), magnification (x1.00-x1.35), noise associated with film graininess and quantum mottle (<25%), blur generated by a sharp film screen, and image size (>64x64 pixels). However, the measure is sensitive to changes in projection angle (>5 deg.), image anisotropy (>30 deg.), and blur generated by a regular film screen. For the classification of Brodatz textures, the SDM based system produced comparable results to the LBP system. For the detection of knee OA, the SDM based system achieved 78.8% classification accuracy and outperformed the WND-CHARM system (64.2%). Conclusions: The SDM is well suited for the classification of TB texture images in knee OA detection and may be useful for the texture classification of medical images in general.

  11. Observer detection of image degradation caused by irreversible data compression processes

    NASA Astrophysics Data System (ADS)

    Chen, Ji; Flynn, Michael J.; Gross, Barry; Spizarny, David

    1991-05-01

    Irreversible data compression methods have been proposed to reduce the data storage and communication requirements of digital imaging systems. In general, the error produced by compression increases as an algorithm's compression ratio is increased. We have studied the relationship between compression ratios and the detection of induced error using radiologic observers. The nature of the errors was characterized by calculating the power spectrum of the difference image. In contrast with studies designed to test whether detected errors alter diagnostic decisions, this study was designed to test whether observers could detect the induced error. A paired-film observer study was designed to test whether induced errors were detected. The study was conducted with chest radiographs selected and ranked for subtle evidence of interstitial disease, pulmonary nodules, or pneumothoraces. Images were digitized at 86 microns (4K X 5K) and 2K X 2K regions were extracted. A full-frame discrete cosine transform method was used to compress images at ratios varying between 6:1 and 60:1. The decompressed images were reprinted next to the original images in a randomized order with a laser film printer. The use of a film digitizer and a film printer which can reproduce all of the contrast and detail in the original radiograph makes the results of this study insensitive to instrument performance and primarily dependent on radiographic image quality. The results of this study define conditions for which errors associated with irreversible compression cannot be detected by radiologic observers. The results indicate that an observer can detect the errors introduced by this compression algorithm for compression ratios of 10:1 (1.2 bits/pixel) or higher.
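
The relationship between compression ratio and induced error can be illustrated with a full-frame DCT sketch. Simple coefficient thresholding stands in for the study's coder, and the keep-fraction rule is an assumption; the point is only that the induced error grows with the compression ratio, as the abstract states.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_compress(img, ratio):
    """Zero all but the largest-magnitude 1/ratio of full-frame DCT coefficients."""
    c = dctn(img, norm='ortho')
    k = max(1, c.size // ratio)                       # coefficients to keep
    thresh = np.sort(np.abs(c), axis=None)[-k]
    c[np.abs(c) < thresh] = 0.0
    return idctn(c, norm='ortho')

rng = np.random.default_rng(0)
img = rng.random((64, 64))                            # stand-in for a radiograph region
err = {r: np.sqrt(np.mean((img - dct_compress(img, r))**2)) for r in (6, 60)}
# err maps each compression ratio to the RMS error it induces
```

By Parseval's theorem the RMS error equals the energy of the discarded coefficients, so the 60:1 reconstruction is strictly worse than the 6:1 one, mirroring the study's finding that detectability sets in at higher ratios.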

  12. Error detection and reduction in blood banking.

    PubMed

    Motschman, T L; Moore, S B

    1996-12-01

    Error management plays a major role in facility process improvement efforts. By detecting and reducing errors, quality and, therefore, patient care improve. It begins with a strong organizational foundation of management attitude with clear, consistent employee direction and appropriate physical facilities. Clearly defined critical processes, critical activities, and SOPs act as the framework for operations as well as active quality monitoring. To ensure that personnel can detect and report errors, they must be trained in both operational duties and error management practices. Use of simulated/intentional errors and incorporation of error detection into competency assessment keep employees practiced and confident, and diminish fear of the unknown. Personnel can clearly see that errors are indeed used as opportunities for process improvement and not for punishment. The facility must have a clearly defined and consistently used definition of reportable errors. Reportable errors should include those errors with potentially harmful outcomes as well as those errors that are "upstream," and thus further away from the outcome. A well-written error report consists of who, what, when, where, why/how, and follow-up to the error. Before correction can occur, an investigation to determine the underlying cause of the error should be undertaken. Obviously, the best corrective action is prevention. Correction can occur at five different levels; however, only three of these levels are directed at prevention. Prevention requires a method to collect and analyze data concerning errors. In the authors' facility, a functional error classification method and a quality-system-based classification have been useful. An active method to search for problems uncovers them further upstream, before they can have disastrous outcomes. In the continual quest for improving processes, an error management program is itself a process that needs improvement, and we must strive to always close the circle of quality assurance. Ultimately, the goal of better patient care will be the reward.

  13. Error management in blood establishments: results of eight years of experience (2003–2010) at the Croatian Institute of Transfusion Medicine

    PubMed Central

    Vuk, Tomislav; Barišić, Marijan; Očić, Tihomir; Mihaljević, Ivanka; Šarlija, Dorotea; Jukić, Irena

    2012-01-01

    Background. Continuous and efficient error management, including procedures from error detection to their resolution and prevention, is an important part of quality management in blood establishments. At the Croatian Institute of Transfusion Medicine (CITM), error management has been systematically performed since 2003. Materials and methods. Data derived from error management at the CITM during an 8-year period (2003–2010) formed the basis of this study. Throughout the study period, errors were reported to the Department of Quality Assurance. In addition to surveys and the necessary corrective activities, errors were analysed and classified according to the Medical Event Reporting System for Transfusion Medicine (MERS-TM). Results. During the study period, a total of 2,068 errors were recorded, including 1,778 (86.0%) in blood bank activities and 290 (14.0%) in blood transfusion services. As many as 1,744 (84.3%) errors were detected before issue of the product or service. Among the 324 errors identified upon release from the CITM, 163 (50.3%) errors were detected by customers and reported as complaints. In only five cases was an error detected after blood product transfusion; however, there were no harmful consequences for the patients. All errors were, therefore, evaluated as “near miss” and “no harm” events. Fifty-two (2.5%) errors were evaluated as high-risk events. With regard to blood bank activities, the highest proportion of errors occurred in the processes of labelling (27.1%) and blood collection (23.7%). With regard to blood transfusion services, errors related to blood product issuing prevailed (24.5%). Conclusion. This study shows that comprehensive management of errors, including near miss errors, can generate data on the functioning of transfusion services, which is a precondition for implementation of efficient corrective and preventive actions that will ensure further improvement of the quality and safety of transfusion treatment. PMID:22395352

  14. Transient Faults in Computer Systems

    NASA Technical Reports Server (NTRS)

    Masson, Gerald M.

    1993-01-01

    A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.
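
    The certification-trail idea (time and software redundancy) can be illustrated with the standard textbook example of sorting: the primary execution emits its answer plus a trail, and a simpler, independent certifier uses the trail to verify the output. This is a generic sketch of the technique, not the report's actual implementation.

```python
def sort_with_trail(data):
    """Primary execution: sort the data and also emit a certification trail
    (here, the permutation mapping output positions to input positions)."""
    trail = sorted(range(len(data)), key=lambda i: data[i])
    output = [data[i] for i in trail]
    return output, trail

def certify(data, output, trail):
    """Secondary execution: a simpler, independent check that uses the trail.
    A transient fault that corrupts the output (or the trail) is caught here."""
    if sorted(trail) != list(range(len(data))):        # trail is a permutation
        return False
    if output != [data[i] for i in trail]:             # output matches trail
        return False
    return all(output[k] <= output[k + 1]              # output is sorted
               for k in range(len(output) - 1))

data = [5, 3, 8, 1, 9, 2]
output, trail = sort_with_trail(data)
ok = certify(data, output, trail)

# Simulate a transient fault flipping one output element mid-run:
faulty = list(output)
faulty[2] = 42
caught = not certify(data, faulty, trail)
```

    Because the certifier checks the output against the input via the trail, coverage does not depend on a particular fault model: any fault that changes the final output fails certification, which mirrors the "provably complete coverage" property claimed above.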

  15. Confirmation of the Factor Structure and Measurement Invariance of the Children's Scale of Hostility and Aggression: Reactive/Proactive in Clinic-Referred Children With and Without Autism Spectrum Disorder.

    PubMed

    Farmer, Cristan A; Kaat, Aaron J; Mazurek, Micah O; Lainhart, Janet E; DeWitt, Mary Beth; Cook, Edwin H; Butter, Eric M; Aman, Michael G

    2016-02-01

    The measurement of aggression in its different forms (e.g., physical and verbal) and functions (e.g., impulsive and instrumental) is given little attention in subjects with developmental disabilities (DD). In this study, we confirm the factor structure of the Children's Scale for Hostility and Aggression: Reactive/Proactive (C-SHARP) and demonstrate measurement invariance (consistent performance across clinical groups) between clinic-referred groups with and without autism spectrum disorder (ASD). We also provide evidence of the construct validity of the C-SHARP. Caregivers provided C-SHARP, Child Behavior Checklist (CBCL), and Proactive/Reactive Rating Scale (PRRS) ratings for 644 children, adolescents, and young adults 2-21 years of age. Five types of measurement invariance were evaluated within a confirmatory factor analytic framework. Associations among the C-SHARP, CBCL, and PRRS were explored. The factor structure of the C-SHARP had a good fit to the data from both groups, and strict measurement invariance between ASD and non-ASD groups was demonstrated (i.e., equivalent structure, factor loadings, item intercepts and residuals, and latent variance/covariance between groups). The C-SHARP Problem Scale was more strongly associated with CBCL Externalizing than with CBCL Internalizing, supporting its construct validity. Subjects classified with the PRRS as both Reactive and Proactive had significantly higher C-SHARP Proactive Scores than those classified as Reactive only, who were rated significantly higher than those classified by the PRRS as Neither Reactive nor Proactive. A similar pattern was observed for the C-SHARP Reactive Score. This study provided evidence of the validity of the C-SHARP through confirmation of its factor structure and its relationship with more established scales. 
The demonstration of measurement invariance indicates that differences in C-SHARP factor scores reflect differences in the underlying construct rather than error or unmeasured/nuisance variables. These data suggest that the C-SHARP is useful for quantifying subtypes of aggressive behavior in children, adolescents, and young adults with DD.

  16. Cognitive Fusion Questionnaire-Body Image: Psychometric Properties and Its Incremental Power in the Prediction of Binge Eating Severity.

    PubMed

    Lucena-Santos, Paola; Trindade, Inês A; Oliveira, Margareth; Pinto-Gouveia, José

    2017-05-19

    Given the clinical usefulness of the CFQ-BI (Cognitive Fusion Questionnaire-Body Image; the only existing measure to assess the body-image-related cognitive fusion), the present study aimed to confirm its one-factor structure, to verify its measurement invariance between clinical and non-clinical samples, to analyze its internal consistency and sensitivity to detect differences between samples, as well as to explore the incremental and convergent validities of the CFQ-BI scores in Brazilian samples.  This was a cross-sectional study, which was conducted in clinical (women with overweight or obesity in treatment for weight loss) and non-clinical samples (women from the general population). The one-factor structure was confirmed showing factorial measurement invariance across clinical and non-clinical samples. The CFQ-BI scores presented an excellent internal consistency, were able to discriminate clinical and non-clinical samples, and were positively associated with binge eating severity, general cognitive fusion, and psychological inflexibility. Furthermore, body-image-related cognitive fusion scores (CFQ-BI) presented incremental validity over a general measure of cognitive fusion in the prediction of binge eating symptoms. This study demonstrated that CFQ-BI is a short scale with reliable and robust scores in Brazilian samples, presenting incremental and convergent validities, measurement invariance, and sensitivity to detect differences between clinical and non-clinical groups of women, enabling comparative studies between them.

  17. InSAR Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (QUAD)

    NASA Astrophysics Data System (ADS)

    Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.

    2018-04-01

    Unwrapping errors are common in InSAR processing and can seriously degrade the accuracy of monitoring results. In this paper, an automatic unwrapping-error correction method is established based on a gross-error detection technique, quasi-accurate detection (QUAD). The method identifies and corrects unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented. The method is then compared with the L1-norm method on simulated data. Results show that both methods can effectively suppress unwrapping errors when the proportion of unwrapping errors is low, and that the two methods can complement each other when the proportion is relatively high. Finally, the method is tested on real SAR data for phase unwrapping error correction. Results show that the new method corrects phase unwrapping errors successfully in practical applications.
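
    The defining property exploited by any unwrapping-error correction scheme is that such errors are integer multiples of 2π. The 1-D toy below detects and removes spurious whole-cycle jumps between neighbouring samples; it is a stand-in for the idea only, not the QUAD functional model, which operates on networks of 2-D interferograms.

```python
import math

TWO_PI = 2.0 * math.pi

def correct_unwrapping_errors(phase):
    """Remove neighbour-to-neighbour jumps that are close to an integer
    multiple of 2*pi, keeping the remaining smooth variation. A 1-D
    stand-in for correcting unwrapping errors in an interferogram."""
    corrected = [phase[0]]
    for k in range(1, len(phase)):
        d = phase[k] - phase[k - 1]
        cycles = round(d / TWO_PI)          # spurious whole-cycle jumps
        corrected.append(corrected[-1] + d - cycles * TWO_PI)
    return corrected

# Smooth "true" phase ramp; inject a 2*pi unwrapping error from index 6 on.
true_phase = [0.3 * k for k in range(12)]
with_error = [p + (TWO_PI if k >= 6 else 0.0)
              for k, p in enumerate(true_phase)]

fixed = correct_unwrapping_errors(with_error)
max_residual = max(abs(f - t) for f, t in zip(fixed, true_phase))
```

    The correction works here because the true phase varies by much less than π between samples, so any near-2π jump must be an unwrapping error; the same smoothness assumption underlies interferogram-level correction.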

  18. A new neural net approach to robot 3D perception and visuo-motor coordination

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan

    1992-01-01

    A novel neural network approach to robot hand-eye coordination is presented. The approach provides a true sense of visual error servoing, redundant arm configuration control for collision avoidance, and invariant visuo-motor learning under gazing control. A 3-D perception network is introduced to represent the robot internal 3-D metric space in which visual error servoing and arm configuration control are performed. The arm kinematic network performs the bidirectional association between 3-D space arm configurations and joint angles, and enforces the legitimate arm configurations. The arm kinematic net is structured by a radial-based competitive and cooperative network with hierarchical self-organizing learning. The main goal of the present work is to demonstrate that the neural net representation of the robot 3-D perception net serves as an important intermediate functional block connecting robot eyes and arms.

  19. Automated Mounting Bias Calibration for Airborne LIDAR System

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Jiang, W.; Jiang, S.

    2012-07-01

    Mounting bias is the major error source of Airborne LIDAR system. In this paper, an automated calibration method for estimating LIDAR system mounting parameters is introduced. LIDAR direct geo-referencing model is used to calculate systematic errors. Due to LIDAR footprints discretely sampled, the real corresponding laser points are hardly existence among different strips. The traditional corresponding point methodology does not seem to apply to LIDAR strip registration. We proposed a Virtual Corresponding Point Model to resolve the corresponding problem among discrete laser points. Each VCPM contains a corresponding point and three real laser footprints. Two rules are defined to calculate tie point coordinate from real laser footprints. The Scale Invariant Feature Transform (SIFT) is used to extract corresponding points in LIDAR strips, and the automatic flow of LIDAR system calibration based on VCPM is detailed described. The practical examples illustrate the feasibility and effectiveness of the proposed calibration method.

  20. Finite-time synchronization of stochastic coupled neural networks subject to Markovian switching and input saturation.

    PubMed

    Selvaraj, P; Sakthivel, R; Kwon, O M

    2018-06-07

    This paper addresses the problem of finite-time synchronization of stochastic coupled neural networks (SCNNs) subject to Markovian switching, mixed time delay, and actuator saturation. In addition, the coupling strengths of the SCNNs are characterized by mutually independent random variables. By utilizing a simple linear transformation, the problem of stochastic finite-time synchronization of SCNNs is converted into a mean-square finite-time stabilization problem for an error system. By choosing a suitable mode-dependent switched Lyapunov-Krasovskii functional, a new set of sufficient conditions is derived to guarantee the finite-time stability of the error system. Subsequently, with the help of an anti-windup control scheme, the actuator saturation risks can be mitigated. Moreover, the derived conditions help to optimize the estimation of the domain of attraction by enlarging the contractively invariant set. Furthermore, simulations are conducted to exhibit the efficiency of the proposed control scheme.

  1. Charged-pion cross sections and double-helicity asymmetries in polarized p + p collisions at √s = 200 GeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adare, A.; Aidala, C.; Ajitanand, N. N.

    2015-02-02

    We present midrapidity charged-pion invariant cross sections, the ratio of the π⁻ to π⁺ cross sections and the charge-separated double-spin asymmetries in polarized p+p collisions at √s = 200 GeV. While the cross section measurements are consistent within the errors of next-to-leading-order (NLO) perturbative quantum chromodynamics (pQCD) predictions, the same calculations overestimate the ratio of the charged-pion cross sections. This discrepancy arises from the cancellation of the substantial systematic errors associated with the NLO-pQCD predictions in the ratio and highlights the constraints these data will place on flavor-dependent pion fragmentation functions. The charge-separated pion asymmetries presented here sample an x range of ~0.03–0.16 and provide unique information on the sign of the gluon-helicity distribution.

  2. Charged-pion cross sections and double-helicity asymmetries in polarized p+p collisions at √s = 200 GeV

    NASA Astrophysics Data System (ADS)

    Adare, A.; Aidala, C.; Ajitanand, N. N.; Akiba, Y.; Akimoto, R.; Al-Ta'Ani, H.; Alexander, J.; Andrews, K. R.; Angerami, A.; Aoki, K.; Apadula, N.; Appelt, E.; Aramaki, Y.; Armendariz, R.; Aschenauer, E. C.; Atomssa, E. T.; Awes, T. C.; Azmoun, B.; Babintsev, V.; Bai, M.; Bannier, B.; Barish, K. N.; Bassalleck, B.; Basye, A. T.; Bathe, S.; Baublis, V.; Baumann, C.; Bazilevsky, A.; Belmont, R.; Ben-Benjamin, J.; Bennett, R.; Blau, D. S.; Bok, J. S.; Boyle, K.; Brooks, M. L.; Broxmeyer, D.; Buesching, H.; Bumazhnov, V.; Bunce, G.; Butsyk, S.; Campbell, S.; Castera, P.; Chen, C.-H.; Chi, C. Y.; Chiu, M.; Choi, I. J.; Choi, J. B.; Choudhury, R. K.; Christiansen, P.; Chujo, T.; Chvala, O.; Cianciolo, V.; Citron, Z.; Cole, B. A.; Conesa Del Valle, Z.; Connors, M.; Csanád, M.; Csörgő, T.; Dairaku, S.; Datta, A.; David, G.; Dayananda, M. K.; Denisov, A.; Deshpande, A.; Desmond, E. J.; Dharmawardane, K. V.; Dietzsch, O.; Dion, A.; Donadelli, M.; Drapier, O.; Drees, A.; Drees, K. A.; Durham, J. M.; Durum, A.; D'Orazio, L.; Efremenko, Y. V.; Engelmore, T.; Enokizono, A.; En'yo, H.; Esumi, S.; Fadem, B.; Fields, D. E.; Finger, M.; Finger, M.; Fleuret, F.; Fokin, S. L.; Frantz, J. E.; Franz, A.; Frawley, A. D.; Fukao, Y.; Fusayasu, T.; Gal, C.; Garishvili, I.; Giordano, F.; Glenn, A.; Gong, X.; Gonin, M.; Goto, Y.; Granier de Cassagnac, R.; Grau, N.; Greene, S. V.; Grosse Perdekamp, M.; Gunji, T.; Guo, L.; Gustafsson, H.-Å.; Haggerty, J. S.; Hahn, K. I.; Hamagaki, H.; Hamblen, J.; Han, R.; Hanks, J.; Harper, C.; Hashimoto, K.; Haslum, E.; Hayano, R.; He, X.; Hemmick, T. K.; Hester, T.; Hill, J. C.; Hollis, R. S.; Holzmann, W.; Homma, K.; Hong, B.; Horaguchi, T.; Hori, Y.; Hornback, D.; Huang, S.; Ichihara, T.; Ichimiya, R.; Iinuma, H.; Ikeda, Y.; Imai, K.; Inaba, M.; Iordanova, A.; Isenhower, D.; Ishihara, M.; Issah, M.; Ivanischev, D.; Iwanaga, Y.; Jacak, B. V.; Jia, J.; Jiang, X.; John, D.; Johnson, B. M.; Jones, T.; Joo, K. S.; Jouan, D.; Kamin, J.; Kaneti, S.; Kang, B. 
H.; Kang, J. H.; Kang, J. S.; Kapustinsky, J.; Karatsu, K.; Kasai, M.; Kawall, D.; Kazantsev, A. V.; Kempel, T.; Khanzadeev, A.; Kijima, K. M.; Kim, B. I.; Kim, D. J.; Kim, E.-J.; Kim, Y.-J.; Kim, Y. K.; Kinney, E.; Kiss, Á.; Kistenev, E.; Kleinjan, D.; Kline, P.; Kochenda, L.; Komkov, B.; Konno, M.; Koster, J.; Kotov, D.; Král, A.; Kunde, G. J.; Kurita, K.; Kurosawa, M.; Kwon, Y.; Kyle, G. S.; Lacey, R.; Lai, Y. S.; Lajoie, J. G.; Lebedev, A.; Lee, D. M.; Lee, J.; Lee, K. B.; Lee, K. S.; Lee, S. H.; Lee, S. R.; Leitch, M. J.; Leite, M. A. L.; Li, X.; Lim, S. H.; Linden Levy, L. A.; Liu, H.; Liu, M. X.; Love, B.; Lynch, D.; Maguire, C. F.; Makdisi, Y. I.; Manion, A.; Manko, V. I.; Mannel, E.; Mao, Y.; Masui, H.; McCumber, M.; McGaughey, P. L.; McGlinchey, D.; McKinney, C.; Means, N.; Mendoza, M.; Meredith, B.; Miake, Y.; Mibe, T.; Mignerey, A. C.; Miki, K.; Milov, A.; Mitchell, J. T.; Miyachi, Y.; Mohanty, A. K.; Moon, H. J.; Morino, Y.; Morreale, A.; Morrison, D. P.; Motschwiller, S.; Moukhanova, T. V.; Murakami, T.; Murata, J.; Nagamiya, S.; Nagle, J. L.; Naglis, M.; Nagy, M. I.; Nakagawa, I.; Nakamiya, Y.; Nakamura, K. R.; Nakamura, T.; Nakano, K.; Newby, J.; Nguyen, M.; Nihashi, M.; Nouicer, R.; Nyanin, A. S.; Oakley, C.; O'Brien, E.; Ogilvie, C. A.; Oka, M.; Okada, K.; Oskarsson, A.; Ouchida, M.; Ozawa, K.; Pak, R.; Pantuev, V.; Papavassiliou, V.; Park, B. H.; Park, I. H.; Park, S. K.; Pate, S. F.; Patel, L.; Pei, H.; Peng, J.-C.; Pereira, H.; Peressounko, D. Yu.; Petti, R.; Pinkenburg, C.; Pisani, R. P.; Proissl, M.; Purschke, M. L.; Qu, H.; Rak, J.; Ravinovich, I.; Read, K. F.; Reygers, K.; Riabov, V.; Riabov, Y.; Richardson, E.; Roach, D.; Roche, G.; Rolnick, S. D.; Rosati, M.; Rosendahl, S. S. E.; Rubin, J. G.; Sahlmueller, B.; Saito, N.; Sakaguchi, T.; Samsonov, V.; Sano, S.; Sarsour, M.; Sato, T.; Savastio, M.; Sawada, S.; Sedgwick, K.; Seidl, R.; Seto, R.; Sharma, D.; Shein, I.; Shibata, T.-A.; Shigaki, K.; Shim, H. 
H.; Shimomura, M.; Shoji, K.; Shukla, P.; Sickles, A.; Silva, C. L.; Silvermyr, D.; Silvestre, C.; Sim, K. S.; Singh, B. K.; Singh, C. P.; Singh, V.; Slunečka, M.; Sodre, T.; Soltz, R. A.; Sondheim, W. E.; Sorensen, S. P.; Sourikova, I. V.; Stankus, P. W.; Stenlund, E.; Stoll, S. P.; Sugitate, T.; Sukhanov, A.; Sun, J.; Sziklai, J.; Takagui, E. M.; Takahara, A.; Taketani, A.; Tanabe, R.; Tanaka, Y.; Taneja, S.; Tanida, K.; Tannenbaum, M. J.; Tarafdar, S.; Taranenko, A.; Tennant, E.; Themann, H.; Thomas, D.; Togawa, M.; Tomášek, L.; Tomášek, M.; Torii, H.; Towell, R. S.; Tserruya, I.; Tsuchimoto, Y.; Utsunomiya, K.; Vale, C.; van Hecke, H. W.; Vazquez-Zambrano, E.; Veicht, A.; Velkovska, J.; Vértesi, R.; Virius, M.; Vossen, A.; Vrba, V.; Vznuzdaev, E.; Wang, X. R.; Watanabe, D.; Watanabe, K.; Watanabe, Y.; Watanabe, Y. S.; Wei, F.; Wei, R.; Wessels, J.; White, S. N.; Winter, D.; Woody, C. L.; Wright, R. M.; Wysocki, M.; Yamaguchi, Y. L.; Yang, R.; Yanovich, A.; Ying, J.; Yokkaichi, S.; Yoo, J. S.; You, Z.; Young, G. R.; Younus, I.; Yushmanov, I. E.; Zajc, W. A.; Zelenski, A.; Zhou, S.; Phenix Collaboration

    2015-02-01

    We present midrapidity charged-pion invariant cross sections, the ratio of the π⁻ to π⁺ cross sections and the charge-separated double-spin asymmetries in polarized p+p collisions at √s = 200 GeV. While the cross section measurements are consistent within the errors of next-to-leading-order (NLO) perturbative quantum chromodynamics (pQCD) predictions, the same calculations overestimate the ratio of the charged-pion cross sections. This discrepancy arises from the cancellation of the substantial systematic errors associated with the NLO-pQCD predictions in the ratio and highlights the constraints these data will place on flavor-dependent pion fragmentation functions. The charge-separated pion asymmetries presented here sample an x range of ~0.03–0.16 and provide unique information on the sign of the gluon-helicity distribution.

  3. Syndromic surveillance for health information system failures: a feasibility study.

    PubMed

    Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico

    2013-05-01

    To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65-0.85. Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures.
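
    The alarm stage of such a surveillance system can be sketched with a simple Shewhart-style control chart over one monitored index (daily record counts). The baseline numbers, the three-sigma rule, and the simulated 35% data-loss day below are illustrative assumptions; the paper's system used time-series models of four indices.

```python
import statistics

def control_limits(history, sigmas=3.0):
    """Shewhart-style limits derived from an in-control history."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return mean - sigmas * sd, mean + sigmas * sd

def detect_anomalies(history, observed, sigmas=3.0):
    """Flag observation indices outside the control limits -- a stand-in
    for monitoring record counts, missing results, etc."""
    lo, hi = control_limits(history, sigmas)
    return [i for i, x in enumerate(observed) if x < lo or x > hi]

# Baseline: ~1000 laboratory records created per day (illustrative numbers).
baseline = [1003, 998, 1010, 995, 1002, 1001, 990, 1007, 999, 1005]

# New observations: day 2 simulates record-level data loss (~35% drop),
# the kind of HIT failure the surveillance system aims to catch early.
new_days = [1001, 996, 650, 1004]

alarms = detect_anomalies(baseline, new_days)
```

    As in the study, sensitivity depends on the error rate: a 35% drop falls far outside the limits and is flagged immediately, while a 1% drop would sit inside normal day-to-day variation and be detected less reliably.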

  4. An invariability-area relationship sheds new light on the spatial scaling of ecological stability.

    PubMed

    Wang, Shaopeng; Loreau, Michel; Arnoldi, Jean-Francois; Fang, Jingyun; Rahman, K Abd; Tao, Shengli; de Mazancourt, Claire

    2017-05-19

    The spatial scaling of stability is key to understanding ecological sustainability across scales and the sensitivity of ecosystems to habitat destruction. Here we propose the invariability-area relationship (IAR) as a novel approach to investigate the spatial scaling of stability. The shape and slope of IAR are largely determined by patterns of spatial synchrony across scales. When synchrony decays exponentially with distance, IARs exhibit three phases, characterized by steeper increases in invariability at both small and large scales. Such triphasic IARs are observed for primary productivity from plot to continental scales. When synchrony decays as a power law with distance, IARs are quasilinear on a log-log scale. Such quasilinear IARs are observed for North American bird biomass at both species and community levels. The IAR provides a quantitative tool to predict the effects of habitat loss on population and ecosystem stability and to detect regime shifts in spatial ecological systems, which are goals of relevance to conservation and policy.

  5. Effect of strong disorder on three-dimensional chiral topological insulators: Phase diagrams, maps of the bulk invariant, and existence of topological extended bulk states

    NASA Astrophysics Data System (ADS)

    Song, Juntao; Fine, Carolyn; Prodan, Emil

    2014-11-01

    The effect of strong disorder on chiral-symmetric three-dimensional lattice models is investigated via analytical and numerical methods. The phase diagrams of the models are computed using the noncommutative winding number, as functions of disorder strength and model's parameters. The localized/delocalized characteristic of the quantum states is probed with level statistics analysis. Our study reconfirms the accurate quantization of the noncommutative winding number in the presence of strong disorder, and its effectiveness as a numerical tool. Extended bulk states are detected above and below the Fermi level, which are observed to undergo the so-called "levitation and pair annihilation" process when the system is driven through a topological transition. This suggests that the bulk invariant is carried by these extended states, in stark contrast with the one-dimensional case where the extended states are completely absent and the bulk invariant is carried by the localized states.

  6. Is there any electrophysiological evidence for subliminal error processing?

    PubMed

    Shalgi, Shani; Deouell, Leon Y

    2013-08-29

    The role of error awareness in executive control and modification of behavior is not fully understood. In line with many recent studies showing that conscious awareness is unnecessary for numerous high-level processes such as strategic adjustments and decision making, it was suggested that error detection can also take place unconsciously. The Error Negativity (Ne) component, long established as a robust error-related component that differentiates between correct responses and errors, was a fine candidate to test this notion: if an Ne is elicited also by errors which are not consciously detected, it would imply a subliminal process involved in error monitoring that does not necessarily lead to conscious awareness of the error. Indeed, for the past decade, the repeated finding of a similar Ne for errors which became aware and errors that did not achieve awareness, compared to the smaller negativity elicited by correct responses (Correct Response Negativity; CRN), has lent the Ne the prestigious status of an index of subliminal error processing. However, there were several notable exceptions to these findings. The study in the focus of this review (Shalgi and Deouell, 2012) sheds new light on both types of previous results. We found that error detection as reflected by the Ne is correlated with subjective awareness: when awareness (or more importantly lack thereof) is more strictly determined using the wagering paradigm, no Ne is elicited without awareness. This result effectively resolves the issue of why there are many conflicting findings regarding the Ne and error awareness. The average Ne amplitude appears to be influenced by individual criteria for error reporting and therefore, studies containing different mixtures of participants who are more confident of their own performance or less confident, or paradigms that either encourage or don't encourage reporting low confidence errors will show different results. 
Based on this evidence, it is no longer possible to unquestioningly uphold the notion that the amplitude of the Ne is unrelated to subjective awareness, and therefore, that errors are detected without conscious awareness.

  7. Activity Tracking for Pilot Error Detection from Flight Data

    NASA Technical Reports Server (NTRS)

    Callantine, Todd J.; Ashford, Rose (Technical Monitor)

    2002-01-01

    This report presents an application of activity tracking for pilot error detection from flight data, and describes issues surrounding such an application. It first describes the Crew Activity Tracking System (CATS), in-flight data collected from the NASA Langley Boeing 757 Airborne Research Integrated Experiment System aircraft, and a model of B757 flight crew activities. It then presents an example of CATS detecting actual in-flight crew errors.

  8. Medical Image Tamper Detection Based on Passive Image Authentication.

    PubMed

    Ulutas, Guzin; Ustubioglu, Arda; Ustubioglu, Beste; Nabiyev, Vasif V; Ulutas, Mustafa

    2017-12-01

    Telemedicine has gained popularity in recent years. Medical images can be transferred over the Internet to enable the telediagnosis between medical staffs and to make the patient's history accessible to medical staff from anywhere. Therefore, integrity protection of the medical image is a serious concern due to the broadcast nature of the Internet. Some watermarking techniques are proposed to control the integrity of medical images. However, they require embedding of extra information (watermark) into image before transmission. It decreases visual quality of the medical image and can cause false diagnosis. The proposed method uses passive image authentication mechanism to detect the tampered regions on medical images. Structural texture information is obtained from the medical image by using local binary pattern rotation invariant (LBPROT) to make the keypoint extraction techniques more successful. Keypoints on the texture image are obtained with scale invariant feature transform (SIFT). Tampered regions are detected by the method by matching the keypoints. The method improves the keypoint-based passive image authentication mechanism (they do not detect tampering when the smooth region is used for covering an object) by using LBPROT before keypoint extraction because smooth regions also have texture information. Experimental results show that the method detects tampered regions on the medical images even if the forged image has undergone some attacks (Gaussian blurring/additive white Gaussian noise) or the forged regions are scaled/rotated before pasting.
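
    The rotation-invariant LBP step can be sketched for a single pixel: threshold the 8 neighbours against the centre to get a bit ring, then take the minimum code over all rotations of the ring, so that rotating the image does not change the texture code. This is a minimal stand-in for the LBPROT texture map, not the paper's full pipeline (which feeds the texture image into SIFT keypoint matching).

```python
def lbp_rotation_invariant(img, x, y):
    """Rotation-invariant local binary pattern for the pixel at (x, y):
    threshold the 8 neighbours against the centre, then take the minimal
    value over all cyclic rotations of the bit ring."""
    centre = img[x][y]
    # 8 neighbours in circular (clockwise) order: NW, N, NE, E, SE, S, SW, W.
    ring = [img[x - 1][y - 1], img[x - 1][y], img[x - 1][y + 1],
            img[x][y + 1], img[x + 1][y + 1], img[x + 1][y],
            img[x + 1][y - 1], img[x][y - 1]]
    bits = [1 if v >= centre else 0 for v in ring]
    codes = []
    for r in range(8):
        rotated = bits[r:] + bits[:r]
        codes.append(sum(b << i for i, b in enumerate(rotated)))
    return min(codes)

# A tiny patch and the same patch rotated by 90 degrees yield the same code,
# because an image rotation only cyclically shifts the neighbour ring.
patch = [[10, 10, 90],
         [10, 50, 90],
         [10, 10, 90]]
rot90 = [list(row) for row in zip(*patch[::-1])]

code_a = lbp_rotation_invariant(patch, 1, 1)
code_b = lbp_rotation_invariant(rot90, 1, 1)
```

    Applying this per pixel turns even smooth-looking regions into a texture image with local structure, which is why the paper computes LBPROT before keypoint extraction: SIFT then finds matchable keypoints where plain intensity would offer none.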

  9. Aspergillus and Penicillium identification using DNA sequences: Barcode or MLST?

    USDA-ARS?s Scientific Manuscript database

    Current methods in DNA technology can detect single nucleotide polymorphisms with measurable accuracy using several different approaches appropriate for different uses. If there are even single nucleotide differences that are invariant markers of the species, we can accomplish identification through...

  10. Shadow detection of moving objects based on multisource information in Internet of things

    NASA Astrophysics Data System (ADS)

    Ma, Zhen; Zhang, De-gan; Chen, Jie; Hou, Yue-xian

    2017-05-01

    Moving object detection is an important part of intelligent video surveillance in the Internet of Things, and detecting a moving target's shadow is an important step within it: the accuracy of shadow detection directly affects the object detection results. A review of existing shadow detection methods shows that no single feature yields accurate detection. We therefore present a new shadow detection method that combines colour information, optical invariance, and texture features. By comprehensively analysing the detection results from the three kinds of information, shadows are effectively identified. Experiments show good results when the advantages of the various methods are combined.
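
    The multi-cue idea can be sketched per pixel: a shadow darkens a surface while roughly preserving its colour ratios, whereas a moving object replaces the colour outright. The two cues and all thresholds below are illustrative assumptions (the texture cue is omitted for brevity), not the paper's actual decision rule.

```python
def is_shadow(bg, cur):
    """Classify a pixel as shadow by combining two cues: a moderate
    luminance drop relative to the background, and preserved colour
    (chromaticity) ratios. Thresholds are illustrative."""
    (rb, gb, bb), (rc, gc, bc) = bg, cur
    lum_b = 0.299 * rb + 0.587 * gb + 0.114 * bb
    lum_c = 0.299 * rc + 0.587 * gc + 0.114 * bc
    darker = 0.4 < lum_c / lum_b < 0.9          # cue 1: shadows dim, moderately
    eps = 1e-6
    ratio_shift = abs(rc / (gc + eps) - rb / (gb + eps))
    chroma_ok = ratio_shift < 0.1               # cue 2: colour ratios preserved
    return darker and chroma_ok

background = (120, 110, 100)
shadow_pixel = (72, 66, 60)     # uniformly dimmed -> classified as shadow
object_pixel = (200, 40, 40)    # darker but differently coloured -> object

a = is_shadow(background, shadow_pixel)
b = is_shadow(background, object_pixel)
```

    Note that the object pixel also passes the luminance cue alone; only the combination of cues separates it from a true shadow, which is the paper's central point about single-feature methods.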

  11. Key management and encryption under the bounded storage model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draelos, Timothy John; Neumann, William Douglas; Lanzone, Andrew J.

    2005-11-01

    There are several engineering obstacles that need to be solved before key management and encryption under the bounded storage model can be realized. One of the critical obstacles hindering its adoption is the construction of a scheme that achieves reliable communication in the event that timing synchronization errors occur. One of the main accomplishments of this project was the development of a new scheme that solves this problem. We show in general that there exist message encoding techniques under the bounded storage model that provide an arbitrarily small probability of transmission error. We compute the maximum capacity of this channel using the unsynchronized key-expansion as side-channel information at the decoder and provide tight lower bounds for a particular class of key-expansion functions that are pseudo-invariant to timing errors. Using our results in combination with the encryption scheme of Dziembowski et al. [11], we can construct a scheme that solves the timing synchronization error problem. In addition to this work, we conducted a detailed case study of current and future storage technologies. We analyzed the cost, capacity, and storage data rate of various technologies so that precise security parameters can be developed for bounded storage encryption schemes. This will provide an invaluable tool for developing these schemes in practice.

  12. Communication: Recovering the flat-plane condition in electronic structure theory at semi-local DFT cost

    NASA Astrophysics Data System (ADS)

    Bajaj, Akash; Janet, Jon Paul; Kulik, Heather J.

    2017-11-01

    The flat-plane condition is the union of two exact constraints in electronic structure theory: (i) energetic piecewise linearity with fractional electron removal or addition and (ii) invariant energetics with change in electron spin in a half filled orbital. Semi-local density functional theory (DFT) fails to recover the flat plane, exhibiting convex fractional charge errors (FCE) and concave fractional spin errors (FSE) that are related to delocalization and static correlation errors. We previously showed that DFT+U eliminates FCE but now demonstrate that, like other widely employed corrections (i.e., Hartree-Fock exchange), it worsens FSE. To find an alternative strategy, we examine the shape of semi-local DFT deviations from the exact flat plane and we find this shape to be remarkably consistent across ions and molecules. We introduce the judiciously modified DFT (jmDFT) approach, wherein corrections are constructed from few-parameter, low-order functional forms that fit the shape of semi-local DFT errors. We select one such physically intuitive form and incorporate it self-consistently to correct semi-local DFT. We demonstrate on model systems that jmDFT represents the first easy-to-implement, no-overhead approach to recovering the flat plane from semi-local DFT.
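
    The paper's strategy of fitting a few-parameter, low-order form to the shape of the semi-local error can be illustrated on a toy one-dimensional energy curve. The quadratic error term and all numbers below are invented for illustration; they are not the jmDFT functional form itself.

```python
import numpy as np

# Hypothetical illustration: the exact energy is piecewise linear in the
# fractional electron number n, while a semi-local-DFT-like curve deviates
# convexly. A one-parameter low-order form fitted to that deviation
# recovers the linearity.

def e_exact(n, e0=0.0, e1=-0.5):
    # linear interpolation between integer-electron energies
    return e0 + (e1 - e0) * n

def e_dft(n, curvature=0.3):
    # convex fractional-charge error added on top of the exact line
    return e_exact(n) + curvature * n * (1.0 - n)

n = np.linspace(0.0, 1.0, 21)
deviation = e_dft(n) - e_exact(n)

# fit the low-order form a*n*(1-n) to the deviation (least squares)
basis = n * (1.0 - n)
a = np.dot(basis, deviation) / np.dot(basis, basis)
e_corrected = e_dft(n) - a * basis

print(np.max(np.abs(e_corrected - e_exact(n))))  # ~0: linearity restored
```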

  13. TH-AB-202-02: Real-Time Verification and Error Detection for MLC Tracking Deliveries Using An Electronic Portal Imaging Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zwan, B J; Central Coast Cancer Centre, Gosford, NSW; Colvill, E

    2016-06-15

    Purpose: The added complexity of real-time adaptive multi-leaf collimator (MLC) tracking increases the likelihood of undetected MLC delivery errors. In this work we develop and test a system for real-time delivery verification and error detection for MLC tracking radiotherapy using an electronic portal imaging device (EPID). Methods: The delivery verification system relies on acquisition and real-time analysis of transit EPID image frames acquired at 8.41 fps. In-house software was developed to extract the MLC positions from each image frame. Three comparison metrics were used to verify the MLC positions in real-time: (1) field size, (2) field location and (3) field shape. The delivery verification system was tested for 8 VMAT MLC tracking deliveries (4 prostate and 4 lung) where real patient target motion was reproduced using a Hexamotion motion stage and a Calypso system. Sensitivity and detection delay were quantified for various types of MLC and system errors. Results: For both the prostate and lung test deliveries the MLC-defined field size was measured with an accuracy of 1.25 cm² (1 SD). The field location was measured with an accuracy of 0.6 mm and 0.8 mm (1 SD) for lung and prostate, respectively. Field location errors (i.e. tracking in the wrong direction) with a magnitude of 3 mm were detected within 0.4 s of occurrence in the X direction and 0.8 s in the Y direction. Systematic MLC gap errors were detected as small as 3 mm. The method was not found to be sensitive to random MLC errors and individual MLC calibration errors up to 5 mm. Conclusion: EPID imaging may be used for independent real-time verification of MLC trajectories during MLC tracking deliveries. Thresholds have been determined for error detection and the system has been shown to be sensitive to a range of delivery errors.
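
    A minimal sketch of the first two comparison metrics (field size and field location) on binary aperture images is given below; the helper names, pixel scale and tolerances are assumptions chosen to mirror the numbers in the abstract, not the authors' implementation.

```python
import numpy as np

def field_metrics(mask, pixel_mm=1.0):
    """Field size (area in cm^2) and field location (centroid in mm)
    from a binary MLC-aperture image. Hypothetical helper."""
    area_cm2 = mask.sum() * (pixel_mm / 10.0) ** 2
    ys, xs = np.nonzero(mask)
    centroid_mm = (xs.mean() * pixel_mm, ys.mean() * pixel_mm)
    return area_cm2, centroid_mm

def check_delivery(measured, planned, pixel_mm=1.0,
                   area_tol_cm2=1.25, loc_tol_mm=3.0):
    """Flag the frame as erroneous if either metric exceeds tolerance."""
    a_m, c_m = field_metrics(measured, pixel_mm)
    a_p, c_p = field_metrics(planned, pixel_mm)
    size_ok = abs(a_m - a_p) <= area_tol_cm2
    loc_ok = np.hypot(c_m[0] - c_p[0], c_m[1] - c_p[1]) <= loc_tol_mm
    return size_ok and loc_ok

planned = np.zeros((50, 50), dtype=bool)
planned[10:40, 10:40] = True            # 30x30 px aperture
shifted = np.roll(planned, 5, axis=1)   # 5 mm location error
print(check_delivery(planned, planned))  # passes
print(check_delivery(shifted, planned))  # fails the 3 mm location check
```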

  14. Prescribing Errors Involving Medication Dosage Forms

    PubMed Central

    Lesar, Timothy S

    2002-01-01

    CONTEXT Prescribing errors involving medication dose formulations have been reported to occur frequently in hospitals. No systematic evaluations of the characteristics of errors related to medication dosage formulation have been performed. OBJECTIVE To quantify the characteristics, frequency, and potential adverse patient effects of prescribing errors involving medication dosage forms. DESIGN Evaluation of all detected medication prescribing errors involving or related to medication dosage forms in a 631-bed tertiary care teaching hospital. MAIN OUTCOME MEASURES Type, frequency, and potential for adverse effects of prescribing errors involving or related to medication dosage forms. RESULTS A total of 1,115 clinically significant prescribing errors involving medication dosage forms were detected during the 60-month study period. The annual number of detected errors increased throughout the study period. Detailed analysis of the 402 errors detected during the last 16 months of the study demonstrated the most common errors to be: failure to specify controlled release formulation (total of 280 cases; 69.7%) both when prescribing using the brand name (148 cases; 36.8%) and when prescribing using the generic name (132 cases; 32.8%); and prescribing controlled delivery formulations to be administered per tube (48 cases; 11.9%). The potential for adverse patient outcome was rated as potentially “fatal or severe” in 3 cases (0.7%), and “serious” in 49 cases (12.2%). Errors most commonly involved cardiovascular agents (208 cases; 51.7%). CONCLUSIONS Hospitalized patients are at risk for adverse outcomes due to prescribing errors related to inappropriate use of medication dosage forms. This information should be considered in the development of strategies to prevent adverse patient outcomes resulting from such errors. PMID:12213138

  15. 3-Dimensional Scene Perception during Active Electrolocation in a Weakly Electric Pulse Fish

    PubMed Central

    von der Emde, Gerhard; Behr, Katharina; Bouton, Béatrice; Engelmann, Jacob; Fetz, Steffen; Folde, Caroline

    2010-01-01

    Weakly electric fish use active electrolocation for object detection and orientation in their environment even in complete darkness. The African mormyrid Gnathonemus petersii can detect object parameters, such as material, size, shape, and distance. Here, we tested whether individuals of this species can learn to identify 3-dimensional objects independently of the training conditions and independently of the object's position in space (rotation-invariance; size-constancy). Individual G. petersii were trained in a two-alternative forced-choice procedure to electrically discriminate between a 3-dimensional object (S+) and several alternative objects (S−). Fish were then tested whether they could identify the S+ among novel objects and whether single components of S+ were sufficient for recognition. Size-constancy was investigated by presenting the S+ together with a larger version at different distances. Rotation-invariance was tested by rotating S+ and/or S− in 3D. Our results show that electrolocating G. petersii could (1) recognize an object independently of the S− used during training. When only single components of a complex S+ were offered, recognition of S+ was more or less affected depending on which part was used. (2) Object-size was detected independently of object distance, i.e. fish showed size-constancy. (3) The majority of the fishes tested recognized their S+ even if it was rotated in space, i.e. these fishes showed rotation-invariance. (4) Object recognition was restricted to the near field around the fish and failed when objects were moved more than about 4 cm away from the animals. Our results indicate that even in complete darkness our G. petersii were capable of complex 3-dimensional scene perception using active electrolocation. PMID:20577635

  16. Detecting and Characterizing Semantic Inconsistencies in Ported Code

    NASA Technical Reports Server (NTRS)

    Ray, Baishakhi; Kim, Miryung; Person, Suzette J.; Rungta, Neha

    2013-01-01

    Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, and inconsistent identifier renamings. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.

  17. Statistical approaches to account for false-positive errors in environmental DNA samples.

    PubMed

    Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid

    2016-05-01

    Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies. © 2015 John Wiley & Sons Ltd.
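
    In the spirit of the site-occupancy models with false positives that the paper reviews, the sketch below writes the per-site likelihood with a false-positive detection probability and shows, on simulated data, the bias from ignoring it; the parameter values and the crude grid search are illustrative assumptions, not the authors' code.

```python
import math
import random

def site_lik(y, K, psi, p11, p10):
    """Likelihood of y detections in K surveys at one site under an
    occupancy model with false positives: detection probability p11 at
    occupied sites, false-positive probability p10 at unoccupied ones."""
    b = math.comb(K, y)
    occupied = b * p11 ** y * (1 - p11) ** (K - y)
    empty = b * p10 ** y * (1 - p10) ** (K - y)
    return psi * occupied + (1 - psi) * empty

def neg_loglik(counts, K, psi, p11, p10):
    return -sum(math.log(site_lik(y, K, psi, p11, p10)) for y in counts)

# simulate 2000 sites, 5 surveys each (true psi=0.4, p11=0.8, p10=0.1)
random.seed(1)
K, psi_t, p11_t, p10_t = 5, 0.4, 0.8, 0.1
counts = []
for _ in range(2000):
    occupied = random.random() < psi_t
    p = p11_t if occupied else p10_t
    counts.append(sum(random.random() < p for _ in range(K)))

# crude grid search over psi (detection rates held at truth), comparing
# the false-positive model against a naive model that assumes p10 = 0
grid = [i / 100 for i in range(1, 100)]
psi_fp = min(grid, key=lambda s: neg_loglik(counts, K, s, p11_t, p10_t))
psi_naive = min(grid, key=lambda s: neg_loglik(counts, K, s, p11_t, 0.0))
print(psi_fp, psi_naive)  # the naive fit overestimates occupancy
```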

  18. Hybrid online sensor error detection and functional redundancy for systems with time-varying parameters.

    PubMed

    Feng, Jianyuan; Turksoy, Kamuran; Samadi, Sediqeh; Hajizadeh, Iman; Littlejohn, Elizabeth; Cinar, Ali

    2017-12-01

    Supervision and control systems rely on signals from sensors to monitor the operation of a system and adjust manipulated variables to achieve the control objective. However, sensor performance is often limited by working conditions, and sensors may also be subject to interference from other devices. Many different types of sensor errors, such as outliers, missing values, drifts and corruption with noise, may occur during process operation. A hybrid online sensor error detection and functional redundancy system is developed to detect errors in online signals and to replace erroneous or missing values with model-based estimates. The proposed hybrid system relies on two techniques, an outlier-robust Kalman filter (ORKF) and a locally-weighted partial least squares (LW-PLS) regression model, which leverage the advantages of automatic measurement error elimination with ORKF and data-driven prediction with LW-PLS. The system includes a nominal angle analysis (NAA) method to distinguish between signal faults and large changes in sensor values caused by real dynamic changes in process operation. The performance of the system is illustrated with clinical data from continuous glucose monitoring (CGM) sensors worn by people with type 1 diabetes. More than 50,000 CGM sensor errors were added to the original CGM signals from 25 clinical experiments, and the performance of the error detection and functional redundancy algorithms was then analyzed. The results indicate that the proposed system can successfully detect most erroneous signals and substitute them with reasonable values estimated by the functional redundancy system.
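
    The sketch below is a deliberately simplified stand-in for the ORKF idea: a scalar Kalman filter that gates on the normalized innovation, treats gated samples as outliers, and keeps the model prediction instead of updating. The random-walk model and all thresholds are assumptions; this is not the authors' algorithm.

```python
import numpy as np

def gated_kalman(z, q=1.0, r=1.0, gate=3.0):
    """Scalar random-walk Kalman filter with innovation gating: a sample
    whose normalized innovation exceeds `gate` is treated as an outlier
    and the model prediction is kept instead of updating."""
    x, p = float(z[0]), 1.0
    estimates, flags = [], []
    for zk in z:
        p = p + q                         # predict (random-walk model)
        s = p + r                         # innovation variance
        nu = zk - x                       # innovation
        outlier = abs(nu) / np.sqrt(s) > gate
        if not outlier:
            k = p / s                     # update only with inliers
            x += k * nu
            p *= 1 - k
        estimates.append(x)
        flags.append(bool(outlier))
    return np.array(estimates), np.array(flags)

# CGM-like slow signal with two spike artifacts
t = np.arange(200)
clean = 120 + 20 * np.sin(t / 30.0)
noisy = clean + np.random.default_rng(0).normal(0.0, 1.0, t.size)
noisy[50] += 40.0
noisy[120] -= 40.0
est, flags = gated_kalman(noisy)
print(flags[50], flags[120])  # both spikes are flagged as outliers
```

    In the full system the rejected samples would then be replaced by the LW-PLS functional-redundancy estimate rather than the filter prediction.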

  19. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights

    PubMed Central

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks. PMID:26973503

  20. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights.

    PubMed

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.
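
    The decoder computation that the NEF normally performs numerically can be sketched as a least-squares problem over a heterogeneous population of rectified (type-I-like) tuning curves; the paper's point is that for some neuron models this optimization admits analytical solutions. The tuning-curve parameterization below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def tuning_curves(n, x):
    """A heterogeneous population of rectified rate curves; the specific
    parameter ranges here are illustrative assumptions."""
    gain = rng.uniform(0.5, 2.0, n)[:, None]
    thresh = rng.uniform(-1.0, 1.0, n)[:, None]
    sign = rng.choice([-1.0, 1.0], n)[:, None]
    return np.maximum(0.0, gain * (sign * x[None, :] - thresh))

def decode(target, x, n):
    """Least-squares decoders: the large matrix problem the NEF solves
    numerically, which the paper replaces with closed-form decoders."""
    A = tuning_curves(n, x)                 # n x m activity matrix
    d, *_ = np.linalg.lstsq(A.T, target, rcond=None)
    return A.T @ d

x = np.linspace(-1.0, 1.0, 200)
target = x ** 2
mses = [np.mean((decode(target, x, n) - target) ** 2)
        for n in (10, 100, 1000)]
print(mses)  # approximation error shrinks as the population grows
```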

  1. Measurement invariance of TGMD-3 in children with and without mental and behavioral disorders.

    PubMed

    Magistro, Daniele; Piumatti, Giovanni; Carlevaro, Fabio; Sherar, Lauren B; Esliger, Dale W; Bardaglio, Giulia; Magno, Francesca; Zecca, Massimiliano; Musella, Giovanni

    2018-05-24

    This study evaluated whether the Test of Gross Motor Development 3 (TGMD-3) is a reliable tool to compare children with and without mental and behavioral disorders across gross motor skill domains. A total of 1,075 children (aged 3-11 years), 98 with mental and behavioral disorders and 977 without (typically developing), were included in the analyses. The TGMD-3 evaluates fundamental gross motor skills of children across two domains: locomotor skills and ball skills. Two independent testers simultaneously observed children's performances (agreement over 95%). Each child completed one practice and then two formal trials. Scores were recorded only during the two formal trials. Multigroup confirmatory factor analysis tested the assumption of TGMD-3 measurement invariance across disability groups. According to the magnitude of changes in root mean square error of approximation and comparative fit index between nested models, the assumption of measurement invariance across groups was valid. Loadings of the manifest indicators on locomotor and ball skills were significant (p < .001) in both groups. Item response theory analysis showed good reliability across the full latent traits for both locomotor and ball skills. The present study confirmed the factorial structure of the TGMD-3 and demonstrated its feasibility across typically developing children and children with mental and behavioral disorders. These findings provide new opportunities for understanding the effect of specific intervention strategies on this population. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. Dimensionless, Scale Invariant, Edge Weight Metric for the Study of Complex Structural Networks

    PubMed Central

    Colon-Perez, Luis M.; Spindler, Caitlin; Goicochea, Shelby; Triplett, William; Parekh, Mansi; Montie, Eric; Carney, Paul R.; Price, Catherine; Mareci, Thomas H.

    2015-01-01

    High spatial and angular resolution diffusion weighted imaging (DWI) with network analysis provides a unique framework for the study of brain structure in vivo. DWI-derived brain connectivity patterns are best characterized with graph theory using an edge weight to quantify the strength of white matter connections between gray matter nodes. Here a dimensionless, scale-invariant edge weight is introduced to measure node connectivity. This edge weight metric provides reasonable and consistent values over any size scale (e.g. rodents to humans) used to quantify the strength of connection. Firstly, simulations were used to assess the effects of tractography seed point density and random errors in the estimated fiber orientations; with sufficient signal-to-noise ratio (SNR), edge weight estimates improve as the seed density increases. Secondly, to evaluate the application of the edge weight in the human brain, ten repeated measures of DWI in the same healthy human subject were analyzed. Mean edge weight values within the cingulum and corpus callosum were consistent and showed low variability. Thirdly, using excised rat brains to study the effects of spatial resolution, the weight of edges connecting major structures in the temporal lobe were used to characterize connectivity in this local network. The results indicate that with adequate resolution and SNR, connections between network nodes are characterized well by this edge weight metric. Therefore this new dimensionless, scale-invariant edge weight metric provides a robust measure of network connectivity that can be applied in any size regime. PMID:26173147

  3. Linear algebra of the permutation invariant Crow-Kimura model of prebiotic evolution.

    PubMed

    Bratus, Alexander S; Novozhilov, Artem S; Semenov, Yuri S

    2014-10-01

    A particular case of the famous quasispecies model - the Crow-Kimura model with a permutation invariant fitness landscape - is investigated. Using the fact that the mutation matrix in the case of a permutation invariant fitness landscape has a special tridiagonal form, a change of the basis is suggested such that in the new coordinates a number of analytical results can be obtained. In particular, using the eigenvectors of the mutation matrix as the new basis, we show that the quasispecies distribution approaches a binomial one and give simple estimates for the speed of convergence. Another consequence of the suggested approach is a parametric solution to the system of equations determining the quasispecies. Using this parametric solution we show that our approach leads to exact asymptotic results in some cases, which are not covered by the existing methods. In particular, we are able to present not only the limit behavior of the leading eigenvalue (mean population fitness), but also the exact formulas for the limit quasispecies eigenvector for special cases. For instance, this eigenvector has a geometric distribution in the case of the classical single peaked fitness landscape. On the biological side, we propose a mathematical definition, based on the closeness of the quasispecies to the binomial distribution, which can be used as an operational definition of the notorious error threshold. Using this definition, we suggest two approximate formulas to estimate the critical mutation rate after which the quasispecies delocalization occurs. Copyright © 2014 Elsevier Inc. All rights reserved.
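
    A minimal numerical sketch of the class-based (permutation-invariant) Crow-Kimura matrix is shown below; on a flat fitness landscape the leading eigenvector is exactly the binomial distribution, consistent with the binomial limit discussed in the abstract. Parameter values are illustrative.

```python
import numpy as np
from math import comb

def crow_kimura(N, mu, fitness):
    """Class-based Crow-Kimura matrix for a permutation-invariant
    landscape: diagonal fitness plus a tridiagonal mutation generator
    over classes k = 0..N (k = number of mutated sites)."""
    A = np.diag(fitness - mu * N)            # total mutation outflow
    for k in range(N):
        A[k + 1, k] = mu * (N - k)           # k -> k+1 (gain a mutation)
        A[k, k + 1] = mu * (k + 1)           # k+1 -> k (lose a mutation)
    return A

N, mu = 20, 0.05
# flat landscape: the quasispecies should be exactly binomial(N, 1/2)
A = crow_kimura(N, mu, np.zeros(N + 1))
vals, vecs = np.linalg.eig(A)
lead = np.argmax(vals.real)
q = np.abs(vecs[:, lead].real)
q /= q.sum()                                 # normalized quasispecies
binom = np.array([comb(N, k) for k in range(N + 1)], float) / 2.0 ** N
print(vals.real[lead], np.max(np.abs(q - binom)))  # fitness ~0, ~binomial
```

    With a nonzero fitness vector on the diagonal, the same leading eigenvalue gives the mean population fitness discussed in the abstract.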

  4. Dimensionless, Scale Invariant, Edge Weight Metric for the Study of Complex Structural Networks.

    PubMed

    Colon-Perez, Luis M; Spindler, Caitlin; Goicochea, Shelby; Triplett, William; Parekh, Mansi; Montie, Eric; Carney, Paul R; Price, Catherine; Mareci, Thomas H

    2015-01-01

    High spatial and angular resolution diffusion weighted imaging (DWI) with network analysis provides a unique framework for the study of brain structure in vivo. DWI-derived brain connectivity patterns are best characterized with graph theory using an edge weight to quantify the strength of white matter connections between gray matter nodes. Here a dimensionless, scale-invariant edge weight is introduced to measure node connectivity. This edge weight metric provides reasonable and consistent values over any size scale (e.g. rodents to humans) used to quantify the strength of connection. Firstly, simulations were used to assess the effects of tractography seed point density and random errors in the estimated fiber orientations; with sufficient signal-to-noise ratio (SNR), edge weight estimates improve as the seed density increases. Secondly, to evaluate the application of the edge weight in the human brain, ten repeated measures of DWI in the same healthy human subject were analyzed. Mean edge weight values within the cingulum and corpus callosum were consistent and showed low variability. Thirdly, using excised rat brains to study the effects of spatial resolution, the weight of edges connecting major structures in the temporal lobe were used to characterize connectivity in this local network. The results indicate that with adequate resolution and SNR, connections between network nodes are characterized well by this edge weight metric. Therefore this new dimensionless, scale-invariant edge weight metric provides a robust measure of network connectivity that can be applied in any size regime.

  5. Application of symbolic computations to the constitutive modeling of structural materials

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.; Tan, H. Q.; Dong, X.

    1990-01-01

    In applications involving elevated temperatures, the derivation of mathematical expressions (constitutive equations) describing the material behavior can be quite time consuming, involved and error-prone. Therefore intelligent application of symbolic systems to facilitate this tedious process can be of significant benefit. Presented here is a problem oriented, self-contained symbolic expert system, named SDICE, which is capable of efficiently deriving potential based constitutive models in analytical form. This package, running under DOE MACSYMA, has the following features: (1) potential differentiation (chain rule); (2) tensor computations (utilizing index notation), including both algebra and calculus; (3) efficient solution of sparse systems of equations; (4) automatic expression substitution and simplification; (5) back substitution of invariant and tensorial relations; (6) the ability to form the Jacobian and Hessian matrix; and (7) a relational data base. Limited aspects of invariant theory were also incorporated into SDICE due to the utilization of potentials as a starting point and the desire for these potentials to be frame invariant (objective). The uniqueness of SDICE resides in its ability to manipulate expressions in a general yet pre-defined order and simplify expressions so as to limit expression growth. Results are displayed, when applicable, utilizing index notation. SDICE was designed to aid and complement the human constitutive model developer. A number of examples are utilized to illustrate the various features contained within SDICE. It is expected that this symbolic package can and will provide a significant incentive to the development of new constitutive theories.

  6. Coding for reliable satellite communications

    NASA Technical Reports Server (NTRS)

    Gaarder, N. T.; Lin, S.

    1986-01-01

    This research project was set up to study various kinds of coding techniques for error control in satellite and space communications for NASA Goddard Space Flight Center. During the project period, researchers investigated the following areas: (1) decoding of Reed-Solomon codes in terms of dual basis; (2) concatenated and cascaded error control coding schemes for satellite and space communications; (3) use of hybrid coding schemes (error correction and detection incorporated with retransmission) to improve system reliability and throughput in satellite communications; (4) good codes for simultaneous error correction and error detection, and (5) error control techniques for ring and star networks.
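
    Item (3), error detection combined with retransmission, can be sketched with a toy CRC-protected frame and a stop-and-wait resend loop; the CRC-16 variant and the channel model below are illustrative assumptions, not the coding schemes studied in the project.

```python
def crc16(data: bytes, poly=0x1021, init=0xFFFF) -> int:
    """Bitwise CRC-16 checksum (CCITT-style polynomial)."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 \
                else (crc << 1) & 0xFFFF
    return crc

def send_with_arq(frame: bytes, channel, max_tries=4):
    """Hybrid scheme sketch: append a CRC for error detection and
    retransmit when the receiver's check fails."""
    for attempt in range(1, max_tries + 1):
        received = channel(frame + crc16(frame).to_bytes(2, "big"))
        payload = received[:-2]
        check = int.from_bytes(received[-2:], "big")
        if crc16(payload) == check:
            return payload, attempt
    raise RuntimeError("frame not delivered")

# channel that corrupts only the first transmission
state = {"calls": 0}
def flaky(data):
    state["calls"] += 1
    if state["calls"] == 1:
        data = bytes([data[0] ^ 0x01]) + data[1:]   # flip one bit
    return data

payload, attempt = send_with_arq(b"telemetry", flaky)
print(payload, attempt)  # delivered on the second attempt
```

    A forward-error-correcting inner code (e.g. Reed-Solomon) would sit beneath this detect-and-retransmit layer in the concatenated schemes the project describes.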

  7. Design and scheduling for periodic concurrent error detection and recovery in processor arrays

    NASA Technical Reports Server (NTRS)

    Wang, Yi-Min; Chung, Pi-Yu; Fuchs, W. Kent

    1992-01-01

    Periodic application of time-redundant error checking provides the trade-off between error detection latency and performance degradation. The goal is to achieve high error coverage while satisfying performance requirements. We derive the optimal scheduling of checking patterns in order to uniformly distribute the available checking capability and maximize the error coverage. Synchronous buffering designs using data forwarding and dynamic reconfiguration are described. Efficient single-cycle diagnosis is implemented by error pattern analysis and direct-mapped recovery cache. A rollback recovery scheme using start-up control for local recovery is also presented.

  8. Status report on speech research. A report on the status and progress of studies on the nature of speech, instrumentation for its investigation, and practical applications

    NASA Astrophysics Data System (ADS)

    Liberman, A. M.

    1984-08-01

    This report (1 January-30 June) is one of a regular series on the status and progress of studies on the nature of speech, instrumentation for its investigation, and practical applications. Manuscripts cover the following topics: Sources of variability in early speech development; Invariance: Functional or descriptive?; Brief comments on invariance in phonetic perception; Phonetic category boundaries are flexible; On categorizing aphasic speech errors; Universal and language particular aspects of vowel-to-vowel coarticulation; Functionally specific articulatory cooperation following jaw perturbations during speech: Evidence for coordinative structures; Formant integration and the perception of nasal vowel height; Relative power of cues: F0 shifts vs. voice timing; Laryngeal management at utterance-internal word boundary in American English; Closure duration and release burst amplitude cues to stop consonant manner and place of articulation; Effects of temporal stimulus properties on perception of the (sl)-(spl) distinction; The physics of controlled conditions: A reverie about locomotion; On the perception of intonation from sinusoidal sentences; Speech Perception; Speech Articulation; Motor Control; Speech Development.

  9. Contact-free palm-vein recognition based on local invariant features.

    PubMed

    Kang, Wenxiong; Liu, Yang; Wu, Qiuxia; Yue, Xishun

    2014-01-01

    Contact-free palm-vein recognition is one of the most challenging and promising areas in hand biometrics. In view of the existing problems in contact-free palm-vein imaging, including projection transformation, uneven illumination and difficulty in extracting exact ROIs, this paper presents a novel recognition approach for contact-free palm-vein recognition that performs feature extraction and matching on all vein textures distributed over the palm surface, including finger veins and palm veins, to minimize the loss of feature information. First, a hierarchical enhancement algorithm, which combines a DOG filter and histogram equalization, is adopted to alleviate uneven illumination and to highlight vein textures. Second, RootSIFT, a more stable local invariant feature extraction method in comparison to SIFT, is adopted to overcome the projection transformation in contact-free mode. Subsequently, a novel hierarchical mismatching removal algorithm based on neighborhood searching and LBP histograms is adopted to improve the accuracy of feature matching. Finally, we rigorously evaluated the proposed approach using two different databases and obtained 0.996% and 3.112% Equal Error Rates (EERs), respectively, which demonstrate the effectiveness of the proposed approach.
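
    The RootSIFT step mentioned above is a small, well-known transform: L1-normalize each SIFT descriptor, then take the element-wise square root, so that Euclidean comparison of the transformed vectors corresponds to the Hellinger kernel on the originals. A sketch (the tolerance constant is an illustrative choice):

```python
import numpy as np

def root_sift(desc, eps=1e-12):
    """Convert SIFT descriptors to RootSIFT: L1-normalize, then take the
    element-wise square root."""
    desc = np.asarray(desc, dtype=float)
    desc = desc / (np.abs(desc).sum(axis=-1, keepdims=True) + eps)
    return np.sqrt(desc)

rng = np.random.default_rng(0)
a, b = rng.uniform(0, 1, 128), rng.uniform(0, 1, 128)
ra, rb = root_sift(a), root_sift(b)
# dot product of RootSIFT vectors == Hellinger kernel of the
# L1-normalized inputs
hellinger = np.sum(np.sqrt((a / a.sum()) * (b / b.sum())))
print(abs(ra @ rb - hellinger))  # ~0
```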

  10. A QR Code Based Zero-Watermarking Scheme for Authentication of Medical Images in Teleradiology Cloud

    PubMed Central

    Seenivasagam, V.; Velumani, R.

    2013-01-01

    Healthcare institutions adopt cloud-based archiving of medical images and patient records to share them efficiently. Controlled access to these records and authentication of images must be enforced to mitigate fraudulent activities and medical errors. This paper presents a zero-watermarking scheme implemented in the composite Contourlet Transform (CT)-Singular Value Decomposition (SVD) domain for unambiguous authentication of medical images. Further, a framework is proposed for accessing patient records based on the watermarking scheme. The patient identification details and a link to patient data encoded into a Quick Response (QR) code serve as the watermark. In the proposed scheme, the medical image is not subjected to degradations due to watermarking. Patient authentication and authorized access to patient data are realized by combining a Secret Share with the Master Share constructed from invariant features of the medical image. Hu's invariant image moments are exploited in creating the Master Share. The proposed system is evaluated with Checkmark software and is found to be robust to both geometric and non-geometric attacks. PMID:23970943
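
    Hu's invariant moments, from which the Master Share is built, are computed from normalized central moments. The sketch below implements the first two of the seven invariants and checks their invariance under a 90-degree rotation; it is an illustration, not the paper's Master Share construction.

```python
import numpy as np

def hu_first_two(img):
    """First two of Hu's seven moment invariants, computed from
    normalized central moments of a grayscale image."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def eta(p, q):
        # normalized central moment of order (p, q)
        mu = ((x - xc) ** p * (y - yc) ** q * img).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2

rng = np.random.default_rng(0)
img = rng.uniform(0, 1, (32, 32))
h = hu_first_two(img)
h_rot = hu_first_two(np.rot90(img))      # 90-degree rotation
print(abs(h[0] - h_rot[0]), abs(h[1] - h_rot[1]))  # both ~0
```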

  11. A direct force model for Galilean invariant lattice Boltzmann simulation of fluid-particle flows

    NASA Astrophysics Data System (ADS)

    Tao, Shi; He, Qing; Chen, Baiman; Yang, Xiaoping; Huang, Simin

    The lattice Boltzmann method (LBM) has been widely used in the simulation of particulate flows involving complex moving boundaries. Due to the kinetic background of LBM, the bounce-back (BB) rule and the momentum exchange (ME) method can be easily applied to the solid boundary treatment and the evaluation of fluid-solid interaction force, respectively. However, recently it has been found that both the BB and ME schemes may violate the principle of Galilean invariance (GI). Some modified BB and ME methods have been proposed to reduce the GI error. But these remedies have been recognized subsequently to be inconsistent with Newton’s Third Law. Therefore, contrary to those corrections based on the BB and ME methods, a unified iterative approach is adopted to handle the solid boundary in the present study. Furthermore, a direct force (DF) scheme is proposed to evaluate the fluid-particle interaction force. The methods preserve the efficiency of the BB and ME schemes, and the performance on the accuracy and GI is verified and validated in the test cases of particulate flows with freely moving particles.

  12. A QR code based zero-watermarking scheme for authentication of medical images in teleradiology cloud.

    PubMed

    Seenivasagam, V; Velumani, R

    2013-01-01

    Healthcare institutions adopt cloud-based archiving of medical images and patient records to share them efficiently. Controlled access to these records and authentication of images must be enforced to mitigate fraudulent activities and medical errors. This paper presents a zero-watermarking scheme implemented in the composite Contourlet Transform (CT)-Singular Value Decomposition (SVD) domain for unambiguous authentication of medical images. Further, a framework is proposed for accessing patient records based on the watermarking scheme. The patient identification details and a link to patient data encoded into a Quick Response (QR) code serve as the watermark. In the proposed scheme, the medical image is not subjected to degradations due to watermarking. Patient authentication and authorized access to patient data are realized by combining a Secret Share with the Master Share constructed from invariant features of the medical image. Hu's invariant image moments are exploited in creating the Master Share. The proposed system is evaluated with Checkmark software and is found to be robust to both geometric and non-geometric attacks.
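    The translation invariance underlying Hu's image moments, used above to build the Master Share, can be sketched in a few lines. The image data, moment order, and helper names below are illustrative assumptions, not details from the paper; only the first two Hu invariants are shown.

```python
# Sketch: first two Hu moment invariants of a 2D grayscale grid, and a check
# that they do not change under translation. Toy data, not the paper's code.

def raw_moment(img, p, q):
    """Raw image moment M_pq over a list-of-rows grayscale image."""
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(img)
               for x, v in enumerate(row))

def central_moment(img, p, q):
    """Central moment mu_pq, taken about the image centroid."""
    m00 = raw_moment(img, 0, 0)
    cx = raw_moment(img, 1, 0) / m00
    cy = raw_moment(img, 0, 1) / m00
    return sum(((x - cx) ** p) * ((y - cy) ** q) * v
               for y, row in enumerate(img)
               for x, v in enumerate(row))

def hu_first_two(img):
    """First two Hu invariants from normalized central moments eta_pq."""
    m00 = raw_moment(img, 0, 0)
    def eta(p, q):
        return central_moment(img, p, q) / (m00 ** (1 + (p + q) / 2))
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2

def shift(img, dx, dy, size):
    """Place img into a size x size zero canvas, offset by (dx, dy)."""
    out = [[0] * size for _ in range(size)]
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            out[y + dy][x + dx] = v
    return out
```

    Because the central moments are taken about the centroid, shifting the image leaves h1 and h2 unchanged, which is what makes them usable as invariant features for the Master Share.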

  13. Contact-Free Palm-Vein Recognition Based on Local Invariant Features

    PubMed Central

    Kang, Wenxiong; Liu, Yang; Wu, Qiuxia; Yue, Xishun

    2014-01-01

    Contact-free palm-vein recognition is one of the most challenging and promising areas in hand biometrics. In view of the existing problems in contact-free palm-vein imaging, including projection transformation, uneven illumination and difficulty in extracting exact ROIs, this paper presents a novel recognition approach for contact-free palm-vein recognition that performs feature extraction and matching on all vein textures distributed over the palm surface, including finger veins and palm veins, to minimize the loss of feature information. First, a hierarchical enhancement algorithm, which combines a DOG filter and histogram equalization, is adopted to alleviate uneven illumination and to highlight vein textures. Second, RootSIFT, a more stable local invariant feature extraction method in comparison to SIFT, is adopted to overcome the projection transformation in contact-free mode. Subsequently, a novel hierarchical mismatching removal algorithm based on neighborhood searching and LBP histograms is adopted to improve the accuracy of feature matching. Finally, we rigorously evaluated the proposed approach using two different databases and obtained 0.996% and 3.112% Equal Error Rates (EERs), respectively, which demonstrate the effectiveness of the proposed approach. PMID:24866176
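    The RootSIFT mapping adopted above is a small, well-known transform: L1-normalize a SIFT descriptor and take the element-wise square root, so that Euclidean distance between mapped vectors corresponds to the Hellinger kernel on the originals. A minimal sketch, with made-up descriptor values (the function name and epsilon guard are assumptions):

```python
import math

# Sketch of the RootSIFT transform: L1-normalize, then element-wise sqrt.
# SIFT descriptor entries are non-negative, so the square root is safe.
def root_sift(descriptor, eps=1e-12):
    s = sum(descriptor) + eps  # L1 norm, guarded against all-zero input
    return [math.sqrt(v / s) for v in descriptor]
```

    A useful side effect: the mapped vector has (approximately) unit L2 norm, so standard Euclidean matching can be used unchanged after the transform.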

  14. An application of the LC-LSTM framework to the self-esteem instability case.

    PubMed

    Alessandri, Guido; Vecchione, Michele; Donnellan, Brent M; Tisak, John

    2013-10-01

    The present research evaluates the stability of self-esteem as assessed by a daily version of the Rosenberg (Society and the adolescent self-image, Princeton University Press, Princeton, 1965) general self-esteem scale (RGSE). The scale was administered to 391 undergraduates for five consecutive days. The longitudinal data were analyzed using the integrated LC-LSTM framework that allowed us to evaluate: (1) the measurement invariance of the RGSE, (2) its stability and change across the 5-day assessment period, (3) the amount of variance attributable to stable and transitory latent factors, and (4) the criterion-related validity of these factors. Results provided evidence for measurement invariance, mean-level stability, and rank-order stability of daily self-esteem. Latent state-trait analyses revealed that variances in scores of the RGSE can be decomposed into six components: stable self-esteem (40 %), ephemeral (or temporal-state) variance (36 %), stable negative method variance (9 %), stable positive method variance (4 %), specific variance (1 %) and random error variance (10 %). Moreover, latent factors associated with daily self-esteem were associated with measures of depression, implicit self-esteem, and grade point average.

  15. Two-dimensional shape recognition using oriented-polar representation

    NASA Astrophysics Data System (ADS)

    Hu, Neng-Chung; Yu, Kuo-Kan; Hsu, Yung-Li

    1997-10-01

    To achieve position, scale, and rotation invariance (PSRI) in object recognition, we utilize PSRI properties of images obtained from objects, for example, the centroid of the image. The position of the centroid relative to the boundary of the image is invariant under rotation, scaling, and translation of the image. To obtain the information of the image, we use a technique similar to the Radon transform, called the oriented-polar representation of a 2D image. In this representation, two specific points, the centroid and the weighted mean point, are selected to form an initial ray; the image is then sampled with N angularly equispaced rays departing from this initial ray. Each ray contains a number of intersections and the distance information obtained from the centroid to the intersections. The shape recognition algorithm is based on the least total error of these two items of information. Together with simple noise removal and a typical backpropagation neural network, this algorithm is simple, yet PSRI is achieved with a high recognition rate.

  16. Neurometaplasticity: Glucoallostasis control of plasticity of the neural networks of error commission, detection, and correction modulates neuroplasticity to influence task precision

    NASA Astrophysics Data System (ADS)

    Welcome, Menizibeya O.; Dane, Şenol; Mastorakis, Nikos E.; Pereverzev, Vladimir A.

    2017-12-01

    The term "metaplasticity" is a recent one, meaning plasticity of synaptic plasticity. Correspondingly, neurometaplasticity simply means plasticity of neuroplasticity, indicating that a previous plastic event determines the current plasticity of neurons. Emerging studies suggest that neurometaplasticity underlies many neural activities and neurobehavioral disorders. In our previous work, we indicated that glucoallostasis is essential for the control of plasticity of the neural networks that control error commission, detection, and correction. Here we review recent works which suggest that task precision depends on the modulatory effects of neuroplasticity on the neural networks of error commission, detection, and correction. Furthermore, we discuss neurometaplasticity and its role in error commission, detection, and correction.

  17. On-orbit observations of single event upset in Harris HM-6508 1K RAMs, reissue A

    NASA Astrophysics Data System (ADS)

    Blake, J. B.; Mandel, R.

    1987-02-01

    The Harris HM-6508 1K x 1 RAMs are part of a subsystem of a satellite in a low, polar orbit. The memory module, used in the subsystem containing the RAMs, consists of three printed circuit cards, with each card containing eight 2K byte memory hybrids, for a total of 48K bytes. Each memory hybrid contains 16 HM-6508 RAM chips. On a regular basis, all but 256 bytes of the 48K bytes are examined for bit errors. Two different techniques were used for detecting bit errors. The first technique, a memory check sum, was capable of automatically detecting all single-bit and some double-bit errors occurring within a page of memory; a memory page consists of 256 bytes. Memory check sum tests are performed approximately every 90 minutes. To detect a multiple error, or to determine the exact location of a bit error within a page, the entire contents of the memory are dumped and compared to the load file. Memory dumps are normally performed once a month, or immediately after the check sum routine detects an error. Once the exact location of the error is found, the correct value is reloaded into memory. After the memory is reloaded, the contents of the memory location in question are verified in order to determine whether the error was a soft error generated by an SEU or a hard error generated by a part failure or cosmic-ray induced latchup.
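    The two-stage scan described above (cheap per-page checksums, then a full dump-and-compare to pinpoint the bit) can be sketched as follows. The page size matches the record; the data, function names, and 8-bit additive checksum are illustrative assumptions. Note that a single flipped bit always changes an 8-bit additive checksum, consistent with the record's claim of detecting all single-bit errors, while some double-bit errors can cancel.

```python
# Sketch: page-level checksum scan plus dump-and-compare localization.
PAGE = 256

def checksum(page):
    """8-bit additive checksum of one page (illustrative choice)."""
    return sum(page) & 0xFF

def scan(memory, stored):
    """Return indices of pages whose recomputed checksum mismatches."""
    bad = []
    for i in range(0, len(memory), PAGE):
        if checksum(memory[i:i + PAGE]) != stored[i // PAGE]:
            bad.append(i // PAGE)
    return bad

def locate(memory, load_file):
    """Dump-and-compare: exact addresses whose contents changed."""
    return [a for a, (m, f) in enumerate(zip(memory, load_file))
            if m != f]
```

    In use, `scan` runs on the 90-minute cadence and `locate` only after a mismatch, mirroring the cheap-check/expensive-dump split in the record.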

  18. Mathematics in medicine: tumor detection, radiation dosimetry, and simulation in psychotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bellman, R.; Kashef, B.; Smith, C.P.

    1975-05-01

    Work done in the application of mathematics to medicine over the last 20 years is briefly reviewed. Scan-rescan processes, radiation dosimetry, and medical interviewing are discussed. The first uses dynamic programming, the second invariant imbedding, and the third simulation. (ACR)

  19. Relationship auditing of the FMA ontology

    PubMed Central

    Gu, Huanying (Helen); Wei, Duo; Mejino, Jose L.V.; Elhanan, Gai

    2010-01-01

    The Foundational Model of Anatomy (FMA) ontology is a domain reference ontology based on a disciplined modeling approach. Due to its large size, semantic complexity and manual data entry process, errors and inconsistencies are unavoidable and might remain within the FMA structure without detection. In this paper, we present computable methods to highlight candidate concepts for various relationship assignment errors. The process starts with locating structures formed by transitive structural relationships (part_of, tributary_of, branch_of) and examining their assignments in the context of the IS-A hierarchy. The algorithms were designed to detect five major categories of possible incorrect relationship assignments: circular, mutually exclusive, redundant, inconsistent, and missed entries. A domain expert reviewed samples of these presumptive errors to confirm the findings. Seven thousand and fifty-two presumptive errors were detected, the largest proportion related to part_of relationship assignments. The results highlight the fact that errors are unavoidable in complex ontologies and that well designed algorithms can help domain experts focus on concepts with a high likelihood of errors and maximize their effort to ensure consistency and reliability. In the future, similar methods might be integrated with data entry processes to offer real-time error detection. PMID:19475727
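    The first category above, circular relationship assignments, amounts to cycle detection in the directed graph induced by a transitive relationship such as part_of. A minimal sketch under that reading (the toy edges are hypothetical, not FMA content):

```python
# Sketch: flag every concept that can reach itself through part_of edges,
# i.e. lies on a circular relationship assignment.
def find_cycle_members(edges):
    """edges: (child, parent) pairs; returns nodes on a part_of cycle."""
    graph = {}
    for child, parent in edges:
        graph.setdefault(child, []).append(parent)

    def reaches(start):
        seen, stack = set(), list(graph.get(start, []))
        while stack:
            n = stack.pop()
            if n == start:
                return True
            if n not in seen:
                seen.add(n)
                stack.extend(graph.get(n, []))
        return False

    return {n for n in graph if reaches(n)}
```

    The same traversal skeleton extends to the other audit categories (e.g. redundancy checks compare direct edges against transitively implied ones), which is presumably why the authors group them under one relationship-auditing process.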

  20. The gravity field model IGGT_R1 based on the second invariant of the GOCE gravitational gradient tensor

    NASA Astrophysics Data System (ADS)

    Lu, Biao; Luo, Zhicai; Zhong, Bo; Zhou, Hao; Flechtner, Frank; Förste, Christoph; Barthelmes, Franz; Zhou, Rui

    2017-11-01

    Based on tensor theory, the three invariants of the gravitational gradient tensor (IGGT) are independent of the gradiometer reference frame (GRF). Compared to traditional methods for calculating gravity field models based on gravity field and steady-state ocean circulation explorer (GOCE) data, which are affected by errors in the attitude indicator, using IGGT with the least squares method avoids the problem of inaccurate rotation matrices. The IGGT approach as studied in this paper is a quadratic function of the gravity field model's spherical harmonic coefficients. The linearized observation equations for the least squares method are obtained using a Taylor expansion, and the weighting equation is derived using the law of error propagation. We also investigate the linearization errors using existing gravity field models and find that this error can be ignored, since the a priori model EIGEN-5C is sufficiently accurate. One problem when using this approach is that it needs all six independent gravitational gradients (GGs), but the components V_{xy} and V_{yz} of GOCE are worse due to the non-sensitive axes of the GOCE gradiometer. Therefore, we replace both inaccurate gravitational gradient components with synthetic GGs derived from the a priori gravity field model EIGEN-5C. Another problem is that the GOCE GGs are measured in a band-limited manner. Therefore, a forward and backward finite impulse response band-pass filter is applied to the data, which also eliminates filter-caused phase changes. The spherical cap regularization approach (SCRA) and the Kaula rule are then applied to solve the polar gap problem caused by GOCE's inclination of 96.7°. With the techniques described above, a degree/order 240 gravity field model called IGGT_R1 is computed. Since the synthetic components of V_{xy} and V_{yz} are not band-pass filtered, the signals outside the measurement bandwidth are replaced by those of the a priori model EIGEN-5C. Therefore, this model is in practice a combined gravity field model containing GOCE GG signals and long-wavelength signals from the a priori model EIGEN-5C. Finally, IGGT_R1's accuracy is evaluated by comparison with other gravity field models in terms of difference degree amplitudes, the geostrophic velocity in the Agulhas current area, and gravity anomaly differences, as well as by comparison with GNSS/leveling data.
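    The frame independence the IGGT approach relies on is the standard fact that the principal invariants of a symmetric tensor are unchanged by rotation of the reference frame. A numerical sketch (the tensor entries and rotation angle are illustrative numbers, not GOCE data):

```python
# Sketch: the three principal invariants of a 3x3 symmetric tensor
# (trace, sum of principal 2x2 minors, determinant) are rotation-invariant.
def invariants(t):
    i1 = t[0][0] + t[1][1] + t[2][2]
    i2 = (t[0][0] * t[1][1] - t[0][1] * t[1][0]
        + t[0][0] * t[2][2] - t[0][2] * t[2][0]
        + t[1][1] * t[2][2] - t[1][2] * t[2][1])
    i3 = (t[0][0] * (t[1][1] * t[2][2] - t[1][2] * t[2][1])
        - t[0][1] * (t[1][0] * t[2][2] - t[1][2] * t[2][0])
        + t[0][2] * (t[1][0] * t[2][1] - t[1][1] * t[2][0]))
    return i1, i2, i3

def rotate(t, r):
    """Return R T R^T, i.e. the tensor expressed in a rotated frame."""
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3))
                 for j in range(3)] for i in range(3)]
    rt = [[r[j][i] for j in range(3)] for i in range(3)]
    return matmul(matmul(r, t), rt)
```

    Because the invariants survive any rotation, observation equations built from them never need the attitude rotation matrices whose errors affect the traditional approach.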

  1. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: A general strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven

    2015-02-15

    Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. 
These training data and the remaining fifteen testing data sets were separately employed to test the effectiveness of the proposed contouring error detection strategy. Results: An evaluation tool was implemented to illustrate how the proposed strategy automatically detects the radiation therapy contouring errors for a given patient and provides 3D graphical visualization of error detection results as well. The contouring error detection results were achieved with an average sensitivity of 0.954/0.906 and an average specificity of 0.901/0.909 on the centroid/volume related contouring errors of all the tested samples. As for the detection results on structural shape related contouring errors, an average sensitivity of 0.816 and an average specificity of 0.94 on all the tested samples were obtained. The promising results indicated the feasibility of the proposed strategy for the detection of contouring errors with low false detection rate. Conclusions: The proposed strategy can reliably identify contouring errors based upon inter- and intrastructural constraints derived from clinically approved contours. It holds great potential for improving the radiation therapy workflow. ROC and box plot analyses allow for analytically tuning of the system parameters to satisfy clinical requirements. Future work will focus on the improvement of strategy reliability by utilizing more training sets and additional geometric attribute constraints.
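    The sensitivity/specificity figures reported above reduce to simple confusion-matrix bookkeeping: compare flagged contours against ground-truth error labels. A minimal sketch with invented label vectors (the function name and data are assumptions, not the authors' evaluation code):

```python
# Sketch: sensitivity and specificity from truth vs. flagged labels (1/0).
def sens_spec(truth, flagged):
    tp = sum(1 for t, f in zip(truth, flagged) if t and f)
    tn = sum(1 for t, f in zip(truth, flagged) if not t and not f)
    fn = sum(1 for t, f in zip(truth, flagged) if t and not f)
    fp = sum(1 for t, f in zip(truth, flagged) if not t and f)
    return tp / (tp + fn), tn / (tn + fp)
```

    Sweeping a detection threshold and recomputing this pair at each setting yields the ROC curve the authors use to tune the model parameters against clinical requirements.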

  2. Comparison of direct and heterodyne detection optical intersatellite communication links

    NASA Technical Reports Server (NTRS)

    Chen, C. C.; Gardner, C. S.

    1987-01-01

    The performance of direct and heterodyne detection optical intersatellite communication links is evaluated and compared. It is shown that the performance of optical links is very sensitive to the pointing and tracking errors at the transmitter and receiver. In the presence of random pointing and tracking errors, optimal antenna gains exist that will minimize the required transmitter power. In addition to limiting the antenna gains, random pointing and tracking errors also impose a power penalty in the link budget. This power penalty is between 1.6 and 3 dB for a direct detection QPPM link, and 3 to 5 dB for a heterodyne QFSK system. For the heterodyne systems, the carrier phase noise presents another major factor of performance degradation that must be considered. In contrast, the loss due to synchronization error is small. The link budgets for direct and heterodyne detection systems are evaluated. It is shown that, for systems with large pointing and tracking errors, the link budget is dominated by the spatial tracking error, and the direct detection system shows superior performance because it is less sensitive to the spatial tracking error. On the other hand, for systems with small pointing and tracking jitters, the antenna gains are in general limited by the launch cost, and suboptimal antenna gains are often used in practice. In that case, the heterodyne system has a slightly higher power margin because of its higher receiver sensitivity.

  3. TH-B-BRC-00: How to Identify and Resolve Potential Clinical Errors Before They Impact Patients Treatment: Lessons Learned

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2016-06-15

    Radiation treatment consists of a chain of events influenced by the quality of machine operation, beam data commissioning, machine calibration, patient specific data, simulation, treatment planning, imaging and treatment delivery. There is always a chance that the clinical medical physicist may make, or fail to detect, an error in one of these events that may impact the patient’s treatment. In the clinical scenario, errors may be systematic and, without peer review, may have a low detectability because they are not part of routine QA procedures. During treatment, there might be errors on the machine that need attention. External reviews of some of the treatment delivery components by independent reviewers, like IROC, can detect errors, but may not be timely. The goal of this session is to help junior clinical physicists identify potential errors, as well as the quality assurance approach of performing a root cause analysis to find and eliminate an error and to continually monitor for errors. A compilation of potential errors will be presented, with examples of the thought process required to spot the error and determine the root cause. Examples may include unusual machine operation, erratic electrometer readings, consistently lower electron output, variation in photon output, body parts inadvertently left in the beam, an unusual treatment plan, poor normalization, hot spots, etc. Awareness of the possibility and detection of error in any link of the treatment process chain will help improve the safe and accurate delivery of radiation to patients. Four experts will discuss how to identify errors in four areas of clinical treatment. D. Followill, NIH grant CA 180803.

  4. TH-B-BRC-01: How to Identify and Resolve Potential Clinical Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, I.

    2016-06-15

    Radiation treatment consists of a chain of events influenced by the quality of machine operation, beam data commissioning, machine calibration, patient specific data, simulation, treatment planning, imaging and treatment delivery. There is always a chance that the clinical medical physicist may make, or fail to detect, an error in one of these events that may impact the patient’s treatment. In the clinical scenario, errors may be systematic and, without peer review, may have a low detectability because they are not part of routine QA procedures. During treatment, there might be errors on the machine that need attention. External reviews of some of the treatment delivery components by independent reviewers, like IROC, can detect errors, but may not be timely. The goal of this session is to help junior clinical physicists identify potential errors, as well as the quality assurance approach of performing a root cause analysis to find and eliminate an error and to continually monitor for errors. A compilation of potential errors will be presented, with examples of the thought process required to spot the error and determine the root cause. Examples may include unusual machine operation, erratic electrometer readings, consistently lower electron output, variation in photon output, body parts inadvertently left in the beam, an unusual treatment plan, poor normalization, hot spots, etc. Awareness of the possibility and detection of error in any link of the treatment process chain will help improve the safe and accurate delivery of radiation to patients. Four experts will discuss how to identify errors in four areas of clinical treatment. D. Followill, NIH grant CA 180803.

  5. Syndromic surveillance for health information system failures: a feasibility study

    PubMed Central

    Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico

    2013-01-01

    Objective To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. Methods A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. Results In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65–0.85. Conclusions Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures. PMID:23184193
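    The statistical-process-control idea above (model an index, alarm on unexpected behavior) can be sketched with a simple Shewhart-style chart: learn a baseline mean and standard deviation for an index such as daily record counts, then flag observations outside mean ± 3 sigma. The baseline window, data, and 3-sigma choice below are illustrative assumptions, not the paper's time-series models:

```python
import math

# Sketch: Shewhart control limits for one surveillance index.
def control_limits(baseline, k=3.0):
    n = len(baseline)
    mean = sum(baseline) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in baseline) / (n - 1))
    return mean - k * sd, mean + k * sd

def alarms(baseline, stream, k=3.0):
    """Indices of new observations falling outside the control limits."""
    lo, hi = control_limits(baseline, k)
    return [i for i, x in enumerate(stream) if x < lo or x > hi]
```

    A sudden drop in total records created (data loss at the record level) would show up as a low-side excursion; the paper's sensitivity figures reflect how large the induced error rate must be before such an excursion clears the limits.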

  6. Psychometric properties of a short form of the Center for Epidemiologic Studies Depression (CES-D-10) scale for screening depressive symptoms in healthy community dwelling older adults.

    PubMed

    Mohebbi, Mohammadreza; Nguyen, Van; McNeil, John J; Woods, Robyn L; Nelson, Mark R; Shah, Raj C; Storey, Elsdon; Murray, Anne M; Reid, Christopher M; Kirpach, Brenda; Wolfe, Rory; Lockery, Jessica E; Berk, Michael

    The 10-item Center for the Epidemiological Studies of Depression Short Form (CES-D-10) is a widely used self-report measure of depression symptomatology. The aim of this study is to investigate the psychometric properties of the CES-D-10 in healthy community dwelling older adults. The sample consists of 19,114 community-based individuals residing in Australia and the United States who participated in the ASPREE trial baseline assessment. All individuals were free of any major illness at the time. We evaluated construct validity by performing confirmatory factor analysis, examined measurement invariance across country and gender followed by evaluating item discrimination bias in age, gender, race, ethnicity and education level, and assessing internal consistency. High item-total correlations and Cronbach's alpha indicated high internal consistency. The factor analyses suggested a unidimensional factor structure. Construct validity was supported in the overall sample, and by country and gender sub-groups. The CES-D-10 was invariant across countries, and although evidence of marginal gender non-invariance was observed there was no evidence of notable gender specific item discrimination bias. No notable differences in discrimination parameters or group membership measurement non-invariance were detected by gender, age, race, ethnicity, and education level. These findings suggest the CES-D-10 is a reliable and valid measure of depression in a volunteer sample. No noteworthy evidence of non-invariance and/or item discrimination bias is observed across gender, age, race, language and ethnic groups.
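    The internal-consistency statistic reported above, Cronbach's alpha, is alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)) over k items. A minimal sketch on a fabricated response matrix (toy data, not ASPREE responses):

```python
# Sketch: Cronbach's alpha from per-respondent item scores.
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    """rows: one list of k item scores per respondent."""
    k = len(rows[0])
    items = [[r[i] for r in rows] for i in range(k)]
    totals = [sum(r) for r in rows]
    return k / (k - 1) * (1 - sum(variance(i) for i in items)
                          / variance(totals))
```

    When the items move together (high inter-item covariance), the total-score variance dwarfs the summed item variances and alpha approaches 1, which is the pattern the high item-total correlations above imply.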

  7. A limit on the variation of the speed of light arising from quantum gravity effects.

    PubMed

    Abdo, A A; Ackermann, M; Ajello, M; Asano, K; Atwood, W B; Axelsson, M; Baldini, L; Ballet, J; Barbiellini, G; Baring, M G; Bastieri, D; Bechtol, K; Bellazzini, R; Berenji, B; Bhat, P N; Bissaldi, E; Bloom, E D; Bonamente, E; Bonnell, J; Borgland, A W; Bouvier, A; Bregeon, J; Brez, A; Briggs, M S; Brigida, M; Bruel, P; Burgess, J M; Burnett, T H; Caliandro, G A; Cameron, R A; Caraveo, P A; Casandjian, J M; Cecchi, C; Celik, O; Chaplin, V; Charles, E; Cheung, C C; Chiang, J; Ciprini, S; Claus, R; Cohen-Tanugi, J; Cominsky, L R; Connaughton, V; Conrad, J; Cutini, S; Dermer, C D; de Angelis, A; de Palma, F; Digel, S W; Dingus, B L; do Couto E Silva, E; Drell, P S; Dubois, R; Dumora, D; Farnier, C; Favuzzi, C; Fegan, S J; Finke, J; Fishman, G; Focke, W B; Foschini, L; Fukazawa, Y; Funk, S; Fusco, P; Gargano, F; Gasparrini, D; Gehrels, N; Germani, S; Gibby, L; Giebels, B; Giglietto, N; Giordano, F; Glanzman, T; Godfrey, G; Granot, J; Greiner, J; Grenier, I A; Grondin, M-H; Grove, J E; Grupe, D; Guillemot, L; Guiriec, S; Hanabata, Y; Harding, A K; Hayashida, M; Hays, E; Hoversten, E A; Hughes, R E; Jóhannesson, G; Johnson, A S; Johnson, R P; Johnson, W N; Kamae, T; Katagiri, H; Kataoka, J; Kawai, N; Kerr, M; Kippen, R M; Knödlseder, J; Kocevski, D; Kouveliotou, C; Kuehn, F; Kuss, M; Lande, J; Latronico, L; Lemoine-Goumard, M; Longo, F; Loparco, F; Lott, B; Lovellette, M N; Lubrano, P; Madejski, G M; Makeev, A; Mazziotta, M N; McBreen, S; McEnery, J E; McGlynn, S; Mészáros, P; Meurer, C; Michelson, P F; Mitthumsiri, W; Mizuno, T; Moiseev, A A; Monte, C; Monzani, M E; Moretti, E; Morselli, A; Moskalenko, I V; Murgia, S; Nakamori, T; Nolan, P L; Norris, J P; Nuss, E; Ohno, M; Ohsugi, T; Omodei, N; Orlando, E; Ormes, J F; Ozaki, M; Paciesas, W S; Paneque, D; Panetta, J H; Parent, D; Pelassa, V; Pepe, M; Pesce-Rollins, M; Petrosian, V; Piron, F; Porter, T A; Preece, R; Rainò, S; Ramirez-Ruiz, E; Rando, R; Razzano, M; Razzaque, S; Reimer, A; Reimer, O; Reposeur, T; Ritz, 
S; Rochester, L S; Rodriguez, A Y; Roth, M; Ryde, F; Sadrozinski, H F-W; Sanchez, D; Sander, A; Saz Parkinson, P M; Scargle, J D; Schalk, T L; Sgrò, C; Siskind, E J; Smith, D A; Smith, P D; Spandre, G; Spinelli, P; Stamatikos, M; Stecker, F W; Strickman, M S; Suson, D J; Tajima, H; Takahashi, H; Takahashi, T; Tanaka, T; Thayer, J B; Thayer, J G; Thompson, D J; Tibaldo, L; Toma, K; Torres, D F; Tosti, G; Troja, E; Uchiyama, Y; Uehara, T; Usher, T L; van der Horst, A J; Vasileiou, V; Vilchez, N; Vitale, V; von Kienlin, A; Waite, A P; Wang, P; Wilson-Hodge, C; Winer, B L; Wood, K S; Wu, X F; Yamazaki, R; Ylinen, T; Ziegler, M

    2009-11-19

    A cornerstone of Einstein's special relativity is Lorentz invariance: the postulate that all observers measure exactly the same speed of light in vacuum, independent of photon energy. While special relativity assumes that there is no fundamental length scale associated with such invariance, there is a fundamental scale (the Planck scale, l_Planck approximately 1.62 x 10^-33 cm, or E_Planck = M_Planck c^2 approximately 1.22 x 10^19 GeV) at which quantum effects are expected to strongly affect the nature of space-time. There is great interest in the (not yet validated) idea that Lorentz invariance might break near the Planck scale. A key test of such violation of Lorentz invariance is a possible variation of photon speed with energy. Even a tiny variation in photon speed, when accumulated over cosmological light-travel times, may be revealed by observing sharp features in gamma-ray burst (GRB) light curves. Here we report the detection of emission up to approximately 31 GeV from the distant and short GRB 090510. We find no evidence for the violation of Lorentz invariance, and place a lower limit of 1.2 E_Planck on the scale of a linear energy dependence (or an inverse wavelength dependence), subject to reasonable assumptions about the emission (equivalently we have an upper limit of l_Planck/1.2 on the length scale of the effect). Our results disfavour quantum-gravity theories in which the quantum nature of space-time on a very small scale linearly alters the speed of light.
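    The arithmetic behind such a limit is simple: for a linear energy dependence, the extra light-travel delay of a photon of energy E over a propagation distance D is roughly dt ≈ (E / E_QG) * (D / c), where E_QG is the quantum-gravity energy scale. A sketch with a round illustrative distance (not the cosmology-weighted distance to GRB 090510 used in the paper):

```python
# Sketch: linear Lorentz-invariance-violation delay estimate.
# E_QG is the assumed quantum-gravity scale; larger E_QG means less delay.
E_PLANCK_GEV = 1.22e19   # Planck energy in GeV
C = 2.998e8              # speed of light, m/s

def linear_liv_delay(e_gev, e_qg_gev, distance_m):
    """Approximate arrival delay in seconds for a photon of energy e_gev."""
    return (e_gev / e_qg_gev) * (distance_m / C)
```

    For a 31 GeV photon over an assumed ~1e26 m, an E_QG of 1.2 E_Planck gives a sub-second delay, which is why millisecond-scale sharp features in the GRB light curve can constrain the scale so tightly.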

  8. Multi-class geospatial object detection based on a position-sensitive balancing framework for high spatial resolution remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhong, Yanfei; Han, Xiaobing; Zhang, Liangpei

    2018-04-01

    Multi-class geospatial object detection from high spatial resolution (HSR) remote sensing imagery is attracting increasing attention in a wide range of object-related civil and engineering applications. However, the distribution of objects in HSR remote sensing imagery is location-variable and complicated, and accurately detecting the objects in HSR remote sensing imagery is a critical problem. Due to the powerful feature extraction and representation capability of deep learning, the deep learning based region proposal generation and object detection integrated framework has greatly promoted the performance of multi-class geospatial object detection for HSR remote sensing imagery. However, due to the translation invariance introduced by the convolution operations in the convolutional neural network (CNN), the classification stage is seldom affected, but the localization accuracy of the predicted bounding boxes in the detection stage is easily degraded. The dilemma between translation-invariance in the classification stage and translation-variance in the object detection stage has not been addressed for HSR remote sensing imagery, and causes position accuracy problems for multi-class geospatial object detection with region proposal generation and object detection. In order to further improve the performance of the region proposal generation and object detection integrated framework for HSR remote sensing imagery object detection, a position-sensitive balancing (PSB) framework is proposed in this paper for multi-class geospatial object detection from HSR remote sensing imagery. The proposed PSB framework takes full advantage of the fully convolutional network (FCN), on the basis of a residual network, and adopts the PSB framework to solve the dilemma between translation-invariance in the classification stage and translation-variance in the object detection stage. 
In addition, a pre-training mechanism is utilized to accelerate the training procedure and increase the robustness of the proposed algorithm. The proposed algorithm is validated with a publicly available 10-class object detection dataset.

  9. Rate constant for the reaction of atomic chlorine with formaldehyde from 200 to 500 K

    NASA Technical Reports Server (NTRS)

    Michael, J. V.; Nava, D. F.; Payne, W. A.; Stief, L. J.

    1978-01-01

    A flash photolysis–resonance fluorescence technique was used to measure the rate constant. The results were independent of substantial variations in H2CO, total pressure (Ar), and flash intensity (i.e., initial Cl). The rate constant was shown to be invariant with temperature, the best representation for this temperature range being k = (7.48 ± 0.50) × 10⁻¹¹ cm³ molecule⁻¹ s⁻¹, where the error is one standard deviation. The rate constant is discussed theoretically, and the potential importance of the reaction in stratospheric chemistry is considered.

  10. The bee's map of the e-vector pattern in the sky.

    PubMed

    Rossel, S; Wehner, R

    1982-07-01

    It has long been known that bees can use the pattern of polarized light in the sky as a compass cue even if they can see only a small part of the whole pattern. How they solve this problem has remained enigmatic. Here we show that the bees rely on a generalized celestial map that is used invariably throughout the day. We reconstruct this map by analyzing the navigation errors made by bees to which single e-vectors are displayed. In addition, we demonstrate how the bee's celestial map can be derived from the e-vector patterns in the sky.

  11. Pubface: Celebrity face identification based on deep learning

    NASA Astrophysics Data System (ADS)

    Ouanan, H.; Ouanan, M.; Aksasse, B.

    2018-05-01

    In this paper, we describe a new real-time application called PubFace, which recognizes celebrities in public spaces by employing a new pose-invariant face recognition deep neural network algorithm with an extremely low error rate. To build this application, we make the following contributions: first, we build a novel dataset with over five million labelled faces. Second, we fine-tune the deep convolutional neural network (CNN) VGG-16 architecture on the new dataset that we have built. Finally, we deploy this model on the Raspberry Pi 3 Model B using the OpenCV dnn module (OpenCV 3.3).

  12. Quantum Computing Architectural Design

    NASA Astrophysics Data System (ADS)

    West, Jacob; Simms, Geoffrey; Gyure, Mark

    2006-03-01

    Large scale quantum computers will invariably require scalable architectures in addition to high fidelity gate operations. Quantum computing architectural design (QCAD) addresses the problems of actually implementing fault-tolerant algorithms given physical and architectural constraints beyond those of basic gate-level fidelity. Here we introduce a unified framework for QCAD that enables the scientist to study the impact of varying error correction schemes, architectural parameters including layout and scheduling, and physical operations native to a given architecture. Our software package, aptly named QCAD, provides compilation, manipulation/transformation, multi-paradigm simulation, and visualization tools. We demonstrate various features of the QCAD software package through several examples.

  13. Detecting and correcting hard errors in a memory array

    DOEpatents

    Kalamatianos, John; John, Johnsy Kanjirapallil; Gelinas, Robert; Sridharan, Vilas K.; Nevius, Phillip E.

    2015-11-19

    Hard errors in a memory array can be detected and corrected in real time using reusable entries in an error status buffer. Data may be rewritten to a portion of a memory array and to a register in response to a first error in data read from that portion of the memory array. The rewritten data may then be written from the register to an entry of an error status buffer in response to the rewritten data read from the register differing from the rewritten data read from the portion of the memory array.
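
    The patented flow can be paraphrased as: rewrite corrected data to both the array location and a side register, re-read both, and treat a persisting mismatch as a hard error to be logged in a reusable error-status-buffer entry. The Python sketch below simulates that logic under stated assumptions: the class and function names are invented, and a stuck-at-0 bit stands in for the physical hard fault.

```python
class ErrorStatusBuffer:
    """Small fixed-size buffer whose entries are reused for hard-error bookkeeping."""
    def __init__(self, size=4):
        self.entries = [None] * size
        self.next = 0

    def record(self, addr, data):
        self.entries[self.next] = (addr, data)      # entries are reused round-robin
        self.next = (self.next + 1) % len(self.entries)


class FaultyArray:
    """Memory array with an optional stuck-at-0 bit to model a hard error."""
    def __init__(self, size, stuck_addr=None, stuck_mask=0):
        self.cells = [0] * size
        self.stuck_addr, self.stuck_mask = stuck_addr, stuck_mask

    def write(self, addr, value):
        if addr == self.stuck_addr:
            value &= ~self.stuck_mask               # the stuck bit cannot hold a 1
        self.cells[addr] = value

    def read(self, addr):
        return self.cells[addr]


def handle_error(array, addr, corrected, esb):
    """Rewrite corrected data, then re-read to tell hard errors from soft ones."""
    register = corrected                            # keep a copy in a side register
    array.write(addr, corrected)                    # rewrite the faulty location
    if array.read(addr) != register:                # re-read still differs: hard error
        esb.record(addr, register)                  # log it in a reusable ESB entry
        return "hard"
    return "soft"                                   # the rewrite healed a transient error
```

    A location with a stuck bit is classified as a hard error and logged, while the same rewrite at a healthy location is reported as a soft (transient) error.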

  14. The use of self checks and voting in software error detection - An empirical study

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy G.; Cha, Stephen S.; Knight, John C.; Shimeall, Timothy J.

    1990-01-01

    The results of an empirical study of software error detection using self checks and N-version voting are presented. Working independently, each of 24 programmers first prepared a set of self checks using just the requirements specification of an aerospace application, and then each added self checks to an existing implementation of that specification. The modified programs were executed to measure the error-detection performance of the checks and to compare this with error detection using simple voting among multiple versions. The analysis of the checks revealed that there are great differences in the ability of individual programmers to design effective checks. It was found that some checks that might have been effective failed to detect an error because they were badly placed, and there were numerous instances of checks signaling nonexistent errors. In general, specification-based checks alone were not as effective as specification-based checks combined with code-based checks. Self checks made it possible to identify faults that had not been detected previously by voting 28 versions of the program over a million randomly generated inputs. This appeared to result from the fact that the self checks could examine the internal state of the executing program, whereas voting examines only final results of computations. If internal states had to be identical in N-version voting systems, then there would be no reason to write multiple versions.
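
    As an illustration of the two techniques the study compares, the hypothetical Python sketch below contrasts N-version voting on final outputs with a self check that asserts an invariant on an internal result; the toy integer-square-root task and all names are invented for the example, not drawn from the study's programs.

```python
import math
from collections import Counter

# Three independently written "versions" of the same specification.
def v1(n):
    return math.isqrt(n)

def v2(n):
    return int(n ** 0.5)          # floating-point version; subtly wrong for some large n

def v3(n):
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

def vote(n):
    """N-version voting: accept the majority answer, flag disagreement."""
    results = [v(n) for v in (v1, v2, v3)]
    value, count = Counter(results).most_common(1)[0]
    return value if count >= 2 else None            # no majority -> signal an error

def v3_with_self_check(n):
    """Self check: validate an invariant of the result, not just the final value."""
    r = v3(n)
    assert r * r <= n < (r + 1) * (r + 1), "self-check failed: invariant violated"
    return r
```

    A self check of this kind can catch a fault even when every version agrees on the same wrong answer, which is the advantage over voting that the study attributes to access to internal state.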

  15. Error detection and response adjustment in youth with mild spastic cerebral palsy: an event-related brain potential study.

    PubMed

    Hakkarainen, Elina; Pirilä, Silja; Kaartinen, Jukka; van der Meere, Jaap J

    2013-06-01

    This study evaluated the brain activation state during error making in youth with mild spastic cerebral palsy and a peer control group while carrying out a stimulus recognition task. The key question was whether patients were detecting their own errors and subsequently improving their performance in a future trial. Findings indicated that error responses of the group with cerebral palsy were associated with weak motor preparation, as indexed by the amplitude of the late contingent negative variation. However, patients were detecting their errors as indexed by the amplitude of the response-locked negativity and thus improved their performance in a future trial. Findings suggest that the consequence of error making on future performance is intact in a sample of youth with mild spastic cerebral palsy. Because the study group is small, the present findings need replication using a larger sample.

  16. MRI-guided prostate focal laser ablation therapy using a mechatronic needle guidance system

    NASA Astrophysics Data System (ADS)

    Cepek, Jeremy; Lindner, Uri; Ghai, Sangeet; Davidson, Sean R. H.; Trachtenberg, John; Fenster, Aaron

    2014-03-01

    Focal therapy of localized prostate cancer is receiving increased attention due to its potential for providing effective cancer control in select patients with minimal treatment-related side effects. Magnetic resonance imaging (MRI)-guided focal laser ablation (FLA) therapy is an attractive modality for such an approach. In FLA therapy, accurate placement of laser fibers is critical to ensuring that the full target volume is ablated. In practice, error in needle placement is invariably present due to pre- to intra-procedure image registration error, needle deflection, prostate motion, and variability in interventionalist skill. In addition, some of these sources of error are difficult to control, since the available workspace and patient positions are restricted within a clinical MRI bore. In an attempt to take full advantage of the utility of intraprocedure MRI, while minimizing error in needle placement, we developed an MRI-compatible mechatronic system for guiding needles to the prostate for FLA therapy. The system has been used to place interstitial catheters for MRI-guided FLA therapy in eight subjects in an ongoing Phase I/II clinical trial. Data from these cases has provided quantification of the level of uncertainty in needle placement error. To relate needle placement error to clinical outcome, we developed a model for predicting the probability of achieving complete focal target ablation for a family of parameterized treatment plans. Results from this work have enabled the specification of evidence-based selection criteria for the maximum target size that can be confidently ablated using this technique, and quantify the benefit that may be gained with improvements in needle placement accuracy.

  17. Clinical implementation and error sensitivity of a 3D quality assurance protocol for prostate and thoracic IMRT

    PubMed Central

    Cotter, Christopher; Turcotte, Julie Catherine; Crawford, Bruce; Sharp, Gregory; Mah'D, Mufeed

    2015-01-01

    This work aims at three goals: first, to define a set of statistical parameters and plan structures for a 3D pretreatment thoracic and prostate intensity‐modulated radiation therapy (IMRT) quality assurance (QA) protocol; second, to test if the 3D QA protocol is able to detect certain clinical errors; and third, to compare the 3D QA method with QA performed with a single ion chamber and a 2D gamma test in detecting those errors. The 3D QA protocol measurements were performed on 13 prostate and 25 thoracic IMRT patients using IBA's COMPASS system. For each treatment planning structure included in the protocol, the following statistical parameters were evaluated: average absolute dose difference (AADD), percent structure volume with absolute dose difference greater than 6% (ADD6), and a 3D gamma test. To test the 3D QA protocol error sensitivity, two prostate and two thoracic step‐and‐shoot IMRT patients were investigated. Errors introduced to each of the treatment plans included energy switched from 6 MV to 10 MV, multileaf collimator (MLC) leaf errors, linac jaw errors, monitor unit (MU) errors, MLC and gantry angle errors, and detector shift errors. QA was performed on each plan using a single ion chamber and a 2D array of ion chambers for 2D and 3D QA. Based on the measurements performed, we established a uniform set of tolerance levels to determine if QA passes for each IMRT treatment plan structure: the maximum allowed AADD is 6%; ADD6 may not exceed 4% for any structure; and at most 4% of any structure volume may fail the 3D gamma test with test parameters 3%/3 mm DTA. Of the three QA methods tested, the single ion chamber performed the worst, detecting 4 of the 18 introduced errors; 2D QA detected 11 of 18 errors, and 3D QA detected 14 of 18 errors. PACS number: 87.56.Fc PMID:26699299
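
    The two dose-difference statistics named in the protocol are simple enough to sketch. Below is a minimal, hypothetical Python illustration of AADD and ADD6 over paired planned/measured dose samples for one structure, with the 6% and 4% tolerances from the text; the function names and flat sample lists are assumptions, since the clinical system operates on full 3D dose grids.

```python
def aadd(planned, measured):
    """Average absolute dose difference, in percent of the planned dose."""
    diffs = [abs(m - p) / p * 100.0 for p, m in zip(planned, measured)]
    return sum(diffs) / len(diffs)

def add6(planned, measured, threshold=6.0):
    """Percent of structure volume (voxels) with |dose difference| > threshold %."""
    over = sum(1 for p, m in zip(planned, measured)
               if abs(m - p) / p * 100.0 > threshold)
    return over / len(planned) * 100.0

def qa_passes(planned, measured):
    # Tolerances from the protocol: AADD <= 6%, ADD6 <= 4% of the structure volume.
    return aadd(planned, measured) <= 6.0 and add6(planned, measured) <= 4.0
```

    The third protocol metric, the 3D gamma test, additionally folds in a distance-to-agreement criterion and is omitted here for brevity.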

  18. Photogrammetric accuracy measurements of head holder systems used for fractionated radiotherapy.

    PubMed

    Menke, M; Hirschfeld, F; Mack, T; Pastyr, O; Sturm, V; Schlegel, W

    1994-07-30

    We describe how stereo photogrammetry can be used to determine immobilization and repositioning accuracies of head holder systems used for fractionated radiotherapy of intracranial lesions. The apparatus consists of two video cameras controlled by a personal computer and a bite block based landmark system. Position and spatial orientation of the landmarks are monitored by the cameras and processed for the real-time calculation of a target point's actual position relative to its initializing position. The target's position is assumed to be invariant with respect to the landmark system. We performed two series of 30 correlated head motion measurements on two test persons. One of the series was done with a thermoplastic device, the other one with a cast device developed for stereotactic treatment at the German Cancer Research Center. Immobilization and repositioning accuracies were determined with respect to a target point situated near the base of the skull. The repositioning accuracies were described in terms of the distributions of the mean displacements of the single motion measurements. Movements of the target in the order of 0.05 mm caused by breathing could be detected with a maximum resolution in time of 12 ms. The data derived from the investigation of the two test persons indicated similar immobilization accuracies for the two devices, but the repositioning errors were larger for the thermoplastic device than for the cast device. Apart from this, we found that for the thermoplastic mask the lateral repositioning error depended on the order in which the mask was closed. The photogrammetric apparatus is a versatile tool for accuracy measurements of head holder devices used for fractionated radiotherapy.

  19. Automatic Coregistration Algorithm to Remove Canopy Shaded Pixels in UAV-Borne Thermal Images to Improve the Estimation of Crop Water Stress Index of a Drip-Irrigated Cabernet Sauvignon Vineyard.

    PubMed

    Poblete, Tomas; Ortega-Farías, Samuel; Ryu, Dongryeol

    2018-01-30

    Water stress caused by water scarcity has a negative impact on the wine industry. Several strategies have been implemented for optimizing water application in vineyards. In this regard, midday stem water potential (SWP) and thermal infrared (TIR) imaging for crop water stress index (CWSI) have been used to assess plant water stress on a vine-by-vine basis without considering the spatial variability. Unmanned Aerial Vehicle (UAV)-borne TIR images are used to assess the canopy temperature variability within vineyards that can be related to the vine water status. Nevertheless, when aerial TIR images are captured over canopy, internal shadow canopy pixels cannot be detected, leading to mixed information that negatively impacts the relationship between CWSI and SWP. This study proposes a methodology for automatic coregistration of thermal and multispectral images (ranging between 490 and 900 nm) obtained from a UAV to remove shadow canopy pixels using a modified scale invariant feature transformation (SIFT) computer vision algorithm and Kmeans++ clustering. Our results indicate that our proposed methodology improves the relationship between CWSI and SWP when shadow canopy pixels are removed from a drip-irrigated Cabernet Sauvignon vineyard. In particular, the coefficient of determination (R²) increased from 0.64 to 0.77. In addition, values of the root mean square error (RMSE) and standard error (SE) decreased from 0.2 to 0.1 MPa and 0.24 to 0.16 MPa, respectively. Finally, this study shows that the negative effect of shadow canopy pixels was higher in those vines with water stress compared with well-watered vines.
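
    The clustering step can be illustrated in one dimension: k-means++ seeding followed by Lloyd iterations splits canopy pixel temperatures into a cooler (shaded) and a warmer (sunlit) cluster, and the cooler pixels are discarded. This is a simplified sketch of the idea under assumed names, not the authors' implementation, which clusters image data after the SIFT-based coregistration.

```python
import random

def kmeans_pp_1d(values, k=2, iters=20, seed=0):
    """K-means on scalars with k-means++ seeding; returns sorted cluster centers."""
    rng = random.Random(seed)
    centers = [rng.choice(values)]
    while len(centers) < k:                       # k-means++ seeding: sample the next
        d2 = [min((v - c) ** 2 for c in centers)  # center proportionally to squared
              for v in values]                    # distance from existing centers
        total = sum(d2)
        r, acc = rng.random() * total, 0.0
        for v, w in zip(values, d2):
            acc += w
            if acc >= r:
                centers.append(v)
                break
    for _ in range(iters):                        # Lloyd iterations
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda i: (v - centers[i]) ** 2)].append(v)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sorted(centers)

def drop_shaded(temps):
    """Discard the cooler (shaded-canopy) cluster of pixel temperatures."""
    low, high = kmeans_pp_1d(temps)
    cut = (low + high) / 2.0
    return [t for t in temps if t >= cut]         # keep sunlit-canopy pixels
```

    Removing the cooler cluster before computing canopy temperature statistics is what tightens the CWSI-SWP relationship reported above.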

  20. Coherent detection of position errors in inter-satellite laser communications

    NASA Astrophysics Data System (ADS)

    Xu, Nan; Liu, Liren; Liu, De'an; Sun, Jianfeng; Luan, Zhu

    2007-09-01

    Due to its improved receiver sensitivity and wavelength selectivity, coherent detection has become an attractive alternative to direct detection in inter-satellite laser communications. A novel method for coherent detection of position-error information is proposed. A coherent communication system generally consists of a receive telescope, local oscillator, optical hybrid, photoelectric detector, and optical phase lock loop (OPLL). Building on this system composition, the method adds a CCD and computer as a position-error detector. The CCD captures the interference pattern while the transmission data from the transmitter laser are detected. After processing and analysis by the computer, target position information is obtained from characteristic parameters of the interference pattern. The position errors, serving as the control signal of the PAT subsystem, drive the receiver telescope to keep tracking the target. A theoretical derivation and analysis are presented. The application extends to a coherent laser range finder, in which object distance and position information can be obtained simultaneously.

  1. Neural evidence for enhanced error detection in major depressive disorder.

    PubMed

    Chiu, Pearl H; Deldin, Patricia J

    2007-04-01

    Anomalies in error processing have been implicated in the etiology and maintenance of major depressive disorder. In particular, depressed individuals exhibit heightened sensitivity to error-related information and negative environmental cues, along with reduced responsivity to positive reinforcers. The authors examined the neural activation associated with error processing in individuals diagnosed with and without major depression and the sensitivity of these processes to modulation by monetary task contingencies. The error-related negativity and error-related positivity components of the event-related potential were used to characterize error monitoring in individuals with major depressive disorder and the degree to which these processes are sensitive to modulation by monetary reinforcement. Nondepressed comparison subjects (N=17) and depressed individuals (N=18) performed a flanker task under two external motivation conditions (i.e., monetary reward for correct responses and monetary loss for incorrect responses) and a nonmonetary condition. After each response, accuracy feedback was provided. The error-related negativity component assessed the degree of anomaly in initial error detection, and the error positivity component indexed recognition of errors. Across all conditions, the depressed participants exhibited greater amplitude of the error-related negativity component, relative to the comparison subjects, and equivalent error positivity amplitude. In addition, the two groups showed differential modulation by task incentives in both components. These data implicate exaggerated early error-detection processes in the etiology and maintenance of major depressive disorder. Such processes may then recruit excessive neural and cognitive resources that manifest as symptoms of depression.

  2. Study on the special vision sensor for detecting position error in robot precise TIG welding of some key part of rocket engine

    NASA Astrophysics Data System (ADS)

    Zhang, Wenzeng; Chen, Nian; Wang, Bin; Cao, Yipeng

    2005-01-01

    The rocket engine is a core component of aerospace transportation and thrust systems, and its research and development is very important in national defense, aviation, and aerospace. A novel vision sensor is developed that can be used for error detection in arc-length control and seam tracking during precise pulsed TIG welding of the extending part of the rocket engine jet tube. The vision sensor has many advantages, such as high-quality imaging, compactness, and multiple functions. The optical, mechanical, and circuit designs of the vision sensor are described in detail. Utilizing the mirror image of the tungsten electrode in the weld pool, a novel method is proposed to detect the arc length and the seam-tracking error of the tungsten electrode relative to the center line of the joint seam from a single weld image. A calculation model for the method is proposed according to the relation among the tungsten electrode, the weld pool, the electrode's mirror image in the weld pool, and the joint seam. New methodologies are given to detect the arc length and the seam-tracking error. Through analysis of the experimental results, a systematic-error correction method based on a linear function is developed to improve the detection precision of the arc length and seam-tracking error. Experimental results show that the final precision of the system reaches 0.1 mm in detecting the arc length and the seam-tracking error of the tungsten electrode relative to the center line of the joint seam.

  3. The Shame and Guilt Scales of the Test of Self-Conscious Affect-Adolescent (TOSCA-A): Psychometric Properties for Responses from Children, and Measurement Invariance Across Children and Adolescents

    PubMed Central

    Watson, Shaun D.; Gomez, Rapson; Gullone, Eleonora

    2016-01-01

    This study examined various psychometric properties of the items comprising the shame and guilt scales of the Test of Self-Conscious Affect-Adolescent (TOSCA-A) in a group of children between 8 and 11 years of age. A total of 699 children (367 females and 332 males) completed these scales, and also measures of depression and empathy. Confirmatory factor analysis (CFA) provided support for an oblique two-factor model, with the originally proposed shame and guilt items comprising shame and guilt factors, respectively. There was good internal consistency reliability for the shame and guilt scales, with omega coefficient values of 0.77 and 0.81 for shame and guilt, respectively. Also, shame correlated with depression symptoms positively (0.34, p < 0.001) and had no relation with empathy (-0.07, ns). Guilt correlated with depression symptoms negatively (-0.28, p < 0.001), and with empathy positively (0.13, p < 0.05). Thus, there was support for the convergent and discriminant validity of the shame and guilt factors. Multiple-group CFA comparing this group of children with a separate group of adolescents (320 females and 242 males), based on the chi-square difference test, supported full metric invariance, the intercept invariance of 17 of the 30 shame and guilt items, and higher latent mean scores among children for both shame and guilt. The non-equivalencies for intercepts and mean scores were of small effect size. Comparisons based on the difference in root mean squared error of approximation values supported full measurement invariance and no group difference for latent mean scores. The findings in the current study support the use of the TOSCA-A in children and the valid comparison of scores between children and adolescents, thereby opening up the possibility of evaluating change in the TOSCA-A shame and guilt factors over these developmental age groups. PMID:27242573

  4. Algorithms for Autonomous Plume Detection on Outer Planet Satellites

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Bunte, M. K.; Saripalli, S.; Greeley, R.

    2011-12-01

    We investigate techniques for automated detection of geophysical events (i.e., volcanic plumes) from spacecraft images. The algorithms presented here have not been previously applied to the detection of transient events on outer planet satellites. We apply the Scale Invariant Feature Transform (SIFT) to raw images of Io and Enceladus from the Voyager, Galileo, Cassini, and New Horizons missions. SIFT produces distinct interest points in every image; feature descriptors are reasonably invariant to changes in illumination, image noise, rotation, scaling, and small changes in viewpoint. We classified these descriptors as plumes using the k-nearest neighbor (KNN) algorithm. In KNN, an object is classified by its similarity to examples in a training set of images based on user-defined thresholds. Using the complete database of Io images and a selection of Enceladus images where 1-3 plumes were manually detected in each image, we successfully detected 74% of plumes in Galileo and New Horizons images, 95% in Voyager images, and 93% in Cassini images. Preliminary tests yielded some false positive detections; further iterations will improve performance. In images where detections fail, plumes are less than 9 pixels in size or are lost in image glare. We compared the appearance of plumes and illuminated mountain slopes to determine the potential for feature classification. We successfully differentiated features. An advantage over other methods is the ability to detect plumes in non-limb views where they appear in the shadowed part of the surface; improvements will enable detection against the illuminated background surface where gradient changes would otherwise preclude detection. This detection method has potential applications to future outer planet missions for sustained plume monitoring campaigns and onboard automated prioritization of all spacecraft data. 
The complementary nature of this method is such that it could be used in conjunction with edge detection algorithms to increase effectiveness. We have demonstrated an ability to detect transient events above the planetary limb and on the surface and to distinguish feature classes in spacecraft images.
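
    The KNN step described above can be sketched as follows. Descriptors here are short tuples rather than 128-dimensional SIFT vectors, and the distance threshold stands in for the user-defined similarity criteria; all names are illustrative, not from the authors' pipeline.

```python
import math
from collections import Counter

def knn_classify(descriptor, training, k=3, max_dist=5.0):
    """training: list of (descriptor, label) pairs. Returns a label or None.

    The k nearest training descriptors vote, but only neighbours within
    max_dist count; with no close-enough neighbour the descriptor is rejected.
    """
    nearest = sorted(
        (math.dist(descriptor, d), label) for d, label in training
    )[:k]
    close = [label for dist, label in nearest if dist <= max_dist]
    if not close:
        return None                      # nothing similar enough in the training set
    return Counter(close).most_common(1)[0][0]
```

    Rejecting descriptors with no sufficiently close neighbour is one way to keep false positives down at the cost of missing small or glare-washed plumes.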

  5. Is there any electrophysiological evidence for subliminal error processing?

    PubMed Central

    Shalgi, Shani; Deouell, Leon Y.

    2013-01-01

    The role of error awareness in executive control and modification of behavior is not fully understood. In line with many recent studies showing that conscious awareness is unnecessary for numerous high-level processes such as strategic adjustments and decision making, it was suggested that error detection can also take place unconsciously. The Error Negativity (Ne) component, long established as a robust error-related component that differentiates between correct responses and errors, was a fine candidate to test this notion: if an Ne is elicited also by errors which are not consciously detected, it would imply a subliminal process involved in error monitoring that does not necessarily lead to conscious awareness of the error. Indeed, for the past decade, the repeated finding of a similar Ne for errors which became aware and errors that did not achieve awareness, compared to the smaller negativity elicited by correct responses (Correct Response Negativity; CRN), has lent the Ne the prestigious status of an index of subliminal error processing. However, there were several notable exceptions to these findings. The study in the focus of this review (Shalgi and Deouell, 2012) sheds new light on both types of previous results. We found that error detection as reflected by the Ne is correlated with subjective awareness: when awareness (or more importantly lack thereof) is more strictly determined using the wagering paradigm, no Ne is elicited without awareness. This result effectively resolves the issue of why there are many conflicting findings regarding the Ne and error awareness. The average Ne amplitude appears to be influenced by individual criteria for error reporting and therefore, studies containing different mixtures of participants who are more confident of their own performance or less confident, or paradigms that either encourage or don't encourage reporting low confidence errors will show different results. 
Based on this evidence, it is no longer possible to unquestioningly uphold the notion that the amplitude of the Ne is unrelated to subjective awareness, and therefore, that errors are detected without conscious awareness. PMID:24009548

  6. Magnetic-field sensing with quantum error detection under the effect of energy relaxation

    NASA Astrophysics Data System (ADS)

    Matsuzaki, Yuichiro; Benjamin, Simon

    2017-03-01

    A solid state spin is an attractive system with which to realize an ultrasensitive magnetic field sensor. A spin superposition state will acquire a phase induced by the target field, and we can estimate the field strength from this phase. Recent studies have aimed at improving sensitivity through the use of quantum error correction (QEC) to detect and correct any bit-flip errors that may occur during the sensing period. Here we investigate the performance of a two-qubit sensor employing QEC and under the effect of energy relaxation. Surprisingly, we find that the standard QEC technique to detect and recover from an error does not improve the sensitivity compared with the single-qubit sensors. This is a consequence of the fact that the energy relaxation induces both a phase-flip and a bit-flip noise where the former noise cannot be distinguished from the relative phase induced from the target fields. However, we have found that we can improve the sensitivity if we adopt postselection to discard the state when error is detected. Even when quantum error detection is moderately noisy, and allowing for the cost of the postselection technique, we find that this two-qubit system shows an advantage in sensing over a single qubit in the same conditions.
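
    The benefit of postselection over plain detection can be seen in a toy Monte Carlo that is far simpler than the paper's energy-relaxation model: two qubits suffer independent bit-flips with probability p, a parity check flags single flips, and flagged shots are discarded. Among the kept shots only double flips survive, so the residual error rate falls from p to roughly p²/((1-p)² + p²). All names and parameters below are illustrative.

```python
import random

def two_qubit_trial(p, rng):
    """One sensing shot: independent bit-flips with probability p on each qubit."""
    flips = [rng.random() < p for _ in range(2)]
    detected = flips[0] != flips[1]        # parity check flags a single flip
    corrupted = flips[0] and flips[1]      # a double flip slips past the check
    return detected, corrupted

def postselected_error_rate(p=0.1, shots=20000, seed=1):
    """Residual error rate among kept shots, and the fraction of shots kept."""
    rng = random.Random(seed)
    kept = errors = 0
    for _ in range(shots):
        detected, corrupted = two_qubit_trial(p, rng)
        if detected:
            continue                       # postselection: discard flagged shots
        kept += 1
        errors += corrupted
    return errors / kept, kept / shots
```

    For p = 0.1 the kept shots show a residual error rate of around one percent, well below the single-qubit rate p, at the cost of discarding roughly a fifth of the shots; this trade-off is the "cost of the postselection technique" weighed in the abstract.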

  7. Automatic detection of MLC relative position errors for VMAT using the EPID-based picket fence test

    NASA Astrophysics Data System (ADS)

    Christophides, Damianos; Davies, Alex; Fleckney, Mark

    2016-12-01

    Multi-leaf collimators (MLCs) ensure the accurate delivery of treatments requiring complex beam fluences, such as intensity modulated radiotherapy and volumetric modulated arc therapy. The purpose of this work is to automate the detection of MLC relative position errors ⩾0.5 mm using electronic portal imaging device-based picket fence tests and to compare the results to the qualitative assessment currently in use. Picket fence tests with and without intentional MLC errors were measured weekly on three Varian linacs. The picket fence images analysed covered a time period ranging between 14-20 months depending on the linac. An algorithm was developed that calculated the MLC error for each leaf pair present in the picket fence images. The baseline error distributions of each linac were characterised for an initial period of 6 months and compared with the intentional MLC errors using statistical metrics. The distributions of the median and the one-sample Kolmogorov-Smirnov test p-value exhibited no overlap between baseline and intentional errors and were used retrospectively to automatically detect MLC errors in routine clinical practice. Agreement was found between the MLC errors detected by the automatic method and the fault reports during clinical use, as well as interventions for MLC repair and calibration. In conclusion, the method presented provides full automation of MLC quality assurance, based on individual linac performance characteristics. The use of the automatic method has been shown to provide early warning of MLC errors that resulted in clinical downtime.
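
    The flagging idea, comparing each new picket-fence measurement's leaf-error distribution against the linac's baseline via a Kolmogorov-Smirnov-style distance and a median shift, can be sketched as below. The tolerance values here are invented for the example, whereas the paper derives its thresholds from each linac's own baseline period, and it uses a one-sample KS p-value rather than the two-sample distance shown.

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov distance between empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sample, x):
        return sum(1 for v in sample if v <= x) / len(sample)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2.0

def mlc_flagged(baseline_mm, weekly_mm, ks_tol=0.5, median_tol=0.25):
    """Flag a weekly measurement whose leaf errors drift from the baseline."""
    return (ks_statistic(baseline_mm, weekly_mm) > ks_tol
            or abs(median(weekly_mm) - median(baseline_mm)) > median_tol)
```

    A 0.5 mm systematic leaf offset shifts both the median and the empirical distribution, so it trips either criterion, while week-to-week noise within the baseline spread does not.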

  8. Caffeine enhances real-world language processing: evidence from a proofreading task.

    PubMed

    Brunyé, Tad T; Mahoney, Caroline R; Rapp, David N; Ditman, Tali; Taylor, Holly A

    2012-03-01

    Caffeine has become the most prevalently consumed psychostimulant in the world, but its influences on daily real-world functioning are relatively unknown. The present work investigated the effects of caffeine (0 mg, 100 mg, 200 mg, 400 mg) on a commonplace language task that required readers to identify and correct 4 error types in extended discourse: simple local errors (misspelling 1- to 2-syllable words), complex local errors (misspelling 3- to 5-syllable words), simple global errors (incorrect homophones), and complex global errors (incorrect subject-verb agreement and verb tense). In 2 placebo-controlled, double-blind studies using repeated-measures designs, we found higher detection and repair rates for complex global errors, asymptoting at 200 mg in low consumers (Experiment 1) and peaking at 400 mg in high consumers (Experiment 2). In both cases, covariate analyses demonstrated that arousal state mediated the relationship between caffeine consumption and the detection and repair of complex global errors. Detection and repair rates for the other 3 error types were not affected by caffeine consumption. Taken together, we demonstrate that caffeine has differential effects on error detection and repair as a function of dose and error type, and this relationship is closely tied to caffeine's effects on subjective arousal state. These results support the notion that central nervous system stimulants may enhance global processing of language-based materials and suggest that such effects may originate in caffeine-related right hemisphere brain processes. Implications for understanding the relationships between caffeine consumption and real-world cognitive functioning are discussed. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  9. On the sensitivity of TG-119 and IROC credentialing to TPS commissioning errors.

    PubMed

    McVicker, Drew; Yin, Fang-Fang; Adamson, Justus D

    2016-01-08

    We investigate the sensitivity of IMRT commissioning using the TG-119 C-shape phantom and credentialing with the IROC head and neck phantom to treatment planning system commissioning errors. We introduced errors into the various aspects of the commissioning process for a 6X photon energy modeled using the analytical anisotropic algorithm within a commercial treatment planning system. Errors were introduced into the various components of the dose calculation algorithm, including primary photons, secondary photons, electron contamination, and MLC parameters. For each error we evaluated the probability that it could be committed unknowingly during the dose algorithm commissioning stage, and the probability of it being identified during the verification stage. The clinical impact of each commissioning error was evaluated using representative IMRT plans including low and intermediate risk prostate, head and neck, mesothelioma, and scalp; the sensitivity of the TG-119 and IROC phantoms was evaluated by comparing dosimetric changes to the dose planes where film measurements occur and change in point doses where dosimeter measurements occur. No commissioning errors were found to have both a low probability of detection and high clinical severity. When errors do occur, the IROC credentialing and TG-119 commissioning criteria are generally effective at detecting them; however, for the IROC phantom, OAR point-dose measurements are the most sensitive despite being currently excluded from IROC analysis. Point-dose measurements with an absolute dose constraint were the most effective at detecting errors, while film analysis using a gamma comparison and the IROC film distance to agreement criteria were less effective at detecting the specific commissioning errors implemented here.

  10. Efficient detection of dangling pointer error for C/C++ programs

    NASA Astrophysics Data System (ADS)

    Zhang, Wenzhe

    2017-08-01

    Dangling pointer errors are pervasive in C/C++ programs and very hard to detect. This paper introduces an efficient detector for dangling pointer errors in C/C++ programs. By selectively leaving some memory accesses unmonitored, our method reduces the memory-monitoring overhead and thus achieves better performance than previous methods. Experiments show that our method achieves an average speedup of 9% over a previous compiler-instrumentation-based method and of more than 50% over a previous page-protection-based method.

  11. Method and apparatus for detecting timing errors in a system oscillator

    DOEpatents

    Gliebe, Ronald J.; Kramer, William R.

    1993-01-01

    A method of detecting timing errors in a system oscillator for an electronic device, such as a power supply, includes the step of comparing a system oscillator signal with a delayed generated signal and generating a signal representative of the timing error when the system oscillator signal is not identical to the delayed signal. An LED indicates to an operator that a timing error has occurred. A hardware circuit implements the above-identified method.

  12. ChromatoGate: A Tool for Detecting Base Mis-Calls in Multiple Sequence Alignments by Semi-Automatic Chromatogram Inspection

    PubMed Central

    Alachiotis, Nikolaos; Vogiatzi, Emmanouella; Pavlidis, Pavlos; Stamatakis, Alexandros

    2013-01-01

    Automated DNA sequencers generate chromatograms that contain raw sequencing data. They also generate data that translates the chromatograms into molecular sequences of A, C, G, T, or N (undetermined) characters. Since chromatogram translation programs frequently introduce errors, a manual inspection of the generated sequence data is required. As sequence numbers and lengths increase, visual inspection and manual correction of chromatograms and corresponding sequences on a per-peak and per-nucleotide basis becomes an error-prone, time-consuming, and tedious process. Here, we introduce ChromatoGate (CG), an open-source software that accelerates and partially automates the inspection of chromatograms and the detection of sequencing errors for bidirectional sequencing runs. To provide users full control over the error correction process, a fully automated error correction algorithm has not been implemented. Initially, the program scans a given multiple sequence alignment (MSA) for potential sequencing errors, assuming that each polymorphic site in the alignment may be attributed to a sequencing error with a certain probability. The guided MSA assembly procedure in ChromatoGate detects chromatogram peaks of all characters in an alignment that lead to polymorphic sites, given a user-defined threshold. The threshold value represents the sensitivity of the sequencing error detection mechanism. After this pre-filtering, the user only needs to inspect a small number of peaks in every chromatogram to correct sequencing errors. Finally, we show that correcting sequencing errors is important, because population genetic and phylogenetic inferences can be misled by MSAs with uncorrected mis-calls. Our experiments indicate that estimates of population mutation rates can be affected two- to three-fold by uncorrected errors. PMID:24688709

  13. ChromatoGate: A Tool for Detecting Base Mis-Calls in Multiple Sequence Alignments by Semi-Automatic Chromatogram Inspection.

    PubMed

    Alachiotis, Nikolaos; Vogiatzi, Emmanouella; Pavlidis, Pavlos; Stamatakis, Alexandros

    2013-01-01

    Automated DNA sequencers generate chromatograms that contain raw sequencing data. They also generate data that translates the chromatograms into molecular sequences of A, C, G, T, or N (undetermined) characters. Since chromatogram translation programs frequently introduce errors, a manual inspection of the generated sequence data is required. As sequence numbers and lengths increase, visual inspection and manual correction of chromatograms and corresponding sequences on a per-peak and per-nucleotide basis becomes an error-prone, time-consuming, and tedious process. Here, we introduce ChromatoGate (CG), an open-source software that accelerates and partially automates the inspection of chromatograms and the detection of sequencing errors for bidirectional sequencing runs. To provide users full control over the error correction process, a fully automated error correction algorithm has not been implemented. Initially, the program scans a given multiple sequence alignment (MSA) for potential sequencing errors, assuming that each polymorphic site in the alignment may be attributed to a sequencing error with a certain probability. The guided MSA assembly procedure in ChromatoGate detects chromatogram peaks of all characters in an alignment that lead to polymorphic sites, given a user-defined threshold. The threshold value represents the sensitivity of the sequencing error detection mechanism. After this pre-filtering, the user only needs to inspect a small number of peaks in every chromatogram to correct sequencing errors. Finally, we show that correcting sequencing errors is important, because population genetic and phylogenetic inferences can be misled by MSAs with uncorrected mis-calls. Our experiments indicate that estimates of population mutation rates can be affected two- to three-fold by uncorrected errors.

  14. Projective rectification of infrared images from air-cooled condenser temperature measurement by using projection profile features and cross-ratio invariability.

    PubMed

    Xu, Lijun; Chen, Lulu; Li, Xiaolu; He, Tao

    2014-10-01

    In this paper, we propose a projective rectification method for infrared images obtained from the measurement of temperature distribution on an air-cooled condenser (ACC) surface by using projection profile features and cross-ratio invariability. In the research, the infrared (IR) images acquired by the four IR cameras utilized are distorted to different degrees. To rectify the distorted IR images, the sizes of the acquired images are first enlarged by means of bicubic interpolation. Then, uniformly distributed control points are extracted in the enlarged images by constructing quadrangles with detected vertical lines and detected or constructed horizontal lines. The corresponding control points in the anticipated undistorted IR images are extracted by using projection profile features and cross-ratio invariability. Finally, a third-order polynomial rectification model is established and the coefficients of the model are computed with the mapping relationship between the control points in the distorted and anticipated undistorted images. Experimental results obtained from an industrial ACC unit show that the proposed method performs much better than any previous method we have adopted. Furthermore, all rectified images are stitched together to obtain a complete image of the whole ACC surface with a much higher spatial resolution than that obtained by using a single camera, which is not only useful but also necessary for more accurate and comprehensive analysis of ACC performance and more reliable optimization of ACC operations.
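The cross-ratio invariability the rectification relies on is easy to demonstrate in one dimension: the cross-ratio of four collinear points is unchanged by any projective (Möbius) map, which is what lets corresponding control points be located in the undistorted image. A minimal sketch, with hypothetical map coefficients and point positions:

```python
def cross_ratio(x1, x2, x3, x4):
    """Cross-ratio of four collinear points; invariant under projective maps."""
    return ((x1 - x3) * (x2 - x4)) / ((x2 - x3) * (x1 - x4))

def projective(x, a=2.0, b=1.0, c=0.5, d=3.0):
    # A hypothetical 1-D projective (Moebius) transformation x -> (ax+b)/(cx+d).
    return (a * x + b) / (c * x + d)

pts = [0.0, 1.0, 2.0, 4.0]
before = cross_ratio(*pts)
after = cross_ratio(*[projective(x) for x in pts])
assert abs(before - after) < 1e-9  # the cross-ratio survives the distortion
```

Given three known control points and the cross-ratio measured in the distorted image, the position of a fourth point in the anticipated undistorted image can be solved from this equality.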

  15. Robust feature matching via support-line voting and affine-invariant ratios

    NASA Astrophysics Data System (ADS)

    Li, Jiayuan; Hu, Qingwu; Ai, Mingyao; Zhong, Ruofei

    2017-10-01

    Robust image matching is crucial for many applications of remote sensing and photogrammetry, such as image fusion, image registration, and change detection. In this paper, we propose a robust feature matching method based on support-line voting and affine-invariant ratios. We first use popular feature matching algorithms, such as SIFT, to obtain a set of initial matches. A support-line descriptor based on multiple adaptive binning gradient histograms is subsequently applied in the support-line voting stage to filter outliers. In addition, we use affine-invariant ratios computed by a two-line structure to refine the matching results and estimate the local affine transformation. The local affine model is more robust to distortions caused by elevation differences than the global affine transformation, especially for high-resolution remote sensing images and UAV images. Thus, the proposed method is suitable for both rigid and non-rigid image matching problems. Finally, we extract as many high-precision correspondences as possible based on the local affine extension and build a grid-wise affine model for remote sensing image registration. We compare the proposed method with six state-of-the-art algorithms on several data sets and show that our method significantly outperforms the other methods. The proposed method achieves 94.46% average precision on 15 challenging remote sensing image pairs, while the second-best method, RANSAC, only achieves 70.3%. In addition, the number of detected correct matches of the proposed method is approximately four times the number of initial SIFT matches.
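The affine-invariant ratios used in the refinement stage rest on a basic property: an affine map preserves ratios of lengths along a line. A short sketch with a hypothetical affine transform (the matrix and translation below are illustrative, not from the paper):

```python
def affine(p, A=((1.3, 0.4), (-0.2, 0.9)), t=(5.0, -2.0)):
    """Apply a hypothetical 2-D affine transform x -> Ax + t."""
    x, y = p
    return (A[0][0] * x + A[0][1] * y + t[0],
            A[1][0] * x + A[1][1] * y + t[1])

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

# Three collinear points: the ratio of sub-segment lengths is affine invariant.
p, q, r = (0.0, 0.0), (1.0, 2.0), (2.5, 5.0)   # all on the line y = 2x
before = dist(p, q) / dist(q, r)
pa, qa, ra = affine(p), affine(q), affine(r)
after = dist(pa, qa) / dist(qa, ra)
assert abs(before - after) < 1e-9
```

Because the invariance holds only locally under elevation-induced distortion, the paper estimates a local affine model per region rather than a single global one.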

  16. Detecting genotyping errors and describing black bear movement in northern Idaho

    Treesearch

    Michael K. Schwartz; Samuel A. Cushman; Kevin S. McKelvey; Jim Hayden; Cory Engkjer

    2006-01-01

    Non-invasive genetic sampling has become a favored tool to enumerate wildlife. Genetic errors, caused by poor quality samples, can lead to substantial biases in numerical estimates of individuals. We demonstrate how the computer program DROPOUT can detect amplification errors (false alleles and allelic dropout) in a black bear (Ursus americanus) dataset collected in...

  17. MO-FG-202-06: Improving the Performance of Gamma Analysis QA with Radiomics- Based Image Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wootton, L; Nyflot, M; Ford, E

    2016-06-15

    Purpose: The use of gamma analysis for IMRT quality assurance has well-known limitations. Traditionally, a simple thresholding technique is used to evaluate passing criteria. However, like any image, the gamma distribution is rich in information, most of which thresholding discards. We therefore propose a novel method of analyzing gamma images that uses quantitative image features borrowed from radiomics, with the goal of improving error detection. Methods: 368 gamma images were generated from 184 clinical IMRT beams. For each beam the dose to a phantom was measured with EPID dosimetry and compared to the TPS dose calculated with and without normally distributed (2 mm sigma) errors in MLC positions. The magnitude of 17 intensity-histogram and size-zone radiomic features was derived from each image. The features that differed most significantly between image sets were determined with ROC analysis. A linear machine-learning model was trained on these features, using 180 gamma images, to classify images as with or without errors. The model was then applied to an independent validation set of 188 additional gamma distributions, half with and half without errors. Results: The most significant features for detecting errors were histogram kurtosis (p=0.007) and three size-zone metrics (p<1e-6 for each). The size-zone metrics detected clusters of high gamma-value pixels under mispositioned MLCs. The model applied to the validation set had an AUC of 0.8, compared to 0.56 for traditional gamma analysis with the decision threshold restricted to 98% or less. Conclusion: A radiomics-based image analysis method was developed that is more effective in detecting errors than traditional gamma analysis. Though the pilot study here considers only MLC position errors, radiomics-based methods for other error types are being developed, which may provide better error detection and useful information on the source of detected errors.
This work was partially supported by a grant from the Agency for Healthcare Research and Quality, grant number R18 HS022244-01.

  18. Corrections of clinical chemistry test results in a laboratory information system.

    PubMed

    Wang, Sihe; Ho, Virginia

    2004-08-01

    The recently released reports by the Institute of Medicine, To Err Is Human and Patient Safety, have received national attention because of their focus on the problem of medical errors. Although a small number of studies have reported on errors in general clinical laboratories, there are, to our knowledge, no reported studies that focus on errors in pediatric clinical laboratory testing. To characterize the errors that led to corrections of pediatric clinical chemistry results in the laboratory information system, Misys. To provide initial data on the errors detected in pediatric clinical chemistry laboratories in order to improve patient safety in pediatric health care. All clinical chemistry staff members were informed of the study and were requested to report in writing when a correction was made in the laboratory information system, Misys. Errors were detected either by the clinicians (the results did not fit the patients' clinical conditions) or by the laboratory technologists (the results were double-checked, and the worksheets were carefully examined twice a day). No incident that was discovered before or during the final validation was included. On each Monday of the study, we generated a report from Misys that listed all of the corrections made during the previous week. We then categorized the corrections according to the types and stages of the incidents that led to the corrections. A total of 187 incidents were detected during the 10-month study, representing a 0.26% error detection rate per requisition. The distribution of the detected incidents included 31 (17%) preanalytic incidents, 46 (25%) analytic incidents, and 110 (59%) postanalytic incidents. The errors related to noninterfaced tests accounted for 50% of the total incidents and for 37% of the affected tests and orderable panels, while the noninterfaced tests and panels accounted for 17% of the total test volume in our laboratory.
This pilot study provided the rate and categories of errors detected in a pediatric clinical chemistry laboratory based on the corrections of results in the laboratory information system. A direct interface of the instruments to the laboratory information system showed that it had favorable effects on reducing laboratory errors.

  19. GBAS Ionospheric Anomaly Monitoring Based on a Two-Step Approach

    PubMed Central

    Zhao, Lin; Yang, Fuxin; Li, Liang; Ding, Jicheng; Zhao, Yuxin

    2016-01-01

    As one significant component of space environmental weather, the ionosphere has to be monitored using Global Positioning System (GPS) receivers for the Ground-Based Augmentation System (GBAS). This is because an ionospheric anomaly can pose a potential threat to GBAS support of safety-critical services. The traditional code-carrier divergence (CCD) methods, which have been widely used to detect variations of the ionospheric gradient for GBAS, adopt a linear time-invariant low-pass filter to suppress the effect of high-frequency noise on the detection of the ionospheric anomaly. However, there is a trade-off between response time and estimation accuracy due to the fixed time constants. To relax this limitation, a two-step approach (TSA) is proposed that integrates cascaded linear time-invariant low-pass filters with an adaptive Kalman filter to detect ionospheric gradient anomalies. The performance of the proposed method is tested by using simulated and real-world data, respectively. The simulation results show that the TSA can detect ionospheric gradient anomalies quickly, even in severe noise. Compared to the traditional CCD methods, the experiments on real-world GPS data indicate that the average estimation accuracy of the ionospheric gradient improves by more than 31.3%, and the average response time to an ionospheric gradient at a rate of 0.018 m/s improves by more than 59.3%, which demonstrates the ability of TSA to detect a small ionospheric gradient more rapidly. PMID:27240367
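The linear time-invariant low-pass stage of a CCD monitor can be sketched as a first-order recursive filter on the code-minus-carrier observable; a steady divergence ramp drives the filtered rate estimate toward the true rate. The time constant, sample interval, and input ramp below are hypothetical, chosen only to illustrate the fixed-time-constant behaviour:

```python
def ccd_rate(code_minus_carrier, tau=200.0, dt=0.5):
    """First-order linear time-invariant low-pass estimate of the
    code-minus-carrier divergence rate (a common CCD formulation;
    tau and dt are hypothetical values in seconds)."""
    d, prev = 0.0, code_minus_carrier[0]
    rates = []
    for z in code_minus_carrier[1:]:
        # d_k = ((tau - dt)/tau) * d_{k-1} + (z_k - z_{k-1}) / tau
        d = ((tau - dt) / tau) * d + (z - prev) / tau
        prev = z
        rates.append(d)
    return rates

# A constant 0.018 m/s divergence sampled at 2 Hz for 2000 s: the filtered
# rate settles near 0.018 m/s, but only after several time constants.
ramp = [0.018 * 0.5 * k for k in range(4000)]
assert abs(ccd_rate(ramp)[-1] - 0.018) < 1e-4
```

The slow settling visible here is exactly the response-time/accuracy trade-off the TSA addresses by adding an adaptive Kalman stage.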

  20. CUSUM-Logistic Regression analysis for the rapid detection of errors in clinical laboratory test results.

    PubMed

    Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T

    2016-02-01

    The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in the time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient-based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week, and the time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20, with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors.
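The CUSUM tally step can be sketched as a standard one-sided cumulative sum over per-result error scores (e.g. logistic-regression outputs), alarming when the accumulated excess crosses a decision threshold. The allowance k, threshold h, and score streams below are hypothetical, not the paper's fitted values:

```python
def cusum_alarm(scores, k=0.5, h=4.0):
    """One-sided CUSUM over per-result error scores.
    k is the allowance and h the decision threshold (hypothetical values).
    Returns the 1-based index of the first alarm, or None if no alarm."""
    s = 0.0
    for i, x in enumerate(scores, start=1):
        s = max(0.0, s + x - k)  # accumulate only the excess above the allowance
        if s > h:
            return i
    return None

in_control = [0.1] * 50              # low error scores: the sum never builds up
shifted = [0.1] * 20 + [0.9] * 30    # an error condition appears at result 21
assert cusum_alarm(in_control) is None
assert cusum_alarm(shifted) == 31    # detected 11 results after the shift
```

The average run length quoted in the abstract is simply the mean of this first-alarm index over many simulated error onsets.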

  1. SU-E-T-392: Evaluation of Ion Chamber/film and Log File Based QA to Detect Delivery Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, C; Mason, B; Kirsner, S

    2015-06-15

    Purpose: Ion chamber and film (ICAF) is a method used to verify patient dose prior to treatment. More recently, log-file-based QA has been shown to be an alternative to measurement-based QA. In this study, we delivered VMAT plans with and without errors to determine whether ICAF and/or log-file-based QA was able to detect the errors. Methods: Using two VMAT patients, the original treatment plan plus 7 additional plans with introduced delivery errors were generated and delivered. The erroneous plans had gantry; collimator; MLC; gantry and collimator; collimator and MLC; MLC and gantry; and gantry, collimator, and MLC errors. The gantry and collimator errors were off by 4° for one of the two arcs. The MLC error introduced was one in which the opening aperture did not move throughout the delivery of the field. For each delivery, an ICAF measurement was made as well as a dose comparison based upon log files. Passing criteria to evaluate the plans were ion chamber less than 5% and film with 90% of pixels passing the 3 mm/3% gamma analysis (GA). For log file analysis, 90% of voxels must pass the 3 mm/3% 3D GA and beam parameters must match what was in the plan. Results: Two original plans were delivered and passed both ICAF and log-file-based QA. Both ICAF and log file QA met the dosimetry criteria on 4 of the 12 erroneous cases analyzed (2 cases were not analyzed). For the log file analysis, all 12 erroneous plans flagged a mismatch between the delivery and what was planned. The 8 plans that did not meet criteria all had MLC errors. Conclusion: Our study demonstrates that log-file-based pre-treatment QA was able to detect small errors that may not be detected using ICAF, and both methods were able to detect larger delivery errors.

  2. How do Community Pharmacies Recover from E-prescription Errors?

    PubMed Central

    Odukoya, Olufunmilola K.; Stone, Jamie A.; Chui, Michelle A.

    2014-01-01

    Background The use of e-prescribing is increasing annually, with over 788 million e-prescriptions received in US pharmacies in 2012. Approximately 9% of e-prescriptions have medication errors. Objective To describe the process used by community pharmacy staff to detect, explain, and correct e-prescription errors. Methods The error recovery conceptual framework was employed for data collection and analysis. A total of 13 pharmacists and 14 technicians from five community pharmacies in Wisconsin participated in the study. A combination of data collection methods was used, including direct observations, interviews, and focus groups. The transcription and content analysis of recordings were guided by the three-step error recovery model. Results Most of the e-prescription errors were detected during the entering of information into the pharmacy system. These errors were detected by both pharmacists and technicians using a variety of strategies, which included: (1) performing double checks of e-prescription information; (2) printing the e-prescription to paper and confirming the information on the computer screen against the paper printout; and (3) using colored pens to highlight important information. Strategies used for explaining errors included: (1) careful review of the patient's medication history; (2) pharmacist consultation with patients; (3) consultation with another pharmacy team member; and (4) use of online resources. To correct e-prescription errors, participants made educated guesses about the prescriber's intent or contacted the prescriber via telephone or fax. When e-prescription errors were encountered in the community pharmacies, the primary goal of participants was to get the order right for patients by verifying the prescriber's intent. Conclusion Pharmacists and technicians play an important role in preventing e-prescription errors through the detection of errors and the verification of prescribers' intent.
Future studies are needed to examine factors that facilitate or hinder recovery from e-prescription errors. PMID:24373898

  3. Masked and unmasked error-related potentials during continuous control and feedback

    NASA Astrophysics Data System (ADS)

    Lopes Dias, Catarina; Sburlea, Andreea I.; Müller-Putz, Gernot R.

    2018-06-01

    The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain–computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. Objective. We developed a task in which subjects have continuous control of a cursor's position by means of a joystick. The cursor's position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. Approach. This paper studies the electroencephalographically (EEG) measurable signatures caused by a loss of control over the cursor's trajectory, causing a target miss. Main results. In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average Cohen's kappa, average TPR = 81.8%, and average TNR = 96.4%). Time-locked classification analysis between masked error and unmasked error classes revealed results at chance level (average Cohen's kappa, average TPR = 60.9%, and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and an average TPR of 64.9%. Significance. The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification.
The asynchronous classification results suggest that the feedback modality did not hinder the asynchronous detection of ErrPs.

  4. Music and Piaget: Spinning a Slender Thread.

    ERIC Educational Resources Information Center

    Wohlwill, Joachim F.

    Repeated but ill-advised attempts have been made by music educators to relate the Piagetian concept of concrete operational thought to children's understanding of music. The attempts have been focused on the apparent link between the child's detection of invariance in musical patterns and the concept of conservation. These attempts are ill-advised…

  5. Metainference: A Bayesian inference method for heterogeneous systems

    PubMed Central

    Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

    2016-01-01

    Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called “metainference,” that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors. PMID:26844300

  6. Quantifying uncertainty in climate change science through empirical information theory.

    PubMed

    Majda, Andrew J; Gershgorin, Boris

    2010-08-24

    Quantifying the uncertainty for the present climate and the predictions of climate change in the suite of imperfect Atmosphere Ocean Science (AOS) computer models is a central issue in climate change science. Here, a systematic approach to these issues with firm mathematical underpinning is developed through empirical information theory. An information metric to quantify AOS model errors in the climate is proposed here which incorporates both coarse-grained mean model errors as well as covariance ratios in a transformation invariant fashion. The subtle behavior of model errors with this information metric is quantified in an instructive statistically exactly solvable test model with direct relevance to climate change science including the prototype behavior of tracer gases such as CO(2). Formulas for identifying the most sensitive climate change directions using statistics of the present climate or an AOS model approximation are developed here; these formulas just involve finding the eigenvector associated with the largest eigenvalue of a quadratic form computed through suitable unperturbed climate statistics. These climate change concepts are illustrated on a statistically exactly solvable one-dimensional stochastic model with relevance for low frequency variability of the atmosphere. Viable algorithms for implementation of these concepts are discussed throughout the paper.
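The eigenvector computation described above can be sketched with plain power iteration: the most sensitive direction is the leading eigenvector of the quadratic form built from the unperturbed climate statistics. The 2x2 matrix below is a hypothetical stand-in for that quadratic form, not a value from the paper:

```python
def power_iteration(Q, iters=200):
    """Leading eigenvector of a small symmetric matrix by power iteration."""
    n = len(Q)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(Q[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]  # renormalize each step
    return v

# Hypothetical 2x2 quadratic form computed from unperturbed climate statistics.
Q = [[3.0, 1.0], [1.0, 2.0]]
v = power_iteration(Q)
# The Rayleigh quotient recovers the largest eigenvalue, here (5 + sqrt(5))/2.
lam = sum(v[i] * sum(Q[i][j] * v[j] for j in range(2)) for i in range(2))
assert abs(lam - (5 + 5 ** 0.5) / 2) < 1e-6
```

For the small systems considered in such test models a direct symmetric eigensolver would do equally well; power iteration is shown only because it makes the "largest eigenvalue, associated eigenvector" step explicit.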

  7. Data consistency criterion for selecting parameters for k-space-based reconstruction in parallel imaging.

    PubMed

    Nana, Roger; Hu, Xiaoping

    2010-01-01

    k-space-based reconstruction in parallel imaging depends on the reconstruction kernel setting, including its support. An optimal choice of the kernel depends on the calibration data, coil geometry and signal-to-noise ratio, as well as the criterion used. In this work, data consistency, imposed by the shift invariance requirement of the kernel, is introduced as a goodness measure of k-space-based reconstruction in parallel imaging and demonstrated. Data consistency error (DCE) is calculated as the sum of squared difference between the acquired signals and their estimates obtained based on the interpolation of the estimated missing data. A resemblance between DCE and the mean square error in the reconstructed image was found, demonstrating DCE's potential as a metric for comparing or choosing reconstructions. When used for selecting the kernel support for generalized autocalibrating partially parallel acquisition (GRAPPA) reconstruction and the set of frames for calibration as well as the kernel support in temporal GRAPPA reconstruction, DCE led to improved images over existing methods. Data consistency error is efficient to evaluate, robust for selecting reconstruction parameters and suitable for characterizing and optimizing k-space-based reconstruction in parallel imaging.
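The DCE itself is a simple quantity: the sum of squared differences between the acquired k-space samples and their re-estimates obtained by interpolating the reconstructed missing data. A minimal sketch over hypothetical complex samples (the values below are illustrative, not GRAPPA output):

```python
def data_consistency_error(acquired, estimated):
    """Sum of squared differences between acquired k-space samples and the
    values re-estimated from the interpolated (reconstructed) data."""
    return sum(abs(a - e) ** 2 for a, e in zip(acquired, estimated))

# Hypothetical complex k-space samples: a re-estimate closer to the acquired
# data yields a lower DCE, marking the better kernel setting.
acq = [1 + 2j, 0.5 - 1j, -0.3 + 0.1j]
good = [1.02 + 1.98j, 0.49 - 1.01j, -0.31 + 0.12j]
poor = [0.7 + 2.5j, 0.9 - 0.4j, 0.2 - 0.3j]
assert data_consistency_error(acq, good) < data_consistency_error(acq, poor)
```

In the paper this scalar is evaluated for each candidate kernel support (or calibration-frame set), and the setting with the smallest DCE is selected.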

  8. The effect of multi-channel wide dynamic range compression, noise reduction, and the directional microphone on horizontal localization performance in hearing aid wearers.

    PubMed

    Keidser, Gitte; Rohrseitz, Kristin; Dillon, Harvey; Hamacher, Volkmar; Carter, Lyndal; Rass, Uwe; Convery, Elizabeth

    2006-10-01

    This study examined the effect that signal processing strategies used in modern hearing aids, such as multi-channel WDRC, noise reduction, and directional microphones have on interaural difference cues and horizontal localization performance relative to linear, time-invariant amplification. Twelve participants were bilaterally fitted with BTE devices. Horizontal localization testing using a 360 degrees loudspeaker array and broadband pulsed pink noise was performed two weeks, and two months, post-fitting. The effect of noise reduction was measured with a constant noise present at 80 degrees azimuth. Data were analysed independently in the left/right and front/back dimension and showed that of the three signal processing strategies, directional microphones had the most significant effect on horizontal localization performance and over time. Specifically, a cardioid microphone could decrease front/back errors over time, whereas left/right errors increased when different microphones were fitted to left and right ears. Front/back confusions were generally prominent. Objective measurements of interaural differences on KEMAR explained significant shifts in left/right errors. In conclusion, there is scope for improving the sense of localization in hearing aid users.

  9. MPI Runtime Error Detection with MUST: Advances in Deadlock Detection

    DOE PAGES

    Hilbrich, Tobias; Protze, Joachim; Schulz, Martin; ...

    2013-01-01

    The widely used Message Passing Interface (MPI) is complex and rich. As a result, application developers require automated tools to avoid and to detect MPI programming errors. We present the Marmot Umpire Scalable Tool (MUST) that detects such errors with significantly increased scalability. We present improvements to our graph-based deadlock detection approach for MPI, which cover future MPI extensions. Our enhancements also check complex MPI constructs that no previous graph-based detection approach handled correctly. Finally, we present optimizations for the processing of MPI operations that reduce runtime deadlock detection overheads. Existing approaches often require O(p) analysis time per MPI operation, for p processes. We empirically observe that our improvements lead to sub-linear or better analysis time per operation for a wide range of real world applications.
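    The graph-based deadlock detection the abstract refers to reduces, in its simplest form, to finding a cycle in a wait-for graph. A minimal sketch (this ignores MUST's handling of wildcard receives and AND/OR wait semantics, which is where the real complexity lies):

```python
from collections import defaultdict

def find_deadlock(wait_for):
    """Detect a cycle in a wait-for graph: an edge p -> q means process p
    is blocked waiting on q. Returns one deadlocked cycle as a list of
    processes, or None if no cycle exists."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)   # defaults to WHITE
    stack = []

    def dfs(p):
        color[p] = GRAY
        stack.append(p)
        for q in wait_for.get(p, ()):
            if color[q] == GRAY:               # back edge: cycle found
                return stack[stack.index(q):]
            if color[q] == WHITE:
                cycle = dfs(q)
                if cycle:
                    return cycle
        color[p] = BLACK
        stack.pop()
        return None

    for p in list(wait_for):
        if color[p] == WHITE:
            cycle = dfs(p)
            if cycle:
                return cycle
    return None

# ranks 0 and 1 both block in mismatched blocking receives
print(find_deadlock({0: [1], 1: [0]}))   # [0, 1]
print(find_deadlock({0: [1], 1: [2]}))   # None
```

    Each DFS visits every edge once, so a single check is linear in the graph size; MUST's optimizations aim to avoid paying such a cost on every MPI operation.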

  10. Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting

    NASA Technical Reports Server (NTRS)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1987-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256 K bit DRAM's are organized in 32 K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
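    The direct, non-iterative decoding idea can be illustrated on the simpler single-byte-error case: over GF(2^8), the syndromes of a word containing one error e at position i satisfy S1 = e·α^i and S2 = e·α^(2i), so the error location and value follow directly from the syndromes with no error locator polynomial. A toy sketch under that assumption (the paper's DBEC-TBED decoder extends this to two errors and three-error detection):

```python
# GF(256) exp/log tables, primitive polynomial x^8+x^4+x^3+x^2+1 (0x11d)
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def syndromes(received, count):
    # S_j = r(alpha^j): evaluate the received polynomial at alpha^j
    out = []
    for j in range(1, count + 1):
        s = 0
        for i, byte in enumerate(received):
            s ^= gf_mul(byte, EXP[(i * j) % 255])
        out.append(s)
    return out

def correct_single_byte(received):
    """Single-byte-error correction straight from two syndromes:
    S1 = e*alpha^i, S2 = e*alpha^(2i)  =>  i = log(S2) - log(S1)."""
    s1, s2 = syndromes(received, 2)
    if s1 == 0:
        return list(received)          # no single-byte error present
    loc = (LOG[s2] - LOG[s1]) % 255    # error position i
    val = EXP[(LOG[s1] - loc) % 255]   # error value e = S1 / alpha^i
    fixed = list(received)
    fixed[loc] ^= val
    return fixed

recv = [0] * 10        # the all-zero word is a codeword of any linear code
recv[3] ^= 0x42        # inject a single-byte error
print(correct_single_byte(recv) == [0] * 10)   # True
```

    Because every step is table lookups and XORs, this style of decoder maps naturally onto the high-speed combinational logic the paper targets for memory applications.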

  11. Atomicity violation detection using access interleaving invariants

    DOEpatents

    Zhou, Yuanyuan; Lu, Shan; Tucek, Joseph Andrew

    2013-09-10

    During execution of a program, the situation where the atomicity of a pair of instructions that are to be executed atomically is violated is identified, and a bug is detected as occurring in the program at the pair of instructions. The pairs of instructions that are to be executed atomically can be identified in different manners, such as by executing a program multiple times and using the results of those executions to automatically identify the pairs of instructions.
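    The patented approach of learning invariants from passing runs and flagging unseen interleavings can be sketched in a simplified model, in which each observed event is a pair of local instructions plus the type of remote access interleaved between them (all names here are illustrative, not from the patent):

```python
from collections import defaultdict

def train_invariants(runs):
    """Learn access-interleaving invariants: for each local access pair,
    the set of remote-access interleavings observed in passing runs."""
    inv = defaultdict(set)
    for run in runs:
        for pair, remote in run:
            inv[pair].add(remote)
    return inv

def detect_violations(inv, run):
    # flag any interleaving never observed during training
    return [(pair, remote) for pair, remote in run
            if remote not in inv.get(pair, set())]

# training runs: remote reads between the local pair (I1, I2) are benign
passing = [[(("I1", "I2"), "none")], [(("I1", "I2"), "remote-read")]]
inv = train_invariants(passing)

# a remote write sneaking between I1 and I2 breaks the learned invariant,
# suggesting the pair was meant to execute atomically
print(detect_violations(inv, [(("I1", "I2"), "remote-write")]))
```

    The key design point, as in the patent, is that the invariants are extracted automatically from multiple executions rather than annotated by the programmer.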

  12. Adaboost multi-view face detection based on YCgCr skin color model

    NASA Astrophysics Data System (ADS)

    Lan, Qi; Xu, Zhiyong

    2016-09-01

    The traditional Adaboost face detection algorithm trains classifiers on Haar-like features and achieves a low error rate within face regions. Under complex backgrounds, however, the classifiers easily misclassify background regions whose gray-level distribution resembles that of a face, so the error rate of the traditional Adaboost algorithm becomes high. Skin color, one of the most important features of a face, clusters well in the YCgCr color space, so non-face areas can be quickly excluded with a skin color model. Combining the advantages of the Adaboost algorithm and skin color detection, this paper therefore proposes an Adaboost face detection method based on the YCgCr skin color model. Experiments show that, compared with the traditional algorithm, the proposed method significantly improves detection accuracy and reduces detection errors.
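    A minimal sketch of the skin-color gating step, assuming BT.601-style YCgCr conversion coefficients (as commonly given in the skin-color literature) and hypothetical skin thresholds; a real system would tune both on training data before handing candidate regions to the Adaboost cascade:

```python
def rgb_to_ycgcr(r, g, b):
    """RGB (0-255) -> YCgCr: BT.601-style luma plus a green-difference
    channel Cg in place of Cb. Coefficients are an assumption here."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    y  = 16.0  + 65.481 * r + 128.553 * g + 24.966 * b
    cg = 128.0 - 81.085 * r + 112.0   * g - 30.915 * b
    cr = 128.0 + 112.0   * r - 93.786 * g - 18.214 * b
    return y, cg, cr

def is_skin(r, g, b, cg_range=(85, 135), cr_range=(135, 180)):
    # hypothetical thresholds; skin clusters in a compact Cg-Cr box
    _, cg, cr = rgb_to_ycgcr(r, g, b)
    return (cg_range[0] <= cg <= cg_range[1]
            and cr_range[0] <= cr <= cr_range[1])

print(is_skin(220, 170, 140))  # a typical skin tone passes the gate
print(is_skin(0, 200, 0))      # saturated green is excluded
```

    Because the gate is a couple of multiply-adds and comparisons per pixel, it is far cheaper than evaluating the Haar cascade, which is why running it first reduces both runtime and background false positives.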

  13. HyDe: a Python Package for Genome-Scale Hybridization Detection.

    PubMed

    Blischak, Paul D; Chifman, Julia; Wolfe, Andrea D; Kubatko, Laura S

    2018-03-19

    The analysis of hybridization and gene flow among closely related taxa is a common goal for researchers studying speciation and phylogeography. Many methods for hybridization detection use simple site pattern frequencies from observed genomic data and compare them to null models that predict an absence of gene flow. The theory underlying the detection of hybridization using these site pattern probabilities exploits the relationship between the coalescent process for gene trees within population trees and the process of mutation along the branches of the gene trees. For certain models, site patterns are predicted to occur in equal frequency (i.e., their difference is 0), producing a set of functions called phylogenetic invariants. In this paper we introduce HyDe, a software package for detecting hybridization using phylogenetic invariants arising under the coalescent model with hybridization. HyDe is written in Python, and can be used interactively or through the command line using pre-packaged scripts. We demonstrate the use of HyDe on simulated data, as well as on two empirical data sets from the literature. We focus in particular on identifying individual hybrids within population samples and on distinguishing between hybrid speciation and gene flow. HyDe is freely available as an open source Python package under the GNU GPL v3 on both GitHub (https://github.com/pblischak/HyDe) and the Python Package Index (PyPI: https://pypi.python.org/pypi/phyde).
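    The invariant idea can be illustrated with a toy test (this is not HyDe's actual statistic): if a phylogenetic invariant predicts that two site patterns occur in equal frequency under the null model of no gene flow, their difference normalized by its binomial standard error is approximately standard normal, and a large z-score signals a violation consistent with hybridization.

```python
from math import sqrt

def invariant_test(n1, n2):
    """Toy invariant test: under the null, patterns 1 and 2 are equally
    likely, so (n1 - n2) has mean 0 and variance n1 + n2, and the
    normalized difference is approximately a standard normal z-score."""
    n = n1 + n2
    if n == 0:
        return 0.0
    return (n1 - n2) / sqrt(n)

print(invariant_test(5000, 5020))  # near 0: consistent with no gene flow
print(invariant_test(6000, 4000))  # large |z|: invariant violated
```

    HyDe builds its test from site-pattern probabilities under the coalescent with hybridization rather than this simple equal-frequency null, but the detection logic (compare observed pattern counts against an invariant-implied expectation) is the same in spirit.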

  14. Using goal- and grip-related information for understanding the correctness of other's actions: an ERP study.

    PubMed

    van Elk, Michiel; Bousardt, Roel; Bekkering, Harold; van Schie, Hein T

    2012-01-01

    Detecting errors in other's actions is of pivotal importance for joint action, competitive behavior and observational learning. Although many studies have focused on the neural mechanisms involved in detecting low-level errors, relatively little is known about error detection in everyday situations. The present study aimed to identify the functional and neural mechanisms whereby we understand the correctness of other's actions involving well-known objects (e.g. pouring coffee in a cup). Participants observed action sequences in which the correctness of the object grasped and the grip applied to a pair of objects were independently manipulated. Observation of object violations (e.g. grasping the empty cup instead of the coffee pot) resulted in a stronger P3-effect than observation of grip errors (e.g. grasping the coffee pot at the upper part instead of the handle), likely reflecting a reorienting response, directing attention to the relevant location. Following the P3-effect, a parietal slow wave positivity was observed that persisted for grip errors, likely reflecting the detection of an incorrect hand-object interaction. These findings provide new insight into the functional significance of the neurophysiological markers associated with the observation of incorrect actions and suggest that the P3-effect and the subsequent parietal slow wave positivity may reflect the detection of errors at different levels in the action hierarchy. This study thereby elucidates the cognitive processes that support the detection of action violations in the selection of objects and grips.

  15. Virtex-5QV Self Scrubber

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wojahn, Christopher K.

    2015-10-20

    This HDL code (hereafter referred to as "software") implements circuitry in Xilinx Virtex-5QV Field Programmable Gate Array (FPGA) hardware. This software allows the device to self-check the consistency of its own configuration memory for radiation-induced errors. The software then provides the capability to correct any single-bit errors detected in the memory using the device's inherent circuitry, or reload corrupted memory frames when larger errors occur that cannot be corrected with the device's built-in error correction and detection scheme.
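    The scrub policy described above (correct single-bit upsets in place, reload frames whose errors exceed the built-in correction capability) can be sketched abstractly. This is illustrative only: the real design runs in FPGA fabric against the device's internal configuration access port and ECC circuitry, not against a bitwise golden compare.

```python
def scrub(frames, golden):
    """Toy scrub pass over configuration frames (integers as bit vectors).
    Single-bit upsets are flipped back in place, modeling built-in SEC;
    multi-bit upsets force a full frame reload from the golden copy."""
    corrected = reloaded = 0
    for i, (f, g) in enumerate(zip(frames, golden)):
        diff = f ^ g
        if diff == 0:
            continue                     # frame is clean
        if diff & (diff - 1) == 0:       # exactly one bit differs
            frames[i] = f ^ diff         # flip the upset bit back
            corrected += 1
        else:                            # beyond single-bit correction
            frames[i] = g                # reload the whole frame
            reloaded += 1
    return corrected, reloaded

frames = [0b1010, 0b1111, 0b0000]
golden = [0b1010, 0b1101, 0b0110]   # frame 1: 1-bit upset, frame 2: 2-bit
print(scrub(frames, golden))        # (1, 1); frames now match golden
```

    The `diff & (diff - 1)` test (true only for powers of two) is a standard way to distinguish single-bit from multi-bit corruption in one operation.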

  16. Structure and structure-preserving algorithms for plasma physics

    NASA Astrophysics Data System (ADS)

    Morrison, P. J.

    2016-10-01

    Conventional simulation studies of plasma physics are based on numerically solving the underpinning differential (or integro-differential) equations. Usual algorithms in general do not preserve known geometric structure of the physical systems, such as the local energy-momentum conservation law, Casimir invariants, and the symplectic structure (Poincaré invariants). As a consequence, numerical errors may accumulate coherently with time and long-term simulation results may be unreliable. Recently, a series of geometric algorithms that preserve the geometric structures resulting from the Hamiltonian and action principle (HAP) form of theoretical models in plasma physics have been developed by several authors. The superiority of these geometric algorithms has been demonstrated with many test cases. For example, symplectic integrators for guiding-center dynamics have been constructed to preserve the noncanonical symplectic structures and bound the energy-momentum errors for all simulation time-steps; variational and symplectic algorithms have been discovered and successfully applied to the Vlasov-Maxwell system, MHD, and other magnetofluid equations as well. Hamiltonian truncations of the full Vlasov-Maxwell system have opened the field of discrete gyrokinetics and led to the GEMPIC algorithm. The vision that future numerical capabilities in plasma physics should be based on structure-preserving geometric algorithms will be presented. It will be argued that the geometric consequences of HAP form and resulting geometric algorithms suitable for plasma physics studies cannot be adapted from existing mathematical literature but, rather, need to be discovered and worked out by theoretical plasma physicists. The talk will review existing HAP structures of plasma physics for a variety of models, and how they have been adapted for numerical implementation. Supported by DOE DE-FG02-04ER-54742.
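    The claim that non-geometric integrators accumulate energy error coherently while symplectic ones keep it bounded is easy to check on a harmonic oscillator. The sketch below contrasts explicit Euler with symplectic (semi-implicit) Euler, the simplest structure-preserving integrator:

```python
def explicit_euler(q, p, dt, steps):
    # non-geometric: energy error grows secularly with time
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q
    return q, p

def symplectic_euler(q, p, dt, steps):
    # structure-preserving: energy error oscillates but stays bounded
    for _ in range(steps):
        p = p - dt * q      # kick
        q = q + dt * p      # drift, using the updated momentum
    return q, p

def energy(q, p):
    # Hamiltonian of the unit harmonic oscillator
    return 0.5 * (p * p + q * q)

E0 = energy(1.0, 0.0)
qe, pe = explicit_euler(1.0, 0.0, 0.01, 10000)
qs, ps = symplectic_euler(1.0, 0.0, 0.01, 10000)
print(abs(energy(qe, pe) - E0))   # large: coherent drift
print(abs(energy(qs, ps) - E0))   # small: bounded for all time
```

    The only difference between the two loops is that the symplectic drift step uses the already-updated momentum; that single change makes the map preserve the symplectic structure (it has unit Jacobian determinant), which is what bounds the energy error over arbitrarily long runs.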

  17. HMI Data Driven Magnetohydrodynamic Model Predicted Active Region Photospheric Heating Rates: Their Scale Invariant, Flare Like Power Law Distributions, and Their Possible Association With Flares

    NASA Technical Reports Server (NTRS)

    Goodman, Michael L.; Kwan, Chiman; Ayhan, Bulent; Shang, Eric L.

    2017-01-01

    A data driven, near photospheric, 3D, non-force-free magnetohydrodynamic model predicts time series of the complete current density, and the resistive heating rate Q at the photosphere in neutral line regions (NLRs) of 14 active regions (ARs). The model is driven by time series of the magnetic field B observed by the Helioseismic and Magnetic Imager on the Solar Dynamics Observatory (SDO) satellite. Spurious Doppler periods due to SDO orbital motion are filtered out of the time series for B in every AR pixel. Errors in B due to these periods can be significant. The number of occurrences N(q) of values of Q ≥ q for each AR time series is found to be a scale invariant power law distribution, N(Q) ∝ Q^(−s), above an AR dependent threshold value of Q, where 0.3952 ≤ s ≤ 0.5298 with mean and standard deviation of 0.4678 and 0.0454, indicating little variation between ARs. Observations show that the number of occurrences N(E) of coronal flares with a total energy released ≥ E obeys the same type of distribution, N(E) ∝ E^(−S), above an AR dependent threshold value of E, with 0.38 ≲ S ≲ 0.60, also with little variation among ARs. Within error margins the ranges of s and S are nearly identical. This strong similarity between N(Q) and N(E) suggests a fundamental connection between the process that drives coronal flares and the process that drives photospheric NLR heating rates in ARs. In addition, results suggest it is plausible that spikes in Q, several orders of magnitude above background values, are correlated with times of the subsequent occurrence of M or X flares.
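    The exceedance statistic N(q) and the power-law exponent estimate can be reproduced on synthetic data. The sketch below (illustrative, not the paper's pipeline) draws samples whose survival function is P(Q > q) = q^(-s) and recovers s from the slope of log N(q) versus log q:

```python
import numpy as np

rng = np.random.default_rng(0)
s = 0.5
# inverse-transform sampling: Q = U^(-1/s) has P(Q > q) = q^(-s), q >= 1
Q = rng.random(200_000) ** (-1.0 / s)

# exceedance counts N(q) = #{Q >= q} at log-spaced thresholds
q = np.logspace(0.2, 1.2, 20)
N = np.array([(Q >= t).sum() for t in q])

# on a log-log scale a power law is a straight line with slope -s
slope = np.polyfit(np.log(q), np.log(N), 1)[0]
print(round(slope, 2))   # close to -0.5
```

    Counting exceedances above a threshold, rather than binning a histogram, is the same scale-invariant construction used for both N(Q) and the flare statistic N(E) in the abstract.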

  19. Convergent and invariant object representations for sight, sound, and touch.

    PubMed

    Man, Kingson; Damasio, Antonio; Meyer, Kaspar; Kaplan, Jonas T

    2015-09-01

    We continuously perceive objects in the world through multiple sensory channels. In this study, we investigated the convergence of information from different sensory streams within the cerebral cortex. We presented volunteers with three common objects via three different modalities (sight, sound, and touch) and used multivariate pattern analysis of functional magnetic resonance imaging data to map the cortical regions containing information about the identity of the objects. We could reliably predict which of the three stimuli a subject had seen, heard, or touched from the pattern of neural activity in the corresponding early sensory cortices. Intramodal classification was also successful in large portions of the cerebral cortex beyond the primary areas, with multiple regions showing convergence of information from two or all three modalities. Using crossmodal classification, we also searched for brain regions that would represent objects in a similar fashion across different modalities of presentation. We trained a classifier to distinguish objects presented in one modality and then tested it on the same objects presented in a different modality. We detected audiovisual invariance in the right temporo-occipital junction, audiotactile invariance in the left postcentral gyrus and parietal operculum, and visuotactile invariance in the right postcentral and supramarginal gyri. Our maps of multisensory convergence and crossmodal generalization reveal the underlying organization of the association cortices, and may be related to the neural basis for mental concepts. © 2015 Wiley Periodicals, Inc.
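    The crossmodal logic (train a classifier on one modality, test it on another) can be sketched with a toy nearest-centroid classifier on synthetic patterns that share an object-specific component across modalities. This is illustrative only; the study used multivariate pattern analysis of fMRI voxel patterns:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    # one centroid per object identity
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(model, X):
    classes, centroids = model
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

rng = np.random.default_rng(1)
# an "invariant" region: each object evokes the same underlying pattern
# regardless of modality, plus modality-specific noise
objects = np.repeat([0, 1, 2], 30)
base = rng.normal(size=(3, 50))[objects]
sight = base + 0.3 * rng.normal(size=base.shape)
touch = base + 0.3 * rng.normal(size=base.shape)

model = nearest_centroid_fit(sight, objects)   # train on sight...
acc = (nearest_centroid_predict(model, touch) == objects).mean()
print(acc)   # ...test on touch: well above the 1/3 chance level
```

    Above-chance crossmodal accuracy is the signature of an invariant representation: the classifier can only generalize across modalities if the region encodes object identity in a modality-independent format.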

  20. Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiala, David J; Mueller, Frank; Engelmann, Christian

    Faults have become the norm rather than the exception for high-end computing on clusters with 10s/100s of thousands of cores. Exacerbating this situation, some of these faults remain undetected, manifesting themselves as silent errors that corrupt memory while applications continue to operate and report incorrect results. This paper studies the potential for redundancy to both detect and correct soft errors in MPI message-passing applications. Our study investigates the challenges inherent to detecting soft errors within MPI applications while providing transparent MPI redundancy. By assuming a model wherein corruption in application data manifests itself by producing differing MPI message data between replicas, we study the protocols best suited for detecting and correcting MPI data that is the result of corruption. To experimentally validate our proposed detection and correction protocols, we introduce RedMPI, an MPI library which resides in the MPI profiling layer. RedMPI is capable of both online detection and correction of soft errors that occur in MPI applications, without requiring any modifications to the application source, by utilizing either double or triple redundancy. Our results indicate that our most efficient consistency protocol can successfully protect applications experiencing even high rates of silent data corruption, with runtime overheads between 0% and 30% as compared to unprotected applications without redundancy. Using our fault injector within RedMPI, we observe that even a single soft error can have profound effects on running applications, causing a cascading pattern of corruption that in most cases spreads to all other processes. RedMPI's protection has been shown to successfully mitigate the effects of soft errors while allowing applications to complete with correct results even in the face of errors.
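    The replica-comparison model can be sketched as majority voting over the payloads that redundant replicas send for the same logical message (a simplified model of the comparison step, not RedMPI's actual consistency protocols): with triple redundancy, a single corrupted replica is both detected and outvoted.

```python
from collections import Counter

def vote(replica_msgs):
    """Majority vote over the payloads each replica sent for one logical
    MPI message. Returns (winning payload, list of disagreeing replica
    indices or None); raises if no majority exists."""
    counts = Counter(replica_msgs).most_common()
    winner, n = counts[0]
    if n == len(replica_msgs):
        return winner, None                 # all replicas agree
    if n > len(replica_msgs) // 2:
        bad = [i for i, m in enumerate(replica_msgs) if m != winner]
        return winner, bad                  # corrected; culprits flagged
    raise RuntimeError("no majority: corruption is uncorrectable")

print(vote([b"\x01\x02", b"\x01\x02", b"\x01\x02"]))  # all agree
print(vote([b"\x01\x02", b"\xff\x02", b"\x01\x02"]))  # replica 1 outvoted
```

    Double redundancy can only detect a disagreement (no majority exists between two replicas), which is why correction in this model requires at least three copies.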
