Sample records for weakly stable algorithm

  1. Multi-dimensional, fully implicit, exactly conserving electromagnetic particle-in-cell simulations in curvilinear geometry

    NASA Astrophysics Data System (ADS)

    Chen, Guangye; Chacon, Luis

    2015-11-01

    We discuss a new, conservative, fully implicit 2D3V Vlasov-Darwin particle-in-cell algorithm in curvilinear geometry for non-radiative, electromagnetic kinetic plasma simulations. Unlike standard explicit PIC schemes, fully implicit PIC algorithms are unconditionally stable and allow exact discrete energy and charge conservation. Here, we extend these algorithms to curvilinear geometry. The algorithm retains its exact conservation properties on curvilinear grids. The nonlinear iteration is effectively accelerated with a fluid preconditioner for weakly to modestly magnetized plasmas, which allows efficient use of large timesteps, O(√(mi/me) c/veT) larger than the explicit CFL limit. In this presentation, we will introduce the main algorithmic components of the approach and demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 1D (slow shock) and 2D (island coalescence).

  2. Higher order temporal finite element methods through mixed formalisms.

    PubMed

    Kim, Jinkyu

    2014-01-01

    The extended framework of Hamilton's principle and the mixed convolved action principle provide a new, rigorous weak variational formalism for a broad range of initial boundary value problems in mathematical physics and mechanics. In this paper, their potential when adopting temporally higher order approximations is investigated. Classical single-degree-of-freedom dynamical systems are primarily considered to validate and investigate the performance of the numerical algorithms developed from both formulations. For the undamped system, all the algorithms are symplectic and unconditionally stable with respect to the time step. For the damped system, they are shown to be accurate with good convergence characteristics.

  3. A globally well-posed finite element algorithm for aerodynamics applications

    NASA Technical Reports Server (NTRS)

    Iannelli, G. S.; Baker, A. J.

    1991-01-01

    A finite element CFD algorithm is developed for Euler and Navier-Stokes aerodynamic applications. For the linear basis, the resultant approximation is at least second-order-accurate in time and space for synergistic use of three procedures: (1) a Taylor weak statement, which provides for derivation of companion conservation law systems with embedded dispersion-error control mechanisms; (2) a stiffly stable second-order-accurate implicit Rosenbrock-Runge-Kutta temporal algorithm; and (3) a matrix tensor product factorization that permits efficient numerical linear algebra handling of the terminal large-matrix statement. Thorough analyses are presented regarding well-posed boundary conditions for inviscid and viscous flow specifications. Numerical solutions are generated and compared for critical evaluation of quasi-one- and two-dimensional Euler and Navier-Stokes benchmark test problems.

  4. Performance Optimization Design for a High-Speed Weak FBG Interrogation System Based on DFB Laser.

    PubMed

    Yao, Yiqiang; Li, Zhengying; Wang, Yiming; Liu, Siqi; Dai, Yutang; Gong, Jianmin; Wang, Lixin

    2017-06-22

    A performance optimization design for a high-speed fiber Bragg grating (FBG) interrogation system based on a high-speed distributed feedback (DFB) swept laser is proposed. A time-division-multiplexing sensor network with identical weak FBGs is constituted to realize high-capacity sensing. In order to further improve the multiplexing capacity, a waveform repairing algorithm is designed to extend the dynamic demodulation range of FBG sensors. It is based on the fact that the spectrum of an FBG remains stable over a long period of time. Compared with the pre-collected spectra, the distorted spectral waveforms are identified and repaired. Experimental results show that all the identical weak FBGs are distinguished and demodulated at a speed of 100 kHz with a linearity above 0.99, and the range of dynamic demodulation is extended by 40%.

  5. Performance Optimization Design for a High-Speed Weak FBG Interrogation System Based on DFB Laser

    PubMed Central

    Yao, Yiqiang; Li, Zhengying; Wang, Yiming; Liu, Siqi; Dai, Yutang; Gong, Jianmin; Wang, Lixin

    2017-01-01

    A performance optimization design for a high-speed fiber Bragg grating (FBG) interrogation system based on a high-speed distributed feedback (DFB) swept laser is proposed. A time-division-multiplexing sensor network with identical weak FBGs is constituted to realize high-capacity sensing. In order to further improve the multiplexing capacity, a waveform repairing algorithm is designed to extend the dynamic demodulation range of FBG sensors. It is based on the fact that the spectrum of an FBG remains stable over a long period of time. Compared with the pre-collected spectra, the distorted spectral waveforms are identified and repaired. Experimental results show that all the identical weak FBGs are distinguished and demodulated at a speed of 100 kHz with a linearity above 0.99, and the range of dynamic demodulation is extended by 40%. PMID:28640187

  6. From inverse problems to learning: a Statistical Mechanics approach

    NASA Astrophysics Data System (ADS)

    Baldassi, Carlo; Gerace, Federica; Saglietti, Luca; Zecchina, Riccardo

    2018-01-01

    We present a brief introduction to the statistical mechanics approaches for the study of inverse problems in data science. We then provide concrete new results on inferring couplings from sampled configurations in systems characterized by an extensive number of stable attractors in the low temperature regime. We also show how these results are connected to the problem of learning with realistic weak signals in computational neuroscience. Our techniques and algorithms rely on advanced mean-field methods developed in the context of disordered systems.

  7. TLBO based Voltage Stable Environment Friendly Economic Dispatch Considering Real and Reactive Power Constraints

    NASA Astrophysics Data System (ADS)

    Verma, H. K.; Mafidar, P.

    2013-09-01

    In view of the growing concern for the environment, power system engineers are required to generate quality green energy. Economic dispatch (ED) therefore aims to schedule power generation to meet the load demand at minimum fuel cost, subject to environmental and voltage constraints along with the essential constraints on real and reactive power. Emission control, which reduces the negative impact on the environment, is achieved by including additional constraints in the ED problem. At present, the power system mostly operates near its stability limits, so with increased demand the system faces voltage problems. In the present work, the bus voltages are brought within limits by placing a static var compensator (SVC) at the weak bus, which is identified from the bus participation factor. The optimal size of the SVC is determined by a univariate search method. This paper presents the use of the Teaching Learning based Optimization (TLBO) algorithm for the voltage-stable, environment-friendly ED problem with real and reactive power constraints. The computational effectiveness of TLBO is established through test results against particle swarm optimization (PSO) and Big Bang-Big Crunch (BB-BC) algorithms for the ED problem.

  8. Quantum lattice representations for vector solitons in external potentials

    NASA Astrophysics Data System (ADS)

    Vahala, George; Vahala, Linda; Yepez, Jeffrey

    2006-03-01

    A quantum lattice algorithm is developed to examine the effect of an external potential well on exactly integrable vector Manakov solitons. It is found that the exact solutions to the coupled nonlinear Schrodinger equations act like quasi-solitons in weak potentials, leading to mode-locking, trapping and untrapping. Stronger potential wells will lead to the emission of radiation modes from the quasi-soliton initial conditions. If the external potential is applied to that particular mode polarization, then the radiation will be trapped within the potential well. The algorithm developed leads to a finite difference scheme that is unconditionally stable. The Manakov system in an external potential is very closely related to the Gross-Pitaevskii equation for the ground state wave functions of a coupled BEC state at T=0 K.

  9. RBoost: Label Noise-Robust Boosting Algorithm Based on a Nonconvex Loss Function and the Numerically Stable Base Learners.

    PubMed

    Miao, Qiguang; Cao, Ying; Xia, Ge; Gong, Maoguo; Liu, Jiachen; Song, Jianfeng

    2016-11-01

    AdaBoost has attracted much attention in the machine learning community because of its excellent performance in combining weak classifiers into strong classifiers. However, AdaBoost tends to overfit noisy data in many applications. Accordingly, improving the noise robustness of AdaBoost plays an important role in many applications. AdaBoost's sensitivity to noisy data stems from its exponential loss function, which puts unrestricted penalties on misclassified samples with very large margins. In this paper, we propose two boosting algorithms, referred to as RBoost1 and RBoost2, which are more robust to noisy data than AdaBoost. RBoost1 and RBoost2 optimize a nonconvex loss function of the classification margin. Because the penalties on misclassified samples are restricted to an amount less than one, RBoost1 and RBoost2 do not overfocus on samples that are always misclassified by the previous base learners. Besides the loss function, at each boosting iteration, RBoost1 and RBoost2 use numerically stable ways to compute the base learners. These two improvements contribute to the robustness of the proposed algorithms to noisy training and testing samples. Experimental results on a synthetic Gaussian data set, the UCI data sets, and a real malware behavior data set illustrate that the proposed RBoost1 and RBoost2 algorithms perform better when the training data sets contain noisy data.

  10. Fast and stable algorithms for computing the principal square root of a complex matrix

    NASA Technical Reports Server (NTRS)

    Shieh, Leang S.; Lian, Sui R.; Mcinnis, Bayliss C.

    1987-01-01

    This note presents rapidly convergent and numerically stable recursive algorithms for finding the principal square root of a complex matrix. The developed algorithms are also used to derive fast and stable matrix sign algorithms, which are useful in applications to control system problems.
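
    One well-known recursion of this kind is the coupled Denman-Beavers iteration, sketched below (a standard Newton variant for the principal square root, shown for illustration rather than as the authors' exact scheme; the matrix sign function can be obtained from a closely related iteration):

        import numpy as np

        def sqrtm_denman_beavers(A, iters=50, tol=1e-12):
            """Principal square root of A via the coupled Denman-Beavers
            iteration: Y -> (Y + Z^-1)/2, Z -> (Z + Y^-1)/2, with
            Y -> A^(1/2) and Z -> A^(-1/2)."""
            Y = A.astype(complex)
            Z = np.eye(A.shape[0], dtype=complex)
            for _ in range(iters):
                Yinv, Zinv = np.linalg.inv(Y), np.linalg.inv(Z)
                Y, Z = 0.5 * (Y + Zinv), 0.5 * (Z + Yinv)
                if np.linalg.norm(Y @ Y - A) <= tol * np.linalg.norm(A):
                    break
            return Y

        A = np.array([[4.0, 1.0], [0.0, 9.0]])
        S = sqrtm_denman_beavers(A)
        print(np.allclose(S @ S, A))  # True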

  11. SMI adaptive antenna arrays for weak interfering signals. [Sample Matrix Inversion

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.

    1986-01-01

    The performance of adaptive antenna arrays in the presence of weak interfering signals (below thermal noise) is studied. It is shown that a conventional adaptive antenna array sample matrix inversion (SMI) algorithm is unable to suppress such interfering signals. To overcome this problem, the SMI algorithm is modified. In the modified algorithm, the covariance matrix is redefined such that the effect of thermal noise on the weights of adaptive arrays is reduced. Thus, the weights are dictated by relatively weak signals. It is shown that the modified algorithm provides the desired interference protection.
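
    A rough sketch of the weight computation (a generic SMI beamformer with an illustrative noise-floor subtraction standing in for the paper's redefined covariance matrix; the scaling parameter is an assumption for illustration):

        import numpy as np

        def smi_weights(snapshots, steering, noise_floor_scale=0.0):
            """Sample-matrix-inversion beamformer weights w = R^-1 s.
            snapshots: (N_elements, N_samples) complex array data.
            steering:  (N_elements,) desired-signal steering vector.
            noise_floor_scale: 0 gives conventional SMI; values near 1
            subtract most of the estimated thermal-noise contribution,
            so weak interferers dominate the adapted weights."""
            R = snapshots @ snapshots.conj().T / snapshots.shape[1]
            sigma2 = np.min(np.linalg.eigvalsh(R))  # noise-power estimate
            n = R.shape[0]
            R_mod = R - noise_floor_scale * sigma2 * np.eye(n)
            w = np.linalg.solve(R_mod, steering)
            return w / (steering.conj() @ w)  # normalize desired response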

  12. Weakly supervised classification in high energy physics

    DOE PAGES

    Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco; ...

    2017-05-01

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. Here, this paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics - quark versus gluon tagging - we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.

  13. Weakly supervised classification in high energy physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. Here, this paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics - quark versus gluon tagging - we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.

  14. A fully implicit finite element method for bidomain models of cardiac electromechanics

    PubMed Central

    Dal, Hüsnü; Göktepe, Serdar; Kaliske, Michael; Kuhl, Ellen

    2012-01-01

    We propose a novel, monolithic, and unconditionally stable finite element algorithm for the bidomain-based approach to cardiac electromechanics. We introduce the transmembrane potential, the extracellular potential, and the displacement field as independent variables, and extend the common two-field bidomain formulation of electrophysiology to a three-field formulation of electromechanics. The intrinsic coupling arises from both excitation-induced contraction of cardiac cells and the deformation-induced generation of intra-cellular currents. The coupled reaction-diffusion equations of the electrical problem and the momentum balance of the mechanical problem are recast into their weak forms through a conventional isoparametric Galerkin approach. As a novel aspect, we propose a monolithic approach to solve the governing equations of excitation-contraction coupling in a fully coupled, implicit sense. We demonstrate the consistent linearization of the resulting set of non-linear residual equations. To assess the algorithmic performance, we illustrate characteristic features by means of representative three-dimensional initial-boundary value problems. The proposed algorithm may open new avenues to patient specific therapy design by circumventing stability and convergence issues inherent to conventional staggered solution schemes. PMID:23175588

  15. A Modified Differential Coherent Bit Synchronization Algorithm for BeiDou Weak Signals with Large Frequency Deviation.

    PubMed

    Han, Zhifeng; Liu, Jianye; Li, Rongbing; Zeng, Qinghua; Wang, Yi

    2017-07-04

    BeiDou system navigation messages are modulated with a secondary NH (Neumann-Hoffman) code at 1 kbps, whose frequent bit transitions limit the coherent integration time to 1 millisecond. Therefore, a bit synchronization algorithm is necessary to obtain bit edges and NH code phases. In order to realize bit synchronization for BeiDou weak signals with large frequency deviation, a bit synchronization algorithm based on differential coherence and maximum likelihood is proposed. Firstly, a differential coherent approach is used to remove the effect of frequency deviation, with the differential delay time set to a multiple of the bit period to remove the influence of the NH code. Secondly, maximum likelihood detection is used to improve the detection probability for weak signals. Finally, Monte Carlo simulations are conducted to analyze the detection performance of the proposed algorithm compared with a traditional algorithm at C/N0 values of 20-40 dB-Hz and different frequency deviations. The results show that the proposed algorithm outperforms the traditional method at a frequency deviation of 50 Hz. The algorithm removes the effect of the BeiDou NH code effectively and mitigates the influence of frequency deviation. To confirm the feasibility of the proposed algorithm, real data tests are conducted. The proposed algorithm is suitable for BeiDou weak-signal bit synchronization with large frequency deviation.
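
    A simplified sketch of the differential-coherent idea (the differential delay equals one data-bit period so the repeating NH code cancels and a common carrier-frequency offset largely drops out; the per-bit coherent-sum metric below is an ML-style stand-in, not the paper's exact detector):

        import numpy as np

        def bit_sync_diff_coherent(prompt, samples_per_bit):
            """Estimate the bit-edge offset from complex 1-ms prompt
            correlator outputs. Differential products
            z[k] = p[k] * conj(p[k-D]) with D = one bit period cancel
            the NH code; within a correctly aligned bit all z share the
            same sign, so per-bit coherent sums peak at the true edge."""
            D = samples_per_bit
            z = prompt[D:] * np.conj(prompt[:-D])
            metric = np.zeros(D)
            for edge in range(D):
                usable = (len(z) - edge) // D * D
                blocks = z[edge:edge + usable].reshape(-1, D)
                # coherent within a bit, noncoherent across bits
                metric[edge] = np.sum(np.abs(blocks.sum(axis=1)))
            return int(np.argmax(metric))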

  16. Application of the stochastic resonance algorithm to the simultaneous quantitative determination of multiple weak peaks of ultra-performance liquid chromatography coupled to time-of-flight mass spectrometry.

    PubMed

    Deng, Haishan; Shang, Erxin; Xiang, Bingren; Xie, Shaofei; Tang, Yuping; Duan, Jin-ao; Zhan, Ying; Chi, Yumei; Tan, Defei

    2011-03-15

    In recent years, the stochastic resonance algorithm (SRA) has been developed as a potential tool for amplifying and determining weak chromatographic peaks. However, the conventional SRA cannot be applied directly to ultra-performance liquid chromatography/time-of-flight mass spectrometry (UPLC/TOFMS). The obstacle lies in the fact that the narrow peaks generated by UPLC contain high-frequency components which fall beyond the restrictions of the theory of stochastic resonance. Although there already exists an algorithm that allows a high-frequency weak signal to be detected, the sampling frequency of TOFMS is not fast enough to meet the requirement of the algorithm. Another problem is the depression of the weak peak of a compound with low concentration or weak detection response, which prevents the simultaneous determination of multi-component UPLC/TOFMS peaks. In order to lower the frequencies of the peaks, an interpolation and re-scaling frequency stochastic resonance (IRSR) is proposed, which re-scales the peak frequencies by linearly interpolating sample points numerically. The re-scaled UPLC/TOFMS peaks could then be amplified significantly. By introducing an external energy field upon the UPLC/TOFMS signals, the method of energy gain was developed to simultaneously amplify and determine weak peaks from multiple components. Subsequently, a multi-component stochastic resonance algorithm was constructed for the simultaneous quantitative determination of multiple weak UPLC/TOFMS peaks based on the two methods. The optimization of parameters was discussed in detail with simulated data sets, and the applicability of the algorithm was evaluated by quantitative analysis of three alkaloids in human plasma using UPLC/TOFMS. The new algorithm behaved well in improving the signal-to-noise ratio (S/N) compared to several commonly used peak enhancement methods, including the Savitzky-Golay filter, Whittaker-Eilers smoother and matched filtration. Copyright © 2011 John Wiley & Sons, Ltd.

  17. Comparison of the Performance of the Warfarin Pharmacogenetics Algorithms in Patients with Surgery of Heart Valve Replacement and Heart Valvuloplasty.

    PubMed

    Xu, Hang; Su, Shi; Tang, Wuji; Wei, Meng; Wang, Tao; Wang, Dongjin; Ge, Weihong

    2015-09-01

    A large number of warfarin pharmacogenetics algorithms have been published. Our research aimed to evaluate the performance of selected pharmacogenetic algorithms in patients undergoing heart valve replacement or heart valvuloplasty during the initial and stable phases of anticoagulation treatment. 10 pharmacogenetic algorithms were selected by searching PubMed. We compared the performance of the selected algorithms in a cohort of 193 patients during the initial and stable phases of anticoagulation therapy. Predicted dose was compared to therapeutic dose using the percentage of predictions that fall within 20% of the actual dose (percentage within 20%) and the mean absolute error (MAE). The average warfarin dose was 3.05 ± 1.23 mg/day for initial treatment and 3.45 ± 1.18 mg/day for stable treatment. The percentages of the predicted dose within 20% of the therapeutic dose were 44.0 ± 8.8% and 44.6 ± 9.7% for the initial and stable phases, respectively. The MAEs of the selected algorithms were 0.85 ± 0.18 mg/day and 0.93 ± 0.19 mg/day, respectively. All algorithms had better performance in the ideal-dose group than in the low-dose and high-dose groups. The only exception was the Wadelius et al. algorithm, which performed better in the high-dose group. The algorithms had similar performance except for the Wadelius et al. and Miao et al. algorithms, which had poor accuracy in our study cohort. The Gage et al. algorithm performed better in both the initial and stable phases of treatment. Algorithms had relatively higher accuracy in the >50 years group of patients in the stable phase. Copyright © 2015 Elsevier Ltd. All rights reserved.
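
    For concreteness, the two headline metrics can be computed as follows (a sketch; the dose arrays are illustrative, not study data):

        import numpy as np

        def dose_metrics(predicted, actual):
            """Percentage of predictions within 20% of the therapeutic
            dose, and mean absolute error (mg/day), as used to compare
            the pharmacogenetic algorithms."""
            predicted, actual = np.asarray(predicted), np.asarray(actual)
            within20 = np.mean(np.abs(predicted - actual) <= 0.2 * actual) * 100
            mae = np.mean(np.abs(predicted - actual))
            return within20, mae

        p20, mae = dose_metrics([2.8, 4.1, 3.0], [3.0, 3.5, 3.2])
        print(f"within 20%: {p20:.1f}%  MAE: {mae:.2f} mg/day")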

  18. AdaBoost-based algorithm for network intrusion detection.

    PubMed

    Hu, Weiming; Hu, Wei; Maybank, Steve

    2008-04-01

    Network intrusion detection aims at distinguishing attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack patterns, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data.
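
    A compact sketch of the underlying scheme, discrete AdaBoost with depth-1 decision stumps as weak classifiers (a generic textbook version that omits the paper's adaptable initial weights and overfitting safeguard):

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier  # depth-1 tree = stump

        def adaboost_stumps(X, y, rounds=50):
            """Discrete AdaBoost; labels y must be +/-1."""
            n = len(y)
            w = np.full(n, 1.0 / n)             # sample weights
            stumps, alphas = [], []
            for _ in range(rounds):
                stump = DecisionTreeClassifier(max_depth=1)
                stump.fit(X, y, sample_weight=w)
                pred = stump.predict(X)
                err = np.clip(np.sum(w[pred != y]), 1e-10, 1 - 1e-10)
                alpha = 0.5 * np.log((1 - err) / err)
                w *= np.exp(-alpha * y * pred)  # up-weight mistakes
                w /= w.sum()
                stumps.append(stump)
                alphas.append(alpha)
            return stumps, alphas

        def predict(stumps, alphas, X):
            score = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
            return np.sign(score)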

  19. SMI adaptive antenna arrays for weak interfering signals

    NASA Technical Reports Server (NTRS)

    Gupta, I. J.

    1987-01-01

    The performance of adaptive antenna arrays is studied when a sample matrix inversion (SMI) algorithm is used to control array weights. It is shown that conventional SMI adaptive antennas, like other adaptive antennas, are unable to suppress weak interfering signals (below thermal noise) encountered in broadcasting satellite communication systems. To overcome this problem, the SMI algorithm is modified. In the modified algorithm, the covariance matrix is modified such that the effect of thermal noise on the weights of the adaptive array is reduced. Thus, the weights are dictated by relatively weak coherent signals. It is shown that the modified algorithm provides the desired interference protection. The use of defocused feeds as auxiliary elements of an SMI adaptive array is also discussed.

  20. Evaluation of HIV testing algorithms in Ethiopia: the role of the tie-breaker algorithm and weakly reacting test lines in contributing to a high rate of false positive HIV diagnoses.

    PubMed

    Shanks, Leslie; Siddiqui, M Ruby; Kliescikova, Jarmila; Pearce, Neil; Ariti, Cono; Muluneh, Libsework; Pirou, Erwan; Ritmeijer, Koert; Masiga, Johnson; Abebe, Almaz

    2015-02-03

    In Ethiopia a tiebreaker algorithm using 3 rapid diagnostic tests (RDTs) in series is used to diagnose HIV. Discordant results between the first 2 RDTs are resolved by a third 'tiebreaker' RDT. Médecins Sans Frontières uses an alternate serial algorithm of 2 RDTs followed by a confirmation test for all double positive RDT results. The primary objective was to compare the performance of the tiebreaker algorithm with a serial algorithm, and to evaluate the addition of a confirmation test to both algorithms. A secondary objective looked at the positive predictive value (PPV) of weakly reactive test lines. The study was conducted at two HIV testing sites in Ethiopia. Study participants were recruited sequentially until 200 positive samples were reached. Each sample was re-tested in the laboratory on the 3 RDTs and on a simple-to-use confirmation test, the Orgenics Immunocomb Combfirm® (OIC). The gold standard test was the Western Blot, with indeterminate results resolved by PCR testing. 2620 subjects were included with an HIV prevalence of 7.7%. Each of the 3 RDTs had an individual specificity of at least 99%. The serial algorithm with 2 RDTs had a single false positive result (1 out of 204), giving a PPV of 99.5% (95% CI 97.3%-100%). The tiebreaker algorithm resulted in 16 false positive results (PPV 92.7%, 95% CI: 88.4%-95.8%). Adding the OIC confirmation test to either algorithm eliminated the false positives. All the false positives had at least one weakly reactive test line in the algorithm. The PPV of weakly reacting RDTs was significantly lower than that of strongly positive test lines. The risk of false positive HIV diagnosis in a tiebreaker algorithm is significant. We recommend abandoning the tiebreaker algorithm in favour of WHO-recommended serial or parallel algorithms, interpreting weakly reactive test lines as indeterminate results requiring further testing except in the setting of blood transfusion, and most importantly, adding a confirmation test to the RDT algorithm. It is now time to focus research efforts on how best to translate this knowledge into practice at the field level. Clinical Trial registration #: NCT01716299.

  1. Stable isotope labelling methods in mass spectrometry-based quantitative proteomics.

    PubMed

    Chahrour, Osama; Cobice, Diego; Malone, John

    2015-09-10

    Mass-spectrometry based proteomics has evolved as a promising technology over the last decade and is undergoing dramatic development in a number of different areas, such as mass spectrometric instrumentation, peptide identification algorithms and bioinformatic computational data analysis. The improved methodology allows quantitative measurement of relative or absolute protein amounts, which is essential for gaining insights into their functions and dynamics in biological systems. Several different strategies involving stable isotope labels (ICAT, ICPL, IDBEST, iTRAQ, TMT, IPTL, SILAC), label-free statistical assessment approaches (MRM, SWATH) and absolute quantification methods (AQUA) are possible, each having specific strengths and weaknesses. Inductively coupled plasma mass spectrometry (ICP-MS), which is still widely recognised as an elemental detector, has recently emerged as a complementary technique to the previous methods. The new application area for ICP-MS targets the fast-growing field of proteomics-related research, allowing absolute protein quantification using suitable elemental based tags. This document describes the different stable isotope labelling methods which incorporate metabolic labelling in live cells, ICP-MS based detection and post-harvest chemical label tagging for protein quantification, in addition to summarising their pros and cons. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Exact simulation of max-stable processes.

    PubMed

    Dombry, Clément; Engelke, Sebastian; Oesting, Marco

    2016-06-01

    Max-stable processes play an important role as models for spatial extreme events. Their complex structure as the pointwise maximum over an infinite number of random functions makes their simulation difficult. Algorithms based on finite approximations are often inexact and computationally inefficient. We present a new algorithm for exact simulation of a max-stable process at a finite number of locations. It relies on the idea of simulating only the extremal functions, that is, those functions in the construction of a max-stable process that effectively contribute to the pointwise maximum. We further generalize the algorithm by Dieker & Mikosch (2015) for Brown-Resnick processes and use it for exact simulation via the spectral measure. We study the complexity of both algorithms, prove that our new approach via extremal functions is always more efficient, and provide closed-form expressions for their implementation that cover most popular models for max-stable processes and multivariate extreme value distributions. For simulation on dense grids, an adaptive design of the extremal function algorithm is proposed.
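
    For context, the finite-approximation approach that the exact algorithm improves upon can be sketched as follows (a Schlather-type construction with an assumed covariance matrix; truncating the pointwise maximum at n_funcs terms is precisely the source of inexactness the paper addresses):

        import numpy as np

        def schlather_finite_approx(cov, n_funcs=1000, rng=None):
            """Finite-N approximation of a Schlather max-stable process
            at locations with covariance matrix `cov`. The truncation
            error vanishes only as n_funcs -> infinity; the paper's
            extremal-function algorithm avoids it entirely."""
            rng = np.random.default_rng(rng)
            d = cov.shape[0]
            L = np.linalg.cholesky(cov + 1e-10 * np.eye(d))
            gamma = np.cumsum(rng.exponential(size=n_funcs))  # Poisson points
            Z = np.full(d, -np.inf)
            for g in gamma:
                # spectral function: truncated Gaussian field, unit mean
                W = np.sqrt(2 * np.pi) * np.maximum(L @ rng.standard_normal(d), 0)
                Z = np.maximum(Z, W / g)
            return Z  # approximately unit-Frechet margins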

  3. Constrained Deep Weak Supervision for Histopathology Image Segmentation.

    PubMed

    Jia, Zhipeng; Huang, Xingyi; Chang, Eric I-Chao; Xu, Yan

    2017-11-01

    In this paper, we develop a new weakly supervised learning algorithm to learn to segment cancerous regions in histopathology images. This work adopts a multiple instance learning (MIL) framework with a new formulation, deep weak supervision (DWS); we also propose an effective way to introduce constraints into our neural networks to assist the learning process. The contributions of our algorithm are threefold: 1) we build an end-to-end learning system that segments cancerous regions with fully convolutional networks (FCNs) in which image-to-image weakly-supervised learning is performed; 2) we develop a DWS formulation to exploit multi-scale learning under weak supervision within FCNs; and 3) constraints about positive instances are introduced in our approach to effectively explore additional weakly supervised information that is easy to obtain and provides a significant boost to the learning process. The proposed algorithm, abbreviated as DWS-MIL, is easy to implement and can be trained efficiently. Our system demonstrates state-of-the-art results on large-scale histopathology image data sets and can be applied to various applications in medical imaging beyond histopathology images, such as MRI, CT, and ultrasound images.

  4. Ship Detection in SAR Image Based on the Alpha-stable Distribution

    PubMed Central

    Wang, Changcheng; Liao, Mingsheng; Li, Xiaofeng

    2008-01-01

    This paper describes an improved Constant False Alarm Rate (CFAR) ship detection algorithm in spaceborne synthetic aperture radar (SAR) image based on Alpha-stable distribution model. Typically, the CFAR algorithm uses the Gaussian distribution model to describe statistical characteristics of a SAR image background clutter. However, the Gaussian distribution is only valid for multilook SAR images when several radar looks are averaged. As sea clutter in SAR images shows spiky or heavy-tailed characteristics, the Gaussian distribution often fails to describe background sea clutter. In this study, we replace the Gaussian distribution with the Alpha-stable distribution, which is widely used in impulsive or spiky signal processing, to describe the background sea clutter in SAR images. In our proposed algorithm, an initial step for detecting possible ship targets is employed. Then, similar to the typical two-parameter CFAR algorithm, a local process is applied to the pixel identified as possible target. A RADARSAT-1 image is used to validate this Alpha-stable distribution based algorithm. Meanwhile, known ship location data during the time of RADARSAT-1 SAR image acquisition is used to validate ship detection results. Validation results show improvements of the new CFAR algorithm based on the Alpha-stable distribution over the CFAR algorithm based on the Gaussian distribution. PMID:27873794
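
    A rough sketch of the distribution swap at the heart of the method, using SciPy's levy_stable in place of a Gaussian clutter model (global threshold only; the sliding-window local stage of the two-parameter CFAR and the initial target-screening step are omitted, and the fit call, while standard, is slow; the variable names are illustrative):

        import numpy as np
        from scipy.stats import levy_stable, norm

        def cfar_threshold(clutter, pfa=1e-5, model="alpha_stable"):
            """CFAR threshold: fit a clutter model to background samples
            and take the (1 - pfa) quantile. Alpha-stable captures the
            heavy tail of sea clutter that a Gaussian underestimates."""
            if model == "alpha_stable":
                alpha, beta, loc, scale = levy_stable.fit(clutter)
                return levy_stable.ppf(1 - pfa, alpha, beta, loc, scale)
            mu, sigma = norm.fit(clutter)
            return norm.ppf(1 - pfa, mu, sigma)

        # pixels above the threshold become candidate ship targets, e.g.
        # detections = sar_image > cfar_threshold(background_samples)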

  5. Structural Stability of Mathematical Models of National Economy

    NASA Astrophysics Data System (ADS)

    Ashimov, Abdykappar A.; Sultanov, Bahyt T.; Borovskiy, Yuriy V.; Adilov, Zheksenbek M.; Ashimov, Askar A.

    2011-12-01

    In the paper we test the robustness of particular dynamic systems in compact regions of a plane and the weak structural stability of one high-order dynamic system in a compact region of its phase space. The test was carried out based on the fundamental theory of dynamical systems on a plane and on the conditions for weak structural stability of high-order dynamic systems. A numerical algorithm for testing the weak structural stability of high-order dynamic systems has been proposed. Based on this algorithm, we assess the weak structural stability of one computable general equilibrium model.

  6. MUSIC algorithms for rebar detection

    NASA Astrophysics Data System (ADS)

    Solimene, Raffaele; Leone, Giovanni; Dell'Aversano, Angela

    2013-12-01

    The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting data is of a relatively high level. To overcome this drawback, here a new technique is proposed, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve drastically detection performance in realistic scenarios.
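
    The subspace projection at the core of both stages can be sketched as follows (a generic narrowband MUSIC pseudospectrum, not the paper's specific two-stage formulation):

        import numpy as np

        def music_spectrum(R, steering_vectors, n_sources):
            """MUSIC pseudospectrum P(theta) = 1 / ||E_n^H a(theta)||^2.
            R: (M, M) sample covariance of the received data;
            steering_vectors: (M, K) candidate steering vectors a(theta_k);
            n_sources: assumed number of scatterers/signals."""
            eigvals, eigvecs = np.linalg.eigh(R)       # ascending order
            En = eigvecs[:, : R.shape[0] - n_sources]  # noise subspace
            proj = En.conj().T @ steering_vectors
            return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)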

  7. Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.

    PubMed

    Wei, Qinglai; Li, Benkai; Song, Ruizhuo

    2018-04-01

    In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described using the GPI structure. For the first time, approximation errors are explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.

  8. Early Warning Signals for Regime Transition in the Stable Boundary Layer: A Model Study

    NASA Astrophysics Data System (ADS)

    van Hooijdonk, I. G. S.; Moene, A. F.; Scheffer, M.; Clercx, H. J. H.; van de Wiel, B. J. H.

    2017-02-01

    The evening transition is investigated in an idealized model for the nocturnal boundary layer. From earlier studies it is known that the nocturnal boundary layer may manifest itself in two distinct regimes, depending on the ambient synoptic conditions: strong-wind or overcast conditions typically lead to weakly stable, turbulent nights; clear-sky and weak-wind conditions, on the other hand, lead to very stable, weakly turbulent conditions. Previously, the dynamical behaviour near the transition between these regimes was investigated in an idealized setting, relying on Monin-Obukhov (MO) similarity to describe turbulent transport. Here, we investigate a similar set-up, using direct numerical simulation; in contrast to MO-based models, this type of simulation does not need to rely on turbulence closure assumptions. We show that previous predictions are verified, but now independent of turbulence parametrizations. Also, it appears that a regime shift to the very stable state is signaled in advance by specific changes in the dynamics of the turbulent boundary layer. Here, we show how these changes may be used to infer a quantitative estimate of the transition point from the weakly stable boundary layer to the very stable boundary layer. In addition, it is shown that the idealized, nocturnal boundary-layer system shares important similarities with generic non-linear dynamical systems that exhibit critical transitions. Therefore, the presence of other, generic early warning signals is tested as well. Indeed, indications are found that such signals are present in stably stratified turbulent flows.

  9. A Taylor weak-statement algorithm for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Kim, J. W.

    1987-01-01

    Finite element analysis, applied to computational fluid dynamics (CFD) problem classes, presents a formal procedure for establishing the ingredients of a discrete approximation numerical solution algorithm. A classical Galerkin weak-statement formulation, formed on a Taylor series extension of the conservation law system, is developed herein that embeds a set of parameters eligible for constraint according to specification of suitable norms. The derived family of Taylor weak statements is shown to contain, as special cases, over one dozen independently derived CFD algorithms published over the past several decades for the high speed flow problem class. A theoretical analysis is completed that facilitates direct qualitative comparisons. Numerical results for definitive linear and nonlinear test problems permit direct quantitative performance comparisons.

  10. An EEG blind source separation algorithm based on a weak exclusion principle.

    PubMed

    Lan Ma; Blu, Thierry; Wang, William S-Y

    2016-08-01

    The question of how to separate individual brain and non-brain signals, mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings, is a significant problem in contemporary neuroscience. This study proposes and evaluates a novel EEG Blind Source Separation (BSS) algorithm based on a weak exclusion principle (WEP). The chief point in which it differs from most previous EEG BSS algorithms is that the proposed algorithm is not based upon the hypothesis that the sources are statistically independent. Our first step was to investigate algorithm performance on simulated signals which have ground truth. The purpose of this simulation is to illustrate the proposed algorithm's efficacy. The results show that the proposed algorithm has good separation performance. Then, we used the proposed algorithm to separate real EEG signals from a memory study using a revised version of Sternberg Task. The results show that the proposed algorithm can effectively separate the non-brain and brain sources.

  11. Structural Evolution of a Warm Frontal Precipitation Band During GCPEx

    NASA Technical Reports Server (NTRS)

    Colle, Brian A.; Naeger, Aaron; Molthan, Andrew; Nesbitt, Stephen

    2015-01-01

    A warm frontal precipitation band developed over a few hours 50-100 km to the north of a surface warm front. The 3-km WRF simulation was able to realistically reproduce band development, although the simulated band is somewhat too weak. Band genesis was associated with weak frontogenesis (deformation) in the presence of weak potential and conditional instability feeding into the band region, while conditions were closer to moist neutral within the band. As the band matured, frontogenesis increased, while the stability gradually increased in the banding region. Cloud-top generating cells were prevalent in the observations, but not in the WRF (too stable). The band decayed as the stability increased upstream and the frontogenesis (deformation) along the warm front weakened. The WRF band may have been too weak and short-lived because the simulated environment was too stable and the forcing too weak (with some microphysics issues as well).

  12. Quantitative phase retrieval with arbitrary pupil and illumination

    DOE PAGES

    Claus, Rene A.; Naulleau, Patrick P.; Neureuther, Andrew R.; ...

    2015-10-02

    We present a general algorithm for combining measurements taken under various illumination and imaging conditions to quantitatively extract the amplitude and phase of an object wave. The algorithm uses the weak object transfer function, which incorporates arbitrary pupil functions and partially coherent illumination. The approach is extended beyond the weak object regime using an iterative algorithm. Finally, we demonstrate the method on measurements of Extreme Ultraviolet Lithography (EUV) multilayer mask defects taken in an EUV zone plate microscope with both a standard zone plate lens and a zone plate implementing Zernike phase contrast.

  13. HECLIB. Volume 2: HECDSS Subroutines Programmer’s Manual

    DTIC Science & Technology

    1991-05-01

    algorithm and hierarchical design for database accesses. This algorithm provides quick access to data sets and an efficient means of adding new data set...Description of How DSS Works DSS version 6 utilizes a modified hash algorithm based upon the pathname to store and retrieve data. This structure allows...balancing disk space and record access times. A variation in this algorithm is for "stable" files. In a stable file, a hash table is not utilized

  14. A General, Adaptive, Roadmap-Based Algorithm for Protein Motion Computation.

    PubMed

    Molloy, Kevin; Shehu, Amarda

    2016-03-01

    Precious information on protein function can be extracted from a detailed characterization of protein equilibrium dynamics. This remains elusive in wet and dry laboratories, as function-modulating transitions of a protein between functionally-relevant, thermodynamically-stable and meta-stable structural states often span disparate time scales. In this paper we propose a novel, robotics-inspired algorithm that circumvents time-scale challenges by drawing analogies between protein motion and robot motion. The algorithm adapts the popular roadmap-based framework in robot motion computation to handle the more complex protein conformation space and its underlying rugged energy surface. Given known structures representing stable and meta-stable states of a protein, the algorithm yields a time- and energy-prioritized list of transition paths between the structures, with each path represented as a series of conformations. The algorithm balances computational resources between a global search aimed at obtaining a global view of the network of protein conformations and their connectivity and a detailed local search focused on realizing such connections with physically-realistic models. Promising results are presented on a variety of proteins that demonstrate the general utility of the algorithm and its capability to improve the state of the art without employing system-specific insight.

  15. Progress on a Taylor weak statement finite element algorithm for high-speed aerodynamic flows

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Freels, J. D.

    1989-01-01

    A new finite element numerical Computational Fluid Dynamics (CFD) algorithm has matured to the point of efficiently solving two-dimensional high speed real-gas compressible flow problems in generalized coordinates on modern vector computer systems. The algorithm employs a Taylor Weak Statement classical Galerkin formulation, a variably implicit Newton iteration, and a tensor matrix product factorization of the linear algebra Jacobian under a generalized coordinate transformation. Allowing for a general two-dimensional conservation law system, the algorithm has been exercised on the Euler and laminar forms of the Navier-Stokes equations. Real-gas fluid properties are admitted, and numerical results verify solution accuracy, efficiency, and stability over a range of test problem parameters.

  16. [A new method of distinguishing weak and overlapping signals of proton magnetic resonance spectroscopy].

    PubMed

    Jiang, Gang; Quan, Hong; Wang, Cheng; Gong, Qiyong

    2012-12-01

    In this paper, a new method combining the translation-invariant (TI) and wavelet-threshold (WT) algorithms to distinguish weak and overlapping proton magnetic resonance spectroscopy (1H-MRS) signals is presented. First, the 1H-MRS spectrum is transformed into the wavelet domain and its wavelet coefficients are obtained. Then, the TI and WT methods are applied to detect the weak signals overlapped by strong ones. Analysis of simulated data shows that both the frequency and amplitude information of weak signals can be obtained accurately by the algorithm, and, combined with signal fitting, quantitative calculation of the area under weak signal peaks can be realized.
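
    A minimal sketch of the WT step with a cycle-spinning TI wrapper, using PyWavelets (wavelet choice, decomposition level, and the universal threshold are assumptions, not the paper's exact settings):

        import numpy as np
        import pywt

        def wavelet_denoise(signal, wavelet="sym8", level=5):
            """Soft-threshold wavelet denoising of a 1H-MRS trace.
            Universal threshold sigma*sqrt(2 ln N), with sigma estimated
            from the finest detail coefficients (MAD estimator)."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2 * np.log(len(signal)))
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                    for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(signal)]

        def ti_denoise(signal, shifts=16, **kw):
            """Translation-invariant version: average denoised circular
            shifts (cycle spinning) to suppress shift-dependent artifacts."""
            out = np.zeros(len(signal))
            for s in range(shifts):
                out += np.roll(wavelet_denoise(np.roll(signal, s), **kw), -s)
            return out / shifts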

  17. A numerically-stable algorithm for calibrating single six-ports for national microwave reflectometry

    NASA Astrophysics Data System (ADS)

    Hodgetts, T. E.

    1990-11-01

    A full description and analysis is given of the numerically stable algorithm currently used for calibrating single six-ports or multi-states for national microwave reflectometry, employing as standards four one-port devices having known voltage reflection coefficients.

  18. Phase behaviour and structure of stable complexes of oppositely charged polyelectrolytes

    NASA Astrophysics Data System (ADS)

    Mengarelli, V.; Auvray, L.; Zeghal, M.

    2009-03-01

    We study the formation and structure of stable electrostatic complexes between oppositely charged polyelectrolytes, a long polymethacrylic acid and a shorter polyethylenimine, at low pH, where the polyacid is weakly charged. We explore the phase diagram as a function of the charge and concentration ratio of the constituents. In agreement with theory, turbidity and ζ potential measurements show two distinct regimes of weak and strong complexation, which appear successively as the pH is increased and are separated by a well-defined limit. Weak complexes observed by neutron scattering and contrast matching have an open, non-compact structure, while strong complexes are condensed.

  19. Regularized non-stationary morphological reconstruction algorithm for weak signal detection in microseismic monitoring: methodology

    NASA Astrophysics Data System (ADS)

    Huang, Weilin; Wang, Runqiu; Chen, Yangkang

    2018-05-01

    Microseismic signals are typically weak compared with the strong background noise. In order to effectively detect the weak signals in microseismic data, we propose a mathematical morphology based approach. We decompose the initial data into several morphological multiscale components. For detection of weak signals, a non-stationary weighting operator is proposed and introduced into the reconstruction of the data from the morphological multiscale components. The non-stationary weighting operator can be obtained by solving an inversion problem. The regularized non-stationary method can be understood as a non-stationary matching filtering method, where the matching filter has the same size as the data to be filtered. In this paper, we provide detailed algorithmic descriptions and analysis. The detailed algorithm framework, parameter selection and computational issues for the regularized non-stationary morphological reconstruction (RNMR) method are presented. We validate the presented method through a comprehensive analysis of different data examples. We first test the proposed technique using a synthetic data set. Then the proposed technique is applied to a field project, where the signals induced by hydraulic fracturing are recorded by 12 three-component geophones in a monitoring well. The result demonstrates that RNMR can improve the detectability of weak microseismic signals. Using the processed data, a short-term-average/long-term-average (STA/LTA) picking algorithm and Geiger's method are applied to obtain new locations of microseismic events. In addition, we show that the proposed RNMR method can be used not only on microseismic data but also on reflection seismic data to detect weak signals. We also discuss the extension of RNMR from 1-D to 2-D or higher-dimensional versions.
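
    For reference, the STA/LTA picker applied after reconstruction is a simple moving-average energy ratio; a minimal sketch (window lengths and threshold are illustrative, not the study's values):

        import numpy as np

        def sta_lta(trace, n_sta=20, n_lta=200, threshold=3.0):
            """Classic STA/LTA event detector on a 1-D trace. Returns
            sample indices where the short-term average of the signal
            energy exceeds `threshold` times the long-term average."""
            energy = trace.astype(float) ** 2
            sta = np.convolve(energy, np.ones(n_sta) / n_sta, mode="same")
            lta = np.convolve(energy, np.ones(n_lta) / n_lta, mode="same")
            ratio = sta / (lta + 1e-20)  # guard against division by zero
            return np.flatnonzero(ratio > threshold)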

  20. Detection and characterization of nonspecific, sparsely-populated binding modes in the early stages of complexation

    PubMed Central

    Cardone, A.; Bornstein, A.; Pant, H. C.; Brady, M.; Sriram, R.; Hassan, S. A.

    2015-01-01

    A method is proposed to study protein-ligand binding in a system governed by specific and non-specific interactions. Strong associations lead to narrow distributions in the protein's configuration space; weak and ultra-weak associations lead instead to broader distributions, a manifestation of non-specific, sparsely-populated binding modes with multiple interfaces. The method is based on the notion that a discrete set of preferential first-encounter modes are metastable states from which stable (pre-relaxation) complexes at equilibrium evolve. The method can be used to explore alternative pathways of complexation with statistical significance and can be integrated into a general algorithm to study protein interaction networks. The method is applied to a peptide-protein complex. The peptide adopts several low-population conformers and binds in a variety of modes with a broad range of affinities. The system is thus well suited to analyze general features of binding, including conformational selection, multiplicity of binding modes, and nonspecific interactions, and to illustrate how the method can be applied to study these problems systematically. The equilibrium distributions can be used to generate biasing functions for simulations of multiprotein systems from which bulk thermodynamic quantities can be calculated. PMID:25782918

  1. A stable high-order perturbation of surfaces method for numerical simulation of diffraction problems in triply layered media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Youngjoon, E-mail: hongy@uic.edu; Nicholls, David P., E-mail: davidn@uic.edu

    The accurate numerical simulation of linear waves interacting with periodic layered media is a crucial capability in engineering applications. In this contribution we study the stable and high-order accurate numerical simulation of the interaction of linear, time-harmonic waves with a periodic, triply layered medium with irregular interfaces. In contrast with volumetric approaches, High-Order Perturbation of Surfaces (HOPS) algorithms are inexpensive interfacial methods which rapidly and recursively estimate scattering returns by perturbation of the interface shape. In comparison with Boundary Integral/Element Methods, the stable HOPS algorithm we describe here does not require specialized quadrature rules, periodization strategies, or the solution of dense non-symmetric positive definite linear systems. In addition, the algorithm is provably stable as opposed to other classical HOPS approaches. With numerical experiments we show the remarkable efficiency, fidelity, and accuracy one can achieve with an implementation of this algorithm.

  2. Bivariate spline solution of time dependent nonlinear PDE for a population density over irregular domains.

    PubMed

    Gutierrez, Juan B; Lai, Ming-Jun; Slavov, George

    2015-12-01

    We study a time-dependent partial differential equation (PDE), which arises from classic models in ecology involving logistic growth with an Allee effect, by introducing a discrete weak solution. Existence, uniqueness and stability of the discrete weak solutions are discussed. We use bivariate splines to approximate the discrete weak solution of the nonlinear PDE. A computational algorithm is designed to solve this PDE, and a convergence analysis of the algorithm is presented. We present some simulations of population development over irregular domains. Finally, we discuss applications in epidemiology and other ecological problems. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Turbulence and pollutant transport in urban street canyons under stable stratification: a large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Li, X.

    2014-12-01

    Thermal stratification of the atmospheric surface layer has a strong impact on the land-atmosphere exchange of turbulence, heat, and pollutant fluxes. Few studies have been carried out on the interaction of a weakly to moderately stably stratified atmosphere with the urban canopy. This study performs a large-eddy simulation of a modeled street canyon within a weakly to moderately stable atmospheric boundary layer. To better resolve the smaller eddy sizes resulting from the stable stratification, a higher spatial and temporal resolution is used. The detailed flow structure and turbulence inside the street canyon are analyzed, and the relationship between pollutant dispersion and the Richardson number of the atmosphere is investigated. Differences between these characteristics and those under neutral and unstable atmospheric boundary layers are emphasized.

  4. A unifying framework for rigid multibody dynamics and serial and parallel computational issues

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Jain, Abhinandan

    1989-01-01

    A unifying framework for various formulations of the dynamics of open-chain rigid multibody systems is discussed. Their suitability for serial and parallel processing is assessed. The framework is based on the derivation of intrinsic, i.e., coordinate-free, equations of the algorithms, which provides a suitable abstraction and permits a distinction to be made between the computational redundancy in the intrinsic and extrinsic equations. A set of spatial notation is used which allows the derivation of the various algorithms in a common setting and thus clarifies the relationships among them. The three classes of algorithms, viz. O(n), O(n^2) and O(n^3), for the solution of the dynamics problem are investigated. Researchers begin with the derivation of O(n^3) algorithms based on the explicit computation of the mass matrix, which provides insight into the underlying basis of the O(n) algorithms. From a computational perspective, the optimal choice of a coordinate frame for the projection of the intrinsic equations is discussed and the serial computational complexity of the different algorithms is evaluated. The three classes of algorithms are also analyzed for suitability for parallel processing. It is shown that the problem belongs to the class NC, with time and processor bounds of O(log^2(n)) and O(n^4), respectively. However, the algorithm that achieves these bounds is not stable. Researchers show that the fastest stable parallel algorithm achieves a computational complexity of O(n) with O(n^2) processors, and results from the parallelization of the O(n^3) serial algorithm.

  5. Research of high power and stable laser in portable Raman spectrometer based on SHINERS technology

    NASA Astrophysics Data System (ADS)

    Cui, Yongsheng; Yin, Yu; Wu, Yulin; Ni, Xuxiang; Zhang, Xiuda; Yan, Huimin

    2013-08-01

    The intensity of Raman scattering is very weak, only 10^-12 to 10^-6 of the incident light. To obtain the required sensitivity, traditional Raman spectrometers tend to be heavy and bulky, so they are mostly used as laboratory instruments. Based on the Shell-Isolated Nanoparticle-Enhanced Raman Spectroscopy (SHINERS) method, the Raman signal can be enhanced significantly, and a portable Raman spectrometer combined with SHINERS can be widely used in various fields. The laser source in such a portable instrument must be stable enough to output monochromatic, narrow-band light with constant power. While the laser is operating, temperature changes induce wavelength drift and thus affect the stability of the excitation power, so the working temperature of the laser must be strictly controlled. To keep the laser power and drive current stable, this design uses the Wavelength Electronics WLD3343 laser constant-current driver chip and a P89LPC935 microcontroller to drive an LML-785.0BF-XX laser diode (LD). With this scheme the Raman spectrometer can be kept small and the drive current constant, while providing functions such as slow start, over-current protection, and over-voltage protection. The output is continuously adjustable under control, and the high-power output requirement is satisfied. A MAX1968 chip provides accurate control of the laser temperature, meeting the demand for miniaturization. For temperature control, the integral truncation effect of a traditional positional PID algorithm is large and easily causes steady-state error. Each output of an incremental PID algorithm is independent of the absolute actuator position, and by controlling the output coefficients, full-scale output and excessive adjustment can be avoided, so the system reaches balance noticeably faster. A variable-integral, incremental digital PID algorithm is therefore used in the TEC temperature control system. Experimental results show that, compared with other schemes, the laser output power is more stable and reliable, the peak power is higher, the temperature is controlled to within +/-0.1°C, and the device is smaller. Using this laser, ideal Raman spectra of materials can be obtained in combination with SHINERS technology and the spectrometer.
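
    A minimal sketch of a variable-integral, incremental PID update of the kind described (gains, output limits, and the integral-weighting rule are illustrative assumptions, not the paper's tuned values):

        class IncrementalPID:
            """Incremental (velocity-form) PID for TEC temperature control:
            du[k] = Kp*(e[k]-e[k-1]) + Ki*w(e)*e[k] + Kd*(e[k]-2e[k-1]+e[k-2]).
            Only the change in drive output is computed each cycle, so a
            saturated output cannot wind the integral term up."""
            def __init__(self, kp, ki, kd, u_min, u_max):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.u_min, self.u_max = u_min, u_max
                self.e1 = self.e2 = 0.0   # e[k-1], e[k-2]
                self.u = 0.0

            def step(self, setpoint, measured):
                e = setpoint - measured
                # variable integral: weaken integration for large errors to
                # avoid overshoot, full integration near the setpoint
                w = 1.0 / (1.0 + abs(e))
                du = (self.kp * (e - self.e1)
                      + self.ki * w * e
                      + self.kd * (e - 2 * self.e1 + self.e2))
                self.u = min(self.u_max, max(self.u_min, self.u + du))
                self.e2, self.e1 = self.e1, e
                return self.u   # drive level for the TEC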

  6. Imaging through turbulence using a plenoptic sensor

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher C.

    2015-09-01

    Atmospheric turbulence can significantly affect imaging through paths near the ground. Atmospheric turbulence is generally treated as a time-varying inhomogeneity of the refractive index of the air, which disrupts the propagation of optical signals from the object to the viewer. Under deep or strong turbulence, the object is hard to recognize through direct imaging, and conventional imaging methods cannot handle these problems efficiently: the required time for lucky imaging can increase significantly, and image processing approaches require much more complex and iterative de-blurring algorithms. We propose an alternative approach using a plenoptic sensor to resample and analyze the image distortions. The plenoptic sensor uses a shared objective lens and a microlens array to form a mini Keplerian telescope array. Therefore, the image obtained by a conventional method is separated into an array of images that contain multiple copies of the object's image with less correlated turbulence disturbances. A high-dimensional lucky imaging algorithm can then be performed on the video collected by the plenoptic sensor. The corresponding algorithm selects the most stable pixels from the various image cells and reconstructs the object's image as if only a weak turbulence effect were present. Then, by comparing the reconstructed image with the recorded images in each MLA cell, the difference can be regarded as the turbulence effect. As a result, the retrieval of the object's image and the extraction of the turbulence effect can be performed simultaneously.
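
    The per-pixel "lucky" selection step can be sketched generically: over a stack of frames, keep for each pixel the values from the temporally most stable frames and average them. This is a toy stand-in for the high-dimensional lucky imaging algorithm described above, not the authors' plenoptic reconstruction; the frame stack below is synthetic.

        import numpy as np

        def lucky_reconstruct(frames, keep_fraction=0.2):
            stack = np.asarray(frames, dtype=float)   # shape (T, H, W)
            median = np.median(stack, axis=0)         # per-pixel reference
            dev = np.abs(stack - median)              # deviation of each frame
            k = max(1, int(keep_fraction * stack.shape[0]))
            idx = np.argsort(dev, axis=0)[:k]         # k most stable frames per pixel
            lucky = np.take_along_axis(stack, idx, axis=0)
            return lucky.mean(axis=0)

        frames = np.random.rand(50, 64, 64)  # synthetic stand-in for plenoptic video
        image = lucky_reconstruct(frames)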

  7. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Zheng, E-mail: 19994035@sina.com; Wang, Jun; Zhou, Bihua

    2014-03-15

    This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. To deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. For the purpose of balancing and enhancing the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of the optimization, so the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately under both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, the genetic algorithm, and the particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
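
    The core of such a hybrid can be sketched compactly: cuckoo moves are generated by Lévy flights, and an annealed Metropolis rule decides whether a trial replaces a nest, so worse solutions are occasionally accepted early on. This is a generic toy version under a geometric cooling schedule with an illustrative quadratic cost; it omits the paper's adaptive parameter tuning.

        import math
        import numpy as np

        def levy_step(dim, beta=1.5):
            # Mantegna's algorithm for Levy-flight step lengths
            sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                     (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = np.random.normal(0.0, sigma, dim)
            v = np.random.normal(0.0, 1.0, dim)
            return u / np.abs(v) ** (1 / beta)

        def hybrid_cs_sa(cost, dim=2, n_nests=15, iters=300, pa=0.25, t0=1.0):
            nests = np.random.uniform(-5, 5, (n_nests, dim))
            fit = np.array([cost(x) for x in nests])
            for it in range(iters):
                temp = t0 * 0.99 ** it                       # geometric cooling
                for i in range(n_nests):
                    trial = nests[i] + 0.1 * levy_step(dim)  # cuckoo move
                    df = cost(trial) - fit[i]
                    # SA rule: accept downhill, and uphill with prob exp(-df/T)
                    if df < 0 or np.random.rand() < math.exp(-df / max(temp, 1e-12)):
                        nests[i], fit[i] = trial, fit[i] + df
                # abandon a fraction pa of the worst nests
                k = max(1, int(pa * n_nests))
                worst = fit.argsort()[-k:]
                nests[worst] = np.random.uniform(-5, 5, (k, dim))
                fit[worst] = [cost(x) for x in nests[worst]]
            return nests[fit.argmin()], fit.min()

        best_x, best_f = hybrid_cs_sa(lambda x: float(np.sum(x ** 2)))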

  8. Verification of Pharmacogenetics-Based Warfarin Dosing Algorithms in Han-Chinese Patients Undertaking Mechanic Heart Valve Replacement

    PubMed Central

    Zhao, Li; Chen, Chunxia; Li, Bei; Dong, Li; Guo, Yingqiang; Xiao, Xijun; Zhang, Eryong; Qin, Li

    2014-01-01

    Objective To study the performance of pharmacogenetics-based warfarin dosing algorithms in the initial and the stable warfarin treatment phases in a cohort of Han-Chinese patients undertaking mechanic heart valve replacement. Methods We searched PubMed, the Chinese National Knowledge Infrastructure and Wanfang databases to select pharmacogenetics-based warfarin dosing models. Patients with mechanic heart valve replacement were consecutively recruited between March 2012 and July 2012. The predicted warfarin dose of each patient was calculated and compared with the observed initial and stable warfarin doses. The percentage of patients whose predicted dose fell within 20% of their actual therapeutic dose (percentage within 20%) and the mean absolute error (MAE) were used to evaluate the predictive accuracy of all the selected algorithms. Results A total of 8 algorithms, including the Du, Huang, Miao, Wei, Zhang, Lou, Gage, and International Warfarin Pharmacogenetics Consortium (IWPC) models, were tested in 181 patients. The MAE of the Gage, IWPC and 6 Han-Chinese pharmacogenetics-based warfarin dosing algorithms was less than 0.6 mg/day, and the percentage within 20% exceeded 45% for all of the selected models in both the initial and the stable treatment stages. When patients were stratified according to the warfarin dose range, all of the equations demonstrated better performance in the ideal-dose range (1.88–4.38 mg/day) than in the low-dose range (<1.88 mg/day). Among the 8 algorithms compared, the algorithms of Wei, Huang, and Miao showed a lower MAE and a higher percentage within 20% in both the initial and the stable warfarin dose prediction and in both the low-dose and the ideal-dose ranges. Conclusions All of the selected pharmacogenetics-based warfarin dosing regimens performed similarly in our cohort. However, the algorithms of Wei, Huang, and Miao showed a better potential for warfarin prediction in the initial and the stable treatment phases in Han-Chinese patients undertaking mechanic heart valve replacement. PMID:24728385
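
    The two accuracy metrics used in this verification are easy to compute; a small sketch with hypothetical dose vectors (illustrative values only, not study data):

        import numpy as np

        def mae(predicted, observed):
            # mean absolute error in mg/day
            return np.mean(np.abs(np.asarray(predicted) - np.asarray(observed)))

        def pct_within_20(predicted, observed):
            # percentage of patients predicted within 20% of the observed dose
            predicted, observed = np.asarray(predicted), np.asarray(observed)
            return 100.0 * np.mean(np.abs(predicted - observed) <= 0.2 * observed)

        pred = [2.5, 3.1, 1.9, 4.0]  # hypothetical predicted doses (mg/day)
        obs = [2.8, 3.0, 1.5, 4.4]   # hypothetical observed stable doses (mg/day)
        print(mae(pred, obs), pct_within_20(pred, obs))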

  9. Verification of pharmacogenetics-based warfarin dosing algorithms in Han-Chinese patients undertaking mechanic heart valve replacement.

    PubMed

    Zhao, Li; Chen, Chunxia; Li, Bei; Dong, Li; Guo, Yingqiang; Xiao, Xijun; Zhang, Eryong; Qin, Li

    2014-01-01

    To study the performance of pharmacogenetics-based warfarin dosing algorithms in the initial and the stable warfarin treatment phases in a cohort of Han-Chinese patients undertaking mechanic heart valve replacement. We searched PubMed, the Chinese National Knowledge Infrastructure and Wanfang databases to select pharmacogenetics-based warfarin dosing models. Patients with mechanic heart valve replacement were consecutively recruited between March 2012 and July 2012. The predicted warfarin dose of each patient was calculated and compared with the observed initial and stable warfarin doses. The percentage of patients whose predicted dose fell within 20% of their actual therapeutic dose (percentage within 20%) and the mean absolute error (MAE) were used to evaluate the predictive accuracy of all the selected algorithms. A total of 8 algorithms, including the Du, Huang, Miao, Wei, Zhang, Lou, Gage, and International Warfarin Pharmacogenetics Consortium (IWPC) models, were tested in 181 patients. The MAE of the Gage, IWPC and 6 Han-Chinese pharmacogenetics-based warfarin dosing algorithms was less than 0.6 mg/day, and the percentage within 20% exceeded 45% for all of the selected models in both the initial and the stable treatment stages. When patients were stratified according to the warfarin dose range, all of the equations demonstrated better performance in the ideal-dose range (1.88-4.38 mg/day) than in the low-dose range (<1.88 mg/day). Among the 8 algorithms compared, the algorithms of Wei, Huang, and Miao showed a lower MAE and a higher percentage within 20% in both the initial and the stable warfarin dose prediction and in both the low-dose and the ideal-dose ranges. All of the selected pharmacogenetics-based warfarin dosing regimens performed similarly in our cohort. However, the algorithms of Wei, Huang, and Miao showed a better potential for warfarin prediction in the initial and the stable treatment phases in Han-Chinese patients undertaking mechanic heart valve replacement.

  10. On the stability analysis of sharply stratified shear flows

    NASA Astrophysics Data System (ADS)

    Churilov, Semyon

    2018-05-01

    When the stability of a sharply stratified shear flow is studied, the density profile is usually taken to be stepwise, and the weak stratification between pycnoclines is neglected. As a consequence, in the instability domain of the flow two-sided neutral curves appear, such that the waves corresponding to them are neutrally stable whereas the neighboring waves on either side of the curve are unstable, in contrast with the classical result of Miles (J Fluid Mech 16:209-227, 1963), who proved that in stratified flows unstable oscillations can exist only on one side of the neutral curve. In the paper, the contradiction is resolved, and the changes in the flow stability pattern under the transition from a model stepwise density profile to a continuous one are analyzed. On this basis, a simple self-consistent algorithm is proposed for studying the stability of sharply stratified shear flows with a continuous density variation and an arbitrary monotonic velocity profile without inflection points. Because our calculations and the algorithm are both based on a method of stability analysis (Churilov J Fluid Mech 539:25-55, 2005; ibid, 617:301-326, 2008) which differs essentially from those usually used, the paper starts with a brief review of the method and the results obtained with it.

  11. Reinforcement Learning for Weakly-Coupled MDPs and an Application to Planetary Rover Control

    NASA Technical Reports Server (NTRS)

    Bernstein, Daniel S.; Zilberstein, Shlomo

    2003-01-01

    Weakly-coupled Markov decision processes can be decomposed into subprocesses that interact only through a small set of bottleneck states. We study a hierarchical reinforcement learning algorithm designed to take advantage of this particular type of decomposability. To test our algorithm, we use a decision-making problem faced by autonomous planetary rovers. In this problem, a Mars rover must decide which activities to perform and when to traverse between science sites in order to make the best use of its limited resources. In our experiments, the hierarchical algorithm performs better than Q-learning in the early stages of learning, but unlike Q-learning it converges to a suboptimal policy. This suggests that it may be advantageous to use the hierarchical algorithm when training time is limited.
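
    For reference, the flat Q-learning baseline mentioned above can be written in a few lines; the env interface below (reset(), actions(s), and step(a) returning the next state, reward, and a done flag) is a hypothetical stand-in for the rover decision problem, not the authors' implementation.

        import random
        from collections import defaultdict

        def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
            Q = defaultdict(float)                    # Q[(state, action)]
            for _ in range(episodes):
                s, done = env.reset(), False
                while not done:
                    acts = env.actions(s)
                    # epsilon-greedy action selection
                    if random.random() < eps:
                        a = random.choice(acts)
                    else:
                        a = max(acts, key=lambda a: Q[(s, a)])
                    s2, r, done = env.step(a)
                    if done:
                        target = r
                    else:
                        target = r + gamma * max(Q[(s2, b)] for b in env.actions(s2))
                    Q[(s, a)] += alpha * (target - Q[(s, a)])
                    s = s2
            return Q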

  12. Global navigation satellite system receiver for weak signals under all dynamic conditions

    NASA Astrophysics Data System (ADS)

    Ziedan, Nesreen Ibrahim

    The ability of a Global Navigation Satellite System (GNSS) receiver to work under weak-signal and various dynamic conditions is required in some applications, for example, to provide positioning capability in wireless devices or orbit determination of geostationary and high Earth orbit satellites. This dissertation develops Global Positioning System (GPS) receiver algorithms for such applications. Fifteen algorithms are developed for the GPS C/A signal. They cover all the main receiver functions, which include acquisition, fine acquisition, bit synchronization, code and carrier tracking, and navigation message decoding. They are integrated together, and they can be used in any software GPS receiver. They can also be modified to fit any other GPS or GNSS signals. The algorithms have new capabilities. The processing and memory requirements are considered in the design to allow the algorithms to fit the limited resources of some applications, and they do not require any assisting information. Weak signals can be acquired in the presence of strong interfering signals and under high dynamic conditions. The fine acquisition, bit synchronization, and tracking algorithms are based on the Viterbi algorithm and Extended Kalman filter approaches. The tracking algorithms' capabilities increase the time to lose lock. They have the ability to adaptively change the integration length and the code delay separation, and more than one code delay separation can be used at the same time. Large tracking errors can be detected and then corrected by re-initialization and acquisition-like algorithms. Detecting the navigation message is needed to increase the coherent integration; decoding it is needed to calculate the navigation solution. The decoding algorithm utilizes the message structure to enable decoding of signals with a high Bit Error Rate. The algorithms are demonstrated using simulated GPS C/A code signals and TCXO clocks. The results show the algorithms' ability to work reliably with 15 dB-Hz signals and accelerations over 6 g.
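
    The acquisition stage of any such receiver rests on circular correlation of the received samples with a local C/A code replica over a grid of Doppler bins. Below is the standard FFT-based parallel code-phase search, a generic sketch rather than the dissertation's weak-signal algorithm (which adds long coherent integration and Viterbi-based processing on top of a search like this).

        import numpy as np

        def acquire(samples, code, doppler_bins, fs):
            n = len(samples)
            t = np.arange(n) / fs
            code_fft = np.conj(np.fft.fft(code, n))
            best = (0.0, None, None)
            for fd in doppler_bins:
                wiped = samples * np.exp(-2j * np.pi * fd * t)  # Doppler wipe-off
                corr = np.abs(np.fft.ifft(np.fft.fft(wiped) * code_fft)) ** 2
                peak = int(corr.argmax())
                if corr[peak] > best[0]:
                    best = (corr[peak], fd, peak)  # power, Doppler bin, code phase
            return best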

  13. An experimental SMI adaptive antenna array for weak interfering signals

    NASA Technical Reports Server (NTRS)

    Dilsavor, R. L.; Gupta, I. J.

    1989-01-01

    A modified sample matrix inversion (SMI) algorithm designed to increase the suppression of weak interference is implemented on an existing experimental array system. The algorithm itself is fully described, as are a number of issues concerning its implementation and evaluation, such as sample scaling, snapshot formation, weight normalization, power calculation, and system calibration. Several experiments show that the steady-state performance (i.e., when many snapshots are used to calculate the array weights) of the experimental system compares favorably with its theoretical performance. It is demonstrated that standard SMI does not yield adequate suppression of weak interference. Modified SMI is then used to experimentally increase this suppression by as much as 13 dB.
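
    For context, the textbook SMI weight computation estimates the array covariance from K snapshots and solves R w = s. The diagonal loading shown below is a common robustness device and an assumption of this sketch, not necessarily the paper's modification (which specifically deepens nulls on weak interferers).

        import numpy as np

        def smi_weights(snapshots, steering, loading=1e-3):
            X = np.asarray(snapshots)        # (K, N): K snapshots, N elements
            R = X.conj().T @ X / X.shape[0]  # sample covariance estimate
            R = R + loading * (np.trace(R).real / R.shape[0]) * np.eye(R.shape[0])
            w = np.linalg.solve(R, steering) # solve R w = s
            return w / (steering.conj() @ w) # unit gain toward the desired signal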

  14. Active control of impulsive noise with symmetric α-stable distribution based on an improved step-size normalized adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yali; Zhang, Qizhi; Yin, Yixin

    2015-05-01

    In this paper, active control of impulsive noise with a symmetric α-stable (SαS) distribution is studied. A general step-size normalized filtered-x Least Mean Square (FxLMS) algorithm is developed based on an analysis of existing algorithms, and the Gaussian distribution function is used to normalize the step size. Compared with existing algorithms, the proposed algorithm requires neither parameter selection and threshold estimation nor cost-function selection and complex gradient computation. Computer simulations suggest that the proposed algorithm is effective in attenuating SαS impulsive noise, and the algorithm has also been implemented in an experimental ANC system. Experimental results show that the proposed scheme performs well for SαS impulsive noise attenuation.
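
    One plausible reading of the Gaussian step-size normalization is sketched below: the raw FxLMS step is scaled by a Gaussian function of the current error, so impulsive samples contribute almost nothing to adaptation. The constants and the exact placement of the normalization are assumptions of this sketch, not the paper's specification.

        import numpy as np

        def normalized_fxlms_step(w, x_filt, e, mu0=0.01, sigma=1.0):
            # w:      current control-filter weights
            # x_filt: recent reference samples filtered through the
            #         secondary-path estimate (same length as w)
            # e:      current error-microphone sample
            g = np.exp(-e ** 2 / (2.0 * sigma ** 2))  # Gaussian step scaling
            return w + mu0 * g * e * x_filt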

  15. Formation flying design and applications in weak stability boundary regions.

    PubMed

    Folta, David

    2004-05-01

    Weak stability regions serve as superior locations for interferometric scientific investigations. These regions are often selected to minimize environmental disturbances and maximize observation efficiency. Designs of formations in these regions are becoming ever more challenging as more complex missions are envisioned. The algorithms that enable formation design must be further developed to incorporate a better understanding of the weak stability boundary solution space. This development will improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple formation missions in weak stability boundary regions. This end-to-end support consists of mission operations, trajectory design, and control, and includes both algorithm and software development. The Constellation-X, Maxim, and Stellar Imager missions are examples of the use of improved numerical methods to attain constrained formation geometries and control their dynamical evolution. This paper presents a survey of formation missions in the weak stability boundary regions and a brief description of formation design using numerical and dynamical techniques.

  16. Characteristics of the turbulence in the stable boundary layer over complex terrain of the Loess Plateau, China

    NASA Astrophysics Data System (ADS)

    Liang, J.; Zhang, L.; Yuan, G.

    2017-12-01

    Accurate determination of surface turbulent fluxes in a stable boundary layer is of great practical importance in weather prediction and climate simulations, as well as in applications related to air pollution. To gain insight into the characteristics of turbulence in a stable boundary layer over the complex terrain of the Loess Plateau, we analyzed data from the Semi-Arid Climate and Environment Observatory of Lanzhou University (SACOL). We proposed a method to identify and efficiently isolate nonstationary motions from turbulence series, and examined the characteristics of nonstationary motions (nonstationary motions refer to gusty events on a greater scale than local shear-generated turbulence). The occurrence frequency of nonstationary motions was found to depend on the mean flow, being more frequent in weak-wind conditions and vanishing when the wind speed, U, was greater than 3.0 m s^-1. When U exceeded the threshold value of 1.0 m s^-1 for gradient Richardson number Ri ≤ 0.3 and 1.5 m s^-1 for Ri > 0.3, local shear-generated turbulence depended systematically on U with an average rate of 0.05 U. However, for the weak-wind condition, neither the mean wind speed nor the stability was an important factor for local turbulence. Under the weak-wind stable condition, affected by topography-induced nonstationary motions, the local turbulence was anisotropic, with a strong horizontal fluctuation and a weak vertical fluctuation, resulting in weakened heat mixing in the vertical direction and stronger non-closure of the energy balance. These findings assessed the validity of similarity theory in the stable boundary layer over complex terrain and revealed one reason for the stronger non-closure of the energy balance at night.

  17. Applications of fractional lower order S transform time frequency filtering algorithm to machine fault diagnosis

    PubMed Central

    Wang, Haibin; Zha, Daifeng; Li, Peng; Xie, Huicheng; Mao, Lili

    2017-01-01

    The Stockwell transform (ST) time-frequency representation (ST-TFR) is a time-frequency analysis method which combines the short-time Fourier transform with the wavelet transform, and the ST time-frequency filtering (ST-TFF) method, which takes advantage of time-frequency localized spectra, can separate signals from Gaussian noise. The ST-TFR and ST-TFF methods are used to analyze fault signals, which is reasonable and effective in general Gaussian noise cases. However, it is proved in this paper that the mechanical bearing fault signal belongs to an Alpha (α) stable distribution process (1 < α < 2), and in some special cases the noise is also α-stable distributed. The performance of the ST-TFR method degrades in an α-stable distribution noise environment, and the ST-TFF method consequently fails. Hence, a new fractional lower order ST time-frequency representation (FLOST-TFR) method employing fractional lower order moments and the ST, together with an inverse FLOST (IFLOST), is proposed in this paper. A new FLOST time-frequency filtering (FLOST-TFF) algorithm based on the FLOST-TFR method and IFLOST is also proposed, and its simplified version is presented. The discrete implementation of the FLOST-TFF algorithm is deduced, and the relevant steps are summarized. Simulation results demonstrate that the FLOST-TFR algorithm is clearly better than the existing ST-TFR algorithm under α-stable distribution noise, still works well in a Gaussian noise environment, and is robust. The FLOST-TFF method can effectively filter out α-stable distribution noise and restore the original signal. The performance of the FLOST-TFF algorithm is better than that of the ST-TFF method, with smaller mixed MSEs as α and the generalized signal-to-noise ratio (GSNR) change. Finally, the FLOST-TFR and FLOST-TFF methods are applied to analyze an outer race fault signal and extract its fault features under α-stable distribution noise, where excellent performance is shown. PMID:28406916

  18. An algorithm for engineering regime shifts in one-dimensional dynamical systems

    NASA Astrophysics Data System (ADS)

    Tan, James P. L.

    2018-01-01

    Regime shifts are discontinuous transitions between stable attractors hosting a system. They can occur as a result of a loss of stability in an attractor as a bifurcation is approached. In this work, we consider one-dimensional dynamical systems where attractors are stable equilibrium points. Relying on critical slowing down signals related to the stability of an equilibrium point, we present an algorithm for engineering regime shifts such that a system may escape an undesirable attractor into a desirable one. We test the algorithm on synthetic data from a one-dimensional dynamical system with a multitude of stable equilibrium points and also on a model of the population dynamics of spruce budworms in a forest. The algorithm and other ideas discussed here contribute to an important part of the literature on exercising greater control over the sometimes unpredictable nature of nonlinear systems.
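
    The critical-slowing-down signal that such an algorithm relies on can be estimated directly from a time series: near a bifurcation, the lag-1 autocorrelation of fluctuations around an equilibrium rises toward 1. A minimal sliding-window estimator follows; the window length is an illustrative choice, not a value from the paper.

        import numpy as np

        def lag1_autocorrelation(x):
            x = np.asarray(x, dtype=float)
            x = x - x.mean()
            return (x[:-1] @ x[1:]) / (x @ x)

        def slowing_down_signal(series, window=200):
            # rising values warn that the hosting attractor is losing stability
            return [lag1_autocorrelation(series[i - window:i])
                    for i in range(window, len(series))]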

  19. Pharmacogenetics-based warfarin dosing algorithm decreases time to stable anticoagulation and the risk of major hemorrhage: an updated meta-analysis of randomized controlled trials.

    PubMed

    Wang, Zhi-Quan; Zhang, Rui; Zhang, Peng-Pai; Liu, Xiao-Hong; Sun, Jian; Wang, Jun; Feng, Xiang-Fei; Lu, Qiu-Fen; Li, Yi-Gang

    2015-04-01

    Warfarin is still the most widely used oral anticoagulant for thromboembolic diseases, despite the recently introduced novel anticoagulants. However, difficulty in maintaining a stable dose within the therapeutic range and subsequent serious adverse effects have markedly limited its use in clinical practice. Pharmacogenetics-based warfarin dosing algorithms are a recently emerged strategy to predict the initial and maintenance doses of warfarin. However, whether this strategy is superior to the conventional clinically guided dosing algorithm remains controversial. We compared pharmacogenetics-based and clinically guided dosing algorithms in an updated meta-analysis. We searched OVID MEDLINE, EMBASE, and the Cochrane Library for relevant citations. The primary outcome was the percentage of time in the therapeutic range. The secondary outcomes were the time to stable therapeutic dose and the risks of adverse events, including all-cause mortality, thromboembolic events, total bleedings, and major bleedings. Eleven randomized controlled trials with 2639 participants were included. Our pooled estimates indicated that the pharmacogenetics-based dosing algorithm did not improve the percentage of time in the therapeutic range [weighted mean difference, 4.26; 95% confidence interval (CI), -0.50 to 9.01; P = 0.08], but it significantly shortened the time to stable therapeutic dose (weighted mean difference, -8.67; 95% CI, -11.86 to -5.49; P < 0.00001). Additionally, the pharmacogenetics-based algorithm significantly reduced the risk of major bleedings (odds ratio, 0.48; 95% CI, 0.23 to 0.98; P = 0.04), but it did not reduce the risks of all-cause mortality, total bleedings, or thromboembolic events. Our results suggest that a pharmacogenetics-based warfarin dosing algorithm significantly improves the efficiency of International Normalized Ratio correction and reduces the risk of major hemorrhage.

  20. Tracks detection from high-orbit space objects

    NASA Astrophysics Data System (ADS)

    Shumilov, Yu. P.; Vygon, V. G.; Grishin, E. A.; Konoplev, A. O.; Semichev, O. P.; Shargorodskii, V. D.

    2017-05-01

    The paper presents the results of studies of a composite algorithm for the detection of high-orbit space objects. Before the algorithm is applied, a series of frames with weak tracks of space objects, which can be discrete, is recorded. The algorithm includes pre-processing that is classical for astronomy, matched filtering of each frame and its threshold processing, a shear transformation, median filtering of the transformed series of frames, repeated threshold processing, and detection decision-making. Weak tracks of space objects were modeled on real frames of the night starry sky obtained with a stationary telescope. It is shown that the penetrating power (limiting magnitude) of the optoelectronic device is increased by almost 2m.

  1. Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm

    PubMed Central

    Veladi, H.

    2014-01-01

    A pushover analysis method based on the semirigid connection concept is developed, and the colliding bodies optimization algorithm is employed to find the optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared with conventional design methods to show the strengths and weaknesses of the algorithm. PMID:25202717

  2. Performance-based seismic design of steel frames utilizing colliding bodies algorithm.

    PubMed

    Veladi, H

    2014-01-01

    A pushover analysis method based on the semirigid connection concept is developed, and the colliding bodies optimization algorithm is employed to find the optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared with conventional design methods to show the strengths and weaknesses of the algorithm.

  3. Local Characteristics of the Nocturnal Boundary Layer in Response to External Pressure Forcing

    NASA Astrophysics Data System (ADS)

    van der Linden, Steven; Baas, Peter; van Hooft, Antoon; van Hooijdonk, Ivo; Bosveld, Fred; van de Wiel, Bas

    2017-04-01

    Geostrophic wind speed data, derived from pressure observations, are used in combination with tower measurements to investigate the nocturnal stable boundary layer at Cabauw, The Netherlands. Since the geostrophic wind speed is not directly influenced by local nocturnal stability, it may be regarded as an external forcing parameter of the nocturnal stable boundary layer. This is in contrast to local parameters such as the in situ wind speed, the Monin-Obukhov stability parameter (z/L), or the local Richardson number. To characterize the stable boundary layer, ensemble averages of clear-sky nights with similar geostrophic wind speed are formed. In this manner, the mean dynamical behavior of near-surface turbulent characteristics and composite profiles of wind and temperature are systematically investigated. We find that the classification results in a gradual ordering of the diagnosed variables in terms of the geostrophic wind speed. In an ensemble sense, the transition from the weakly stable to the very stable boundary layer is more gradual than expected. Interestingly, for very weak geostrophic winds turbulent activity is found to be negligibly small while the resulting boundary-layer cooling stays finite. Realistic numerical simulations of such cases should therefore include a solid description of other thermodynamic processes such as soil heat conduction and radiative transfer. This prerequisite poses a challenge for Large-Eddy Simulations of weak-wind nocturnal boundary layers.

  4. Hydrodynamic control of microphytoplankton bloom in a coastal sea

    NASA Astrophysics Data System (ADS)

    Murty, K. Narasimha; Sarma, Nittala S.; Pandi, Sudarsana Rao; Chiranjeevulu, Gundala; Kiran, Rayaprolu; Muralikrishna, R.

    2017-08-01

    The influence of hydrodynamics on phytoplankton bloom occurrence and formation has not been adequately reported. Here, we document diurnal observations in the tropical Bay of Bengal's mid-western shelf region which reveal microphytoplankton cell density maxima, in association with neap tide, many times higher than could be accounted for by solar insolation and nutrient levels. In summer, when phytoplankton cells were abundant and tide-caused zonation brought the cell density of Guinardia delicatula to a critical value, aggregation intensified into a bloom. Mucilaginous exudates from the alga under heat and silicate stress likely promoted the transient bloom, while a stable water column and weak winds left it undisturbed. The phytoplankton aggregates have implications as a food resource in the benthic region (implying higher fishery potential), in carbon dioxide sequestration (carbon burial), and in efforts towards improving remote sensing algorithms for chlorophyll in the coastal region.

  5. Optimization-based additive decomposition of weakly coercive problems with applications

    DOE PAGES

    Bochev, Pavel B.; Ridzal, Denis

    2016-01-27

    In this study, we present an abstract mathematical framework for an optimization-based additive decomposition of a large class of variational problems into a collection of concurrent subproblems. The framework replaces a given monolithic problem by an equivalent constrained optimization formulation in which the subproblems define the optimization constraints and the objective is to minimize the mismatch between their solutions. The significance of this reformulation stems from the fact that one can solve the resulting optimality system by an iterative process involving only solutions of the subproblems. Consequently, assuming that stable numerical methods and efficient solvers are available for every subproblem, our reformulation leads to robust and efficient numerical algorithms for a given monolithic problem by breaking it into subproblems that can be handled more easily. An application of the framework to the Oseen equations illustrates its potential.

  6. Mixture of Segmenters with Discriminative Spatial Regularization and Sparse Weight Selection*

    PubMed Central

    Chen, Ting; Rangarajan, Anand; Eisenschenk, Stephan J.

    2011-01-01

    This paper presents a novel segmentation algorithm which automatically learns the combination of weak segmenters and builds a strong one based on the assumption that the locally weighted combination varies w.r.t. both the weak segmenters and the training images. We learn the weighted combination during the training stage using a discriminative spatial regularization which depends on training set labels. A closed form solution to the cost function is derived for this approach. In the testing stage, a sparse regularization scheme is imposed to avoid overfitting. To the best of our knowledge, such a segmentation technique has never been reported in literature and we empirically show that it significantly improves on the performances of the weak segmenters. After showcasing the performance of the algorithm in the context of atlas-based segmentation, we present comparisons to the existing weak segmenter combination strategies on a hippocampal data set. PMID:22003748

  7. Non-negative Matrix Factorization for Self-calibration of Photometric Redshift Scatter in Weak-lensing Surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Le; Yu, Yu; Zhang, Pengjie, E-mail: lezhang@sjtu.edu.cn

    Photo-z error is one of the major sources of systematics degrading the accuracy of weak-lensing cosmological inferences. Zhang et al. proposed a self-calibration method combining galaxy–galaxy correlations and galaxy–shear correlations between different photo-z bins. Fisher matrix analysis shows that it can determine the rate of photo-z outliers at a level of 0.01%–1% merely using photometric data and does not rely on any prior knowledge. In this paper, we develop a new algorithm to implement this method by solving a constrained nonlinear optimization problem arising in the self-calibration process. Based on the techniques of fixed-point iteration and non-negative matrix factorization, the proposed algorithm can efficiently and robustly reconstruct the scattering probabilities between the true-z and photo-z bins. The algorithm has been tested extensively by applying it to mock data from simulated stage IV weak-lensing projects. We find that the algorithm provides a successful recovery of the scatter rates at the level of 0.01%–1%, and of the true mean redshifts of photo-z bins at the level of 0.001, which may satisfy the requirements of future lensing surveys.
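
    For orientation, the non-negativity-preserving fixed-point iterations at the heart of such a reconstruction resemble the classic multiplicative NMF updates of Lee and Seung, sketched below on a generic matrix; the paper's actual solver handles additional constraints specific to the photo-z scattering matrix.

        import numpy as np

        def nmf(V, rank, iters=500, eps=1e-9):
            rng = np.random.default_rng(0)
            W = rng.random((V.shape[0], rank))
            H = rng.random((rank, V.shape[1]))
            for _ in range(iters):
                H *= (W.T @ V) / (W.T @ W @ H + eps)  # update keeps H >= 0
                W *= (V @ H.T) / (W @ H @ H.T + eps)  # update keeps W >= 0
            return W, H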

  8. CAVIAR: CLASSIFICATION VIA AGGREGATED REGRESSION AND ITS APPLICATION IN CLASSIFYING OASIS BRAIN DATABASE

    PubMed Central

    Chen, Ting; Rangarajan, Anand; Vemuri, Baba C.

    2010-01-01

    This paper presents a novel classification via aggregated regression algorithm – dubbed CAVIAR – and its application to the OASIS MRI brain image database. The CAVIAR algorithm simultaneously combines a set of weak learners based on the assumption that the weight combination for the final strong hypothesis in CAVIAR depends on both the weak learners and the training data. A regularization scheme using the nearest neighbor method is imposed in the testing stage to avoid overfitting. A closed form solution to the cost function is derived for this algorithm. We use a novel feature – the histogram of the deformation field between the MRI brain scan and the atlas which captures the structural changes in the scan with respect to the atlas brain – and this allows us to automatically discriminate between various classes within OASIS [1] using CAVIAR. We empirically show that CAVIAR significantly increases the performance of the weak classifiers by showcasing the performance of our technique on OASIS. PMID:21151847

  9. CAVIAR: CLASSIFICATION VIA AGGREGATED REGRESSION AND ITS APPLICATION IN CLASSIFYING OASIS BRAIN DATABASE.

    PubMed

    Chen, Ting; Rangarajan, Anand; Vemuri, Baba C

    2010-04-14

    This paper presents a novel classification via aggregated regression algorithm - dubbed CAVIAR - and its application to the OASIS MRI brain image database. The CAVIAR algorithm simultaneously combines a set of weak learners based on the assumption that the weight combination for the final strong hypothesis in CAVIAR depends on both the weak learners and the training data. A regularization scheme using the nearest neighbor method is imposed in the testing stage to avoid overfitting. A closed form solution to the cost function is derived for this algorithm. We use a novel feature - the histogram of the deformation field between the MRI brain scan and the atlas which captures the structural changes in the scan with respect to the atlas brain - and this allows us to automatically discriminate between various classes within OASIS [1] using CAVIAR. We empirically show that CAVIAR significantly increases the performance of the weak classifiers by showcasing the performance of our technique on OASIS.

  10. A maximally stable extremal region based scene text localization method

    NASA Astrophysics Data System (ADS)

    Xiao, Chengqiu; Ji, Lixin; Gao, Chao; Li, Shaomei

    2015-07-01

    Text localization in natural scene images is an important prerequisite for many content-based image analysis tasks. This paper proposes a novel text localization algorithm. First, a fast pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSER) as basic character candidates. Second, these candidates are filtered by using the properties of the fitted ellipse and the distribution properties of characters to exclude most non-characters. Finally, a new extremal-region projection merging algorithm is designed to group character candidates into words. Experimental results show that the proposed method has an advantage in speed and achieves relatively higher precision and recall rates than the latest published algorithms.
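
    The first stage, extracting MSERs as character candidates, is available off the shelf; a minimal OpenCV sketch follows (the input filename is hypothetical, and the paper's pruning, ellipse-filtering, and projection-merging stages are not reproduced).

        import cv2

        img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
        mser = cv2.MSER_create()
        regions, bboxes = mser.detectRegions(img)            # candidate regions
        for x, y, w, h in bboxes:
            cv2.rectangle(img, (x, y), (x + w, y + h), 255, 1)
        cv2.imwrite("candidates.png", img)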

  11. Approximated affine projection algorithm for feedback cancellation in hearing aids.

    PubMed

    Lee, Sangmin; Kim, In-Young; Park, Young-Cheol

    2007-09-01

    We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.
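
    For reference, one exact affine projection update of order P is shown below; the paper replaces the matrix inversion with Gauss-Seidel iterations and adds prediction-based step-size control, neither of which is reproduced in this baseline sketch.

        import numpy as np

        def ap_update(w, X, d, mu=0.5, delta=1e-4):
            # X: (L, P) matrix whose columns are the P most recent input vectors
            # d: length-P vector of desired (microphone) samples
            e = d - X.T @ w                           # a-priori error vector
            G = X.T @ X + delta * np.eye(X.shape[1])  # regularized Gram matrix
            g = np.linalg.solve(G, e)                 # exact solve; GS approximates this
            return w + mu * X @ g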

  12. A possible mechanism of interleaving at weak baroclinic fronts under stable-stable stratification in the Arctic Basin

    NASA Astrophysics Data System (ADS)

    Kuzmina, Natalia; Izvekova, Yulia N.

    2016-04-01

    Some analytical solutions are found for the problem of three-dimensional instability of a weak geostrophic flow with linear velocity shear, taking into account vertical diffusion of buoyancy. The analysis is based on the potential vorticity equation in a long-wave approximation, when the horizontal scale of disturbances is taken to be much larger than the local baroclinic Rossby radius. It is hypothesized that the solutions found can be applied to describe stable and unstable disturbances of planetary scale, especially in the Arctic Basin, where weak baroclinic fronts with a typical temporal variability period of the order of several years or more are observed and the beta-effect is negligible. Stable (decreasing with time) solutions describe disturbances that, in contrast to Rossby waves, can propagate both to the west and to the east depending on the sign of the linear shear of the geostrophic velocity. The unstable (growing with time) solutions are applied to describe large-scale intrusions at baroclinic fronts under stable-stable thermohaline stratification observed in the upper layer of the Polar Deep Water in the Eurasian Basin. The proposed description of intrusive layering can be considered as a possible alternative to the mechanism of interleaving due to differential mixing (Merryfield, 2002; Kuzmina et al., 2011). References: Kuzmina N., Rudels B., Zhurbas V., Stipa T. On the structure and dynamical features of intrusive layering in the Eurasian Basin in the Arctic Ocean. J. Geophys. Res., 2011, 116, C00D11, doi:10.1029/2010JC006920. Merryfield W. J. Intrusions in double-diffusively stable Arctic Waters: Evidence for differential mixing? J. Phys. Oceanogr., 2002, 32, 1452-1459.

  13. A Sparsity-Promoted Method Based on Majorization-Minimization for Weak Fault Feature Enhancement

    PubMed Central

    Hao, Yansong; Song, Liuyang; Tang, Gang; Yuan, Hongfang

    2018-01-01

    Fault transient impulses induced by faulty components in rotating machinery usually contain substantial interference. Fault features are comparatively weak in the initial fault stage, which renders fault diagnosis more difficult. In this case, a sparse representation method based on the Majorization-Minimization (MM) algorithm is proposed to enhance weak fault features and extract the features from strong background noise. However, the traditional MM algorithm suffers from two issues, which are the choice of sparse basis and complicated calculations. To address these challenges, a modified MM algorithm is proposed in which a sparse optimization objective function is designed first. Inspired by the Basis Pursuit (BP) model, the optimization function integrates an impulsive feature-preserving factor and a penalty function factor. Second, a modified Majorization iterative method is applied to address the convex optimization problem of the designed function. A series of sparse coefficients, containing only the transient components, can be achieved through iteration. It is noteworthy that there is no need to select the sparse basis in the proposed iterative method because it is fixed as a unit matrix. The reconstruction step is then omitted, which can significantly increase detection efficiency. Eventually, envelope analysis of the sparse coefficients is performed to extract the weak fault features. Simulated and experimental signals including bearings and gearboxes are employed to validate the effectiveness of the proposed method. In addition, comparisons are made to prove that the proposed method outperforms the traditional MM algorithm in terms of detection results and efficiency. PMID:29597280

  14. A Sparsity-Promoted Method Based on Majorization-Minimization for Weak Fault Feature Enhancement.

    PubMed

    Ren, Bangyue; Hao, Yansong; Wang, Huaqing; Song, Liuyang; Tang, Gang; Yuan, Hongfang

    2018-03-28

    Fault transient impulses induced by faulty components in rotating machinery usually contain substantial interference. Fault features are comparatively weak in the initial fault stage, which renders fault diagnosis more difficult. In this case, a sparse representation method based on the Majorization-Minimization (MM) algorithm is proposed to enhance weak fault features and extract the features from strong background noise. However, the traditional MM algorithm suffers from two issues, which are the choice of sparse basis and complicated calculations. To address these challenges, a modified MM algorithm is proposed in which a sparse optimization objective function is designed first. Inspired by the Basis Pursuit (BP) model, the optimization function integrates an impulsive feature-preserving factor and a penalty function factor. Second, a modified Majorization iterative method is applied to address the convex optimization problem of the designed function. A series of sparse coefficients, containing only the transient components, can be achieved through iteration. It is noteworthy that there is no need to select the sparse basis in the proposed iterative method because it is fixed as a unit matrix. The reconstruction step is then omitted, which can significantly increase detection efficiency. Eventually, envelope analysis of the sparse coefficients is performed to extract the weak fault features. Simulated and experimental signals including bearings and gearboxes are employed to validate the effectiveness of the proposed method. In addition, comparisons are made to prove that the proposed method outperforms the traditional MM algorithm in terms of detection results and efficiency.

  15. Discriminating between Light- and Heavy-Tailed Distributions with Limit Theorem.

    PubMed

    Burnecki, Krzysztof; Wylomanska, Agnieszka; Chechkin, Aleksei

    2015-01-01

    In this paper we propose an algorithm to distinguish between light- and heavy-tailed probability laws underlying random datasets. The idea of the algorithm, which is visual and easy to implement, is to check whether the underlying law belongs to the domain of attraction of the Gaussian or a non-Gaussian stable distribution by examining its rate of convergence. The method allows one to discriminate between stable and various non-stable distributions. The test allows one to differentiate between distributions which appear the same according to the standard Kolmogorov-Smirnov test. In particular, it helps to distinguish between stable and Student's t probability laws, as well as between stable and tempered stable distributions, cases which are considered in the literature as very cumbersome. Finally, we illustrate the procedure on plasma data to identify cases with the so-called L-H transition.

  16. Discriminating between Light- and Heavy-Tailed Distributions with Limit Theorem

    PubMed Central

    Chechkin, Aleksei

    2015-01-01

    In this paper we propose an algorithm to distinguish between light- and heavy-tailed probability laws underlying random datasets. The idea of the algorithm, which is visual and easy to implement, is to check whether the underlying law belongs to the domain of attraction of the Gaussian or a non-Gaussian stable distribution by examining its rate of convergence. The method allows one to discriminate between stable and various non-stable distributions. The test allows one to differentiate between distributions which appear the same according to the standard Kolmogorov–Smirnov test. In particular, it helps to distinguish between stable and Student’s t probability laws, as well as between stable and tempered stable distributions, cases which are considered in the literature as very cumbersome. Finally, we illustrate the procedure on plasma data to identify cases with the so-called L-H transition. PMID:26698863

  17. New Stable Cu(I) Catalyst Supported on Weakly Acidic Polyacrylate Resin for Green C-N Coupling: Synthesis of N-(Pyridin-4-yl)benzene Amines and N,N-Bis(pyridine-4-yl)benzene Amines.

    PubMed

    Kore, Nitin; Pazdera, Pavel

    2016-12-22

    A method for preparation of a new stable Cu(I) catalyst supported on weakly acidic polyacrylate resin without additional stabilizing ligands is described. A simple and efficient methodology for Ullmann Cu(I) catalyzed C-N cross coupling reactions using this original catalyst is reported. Coupling reactions of 4-chloropyridinium chloride with anilines containing electron donating (EDG) or electron withdrawing (EWG) groups, naphthalen-2-amine and piperazine, respectively, are successfully demonstrated.

  18. Simultaneous and semi-alternating projection algorithms for solving split equality problems.

    PubMed

    Dong, Qiao-Li; Jiang, Dan

    2018-01-01

    In this article, we first introduce two simultaneous projection algorithms for solving the split equality problem by using a new choice of the stepsize, and then propose two semi-alternating projection algorithms. The weak convergence of the proposed algorithms is analyzed under standard conditions. As applications, we extend the results to solve the split feasibility problem. Finally, a numerical example is presented to illustrate the efficiency and advantage of the proposed algorithms.

  19. Symmetric Stream Cipher using Triple Transposition Key Method and Base64 Algorithm for Security Improvement

    NASA Astrophysics Data System (ADS)

    Nurdiyanto, Heri; Rahim, Robbi; Wulan, Nur

    2017-12-01

    Symmetric cryptographic algorithms are known to have many weaknesses in the encryption process compared with asymmetric algorithms. A symmetric stream cipher is an algorithm that works by an XOR operation between the plaintext and a key. To improve the security of the symmetric stream cipher, an improvement is made by using a Triple Transposition Key, developed from the Transposition Cipher, and by using the Base64 algorithm for the final encryption step. Experiments show that the ciphertext produced is good enough and very random.
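
    The two generic building blocks named above, an XOR stream and Base64 output, are easy to sketch. The Triple Transposition Key schedule itself is not reproduced, so the key below is an ordinary byte string, and note that Base64 adds encoding, not security.

        import base64
        from itertools import cycle

        def xor_stream(data: bytes, key: bytes) -> bytes:
            # XOR each plaintext byte with the repeating key stream
            return bytes(b ^ k for b, k in zip(data, cycle(key)))

        def encrypt(plaintext: str, key: bytes) -> str:
            return base64.b64encode(xor_stream(plaintext.encode(), key)).decode()

        def decrypt(ciphertext: str, key: bytes) -> str:
            return xor_stream(base64.b64decode(ciphertext), key).decode()

        c = encrypt("attack at dawn", b"hypothetical-key")
        assert decrypt(c, b"hypothetical-key") == "attack at dawn"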

  20. Probes for dark matter physics

    NASA Astrophysics Data System (ADS)

    Khlopov, Maxim Yu.

    The existence of cosmological dark matter is part of the bedrock of modern cosmology. The dark matter is assumed to be nonbaryonic and to consist of new stable particles. The Weakly Interacting Massive Particle (WIMP) miracle motivates the search for neutral, stable, weakly interacting particles in underground experiments by their nuclear recoil, and at colliders by the missing energy and momentum they carry away. However, the lack of WIMP effects in direct underground searches and at colliders may point to other forms of dark matter candidates. These candidates may be weakly interacting slim particles, superweakly interacting particles, or composite dark matter, in which new particles are bound. Their existence should lead to cosmological effects that can find probes in astrophysical data. However, if composite dark matter contains stable electrically charged leptons and quarks bound by ordinary Coulomb interaction in elusive dark atoms, these charged constituents of dark atoms can be the subject of direct experimental tests at colliders. Models predicting stable particles with charge -2, without stable particles with charges +1 and -1, can avoid severe constraints on anomalous isotopes of light elements and provide a solution for the puzzles of dark matter searches. In such models, the excess -2 charged particles are bound with primordial helium in O-helium atoms, maintaining a specific nuclear-interacting form of dark matter. The successful development of composite dark matter scenarios calls for experimental searches for the doubly charged constituents of dark atoms, making the experimental search for exotic stable doubly charged particles an experimentum crucis for dark atoms of composite dark matter.

  1. Automatic arrival time detection for earthquakes based on Modified Laplacian of Gaussian filter

    NASA Astrophysics Data System (ADS)

    Saad, Omar M.; Shalaby, Ahmed; Samy, Lotfy; Sayed, Mohammed S.

    2018-04-01

    Precise identification of the onset time of an earthquake is imperative for correctly determining the earthquake's location and the other parameters used for building seismic catalogues. P-wave arrivals of weak events or micro-earthquakes cannot be precisely detected due to background noise. In this paper, we propose a novel approach based on a Modified Laplacian of Gaussian (MLoG) filter to detect the onset time even at very weak signal-to-noise ratios (SNRs). The proposed algorithm utilizes a denoising-filter algorithm to smooth the background noise, employing the MLoG mask to filter the seismic data. Afterward, we apply a dual-threshold comparator to detect the onset time of the event. The results show that the proposed algorithm can detect the onset time for micro-earthquakes accurately, at an SNR of -12 dB. The proposed algorithm achieves an onset-time picking accuracy of 93% with a standard deviation error of 0.10 s for 407 field seismic waveforms. We also compare the results with the short-term/long-term average (STA/LTA) algorithm and the Akaike Information Criterion (AIC), and the proposed algorithm outperforms both.
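
    A generic Laplacian-of-Gaussian characteristic function with a dual-threshold comparator captures the flavor of this pipeline; the modified mask and the thresholds of the actual MLoG algorithm are not reproduced here, so the values below are illustrative.

        import numpy as np
        from scipy.ndimage import gaussian_laplace

        def pick_onset(trace, sigma=20, hi=3.0, lo=1.5):
            f = np.abs(gaussian_laplace(np.asarray(trace, float), sigma))
            z = (f - f.mean()) / f.std()    # standardized characteristic function
            above = np.flatnonzero(z > hi)  # high-threshold detections
            if above.size == 0:
                return None
            k = above[0]
            while k > 0 and z[k - 1] > lo:  # walk back to the low-threshold onset
                k -= 1
            return k                        # sample index of the estimated onset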

  2. Probabilistic cosmological mass mapping from weak lensing shear

    DOE PAGES

    Schneider, M. D.; Ng, K. Y.; Dawson, W. A.; ...

    2017-04-10

    Here, we infer gravitational lensing shear and convergence fields from galaxy ellipticity catalogs under a spatial process prior for the lensing potential. We demonstrate the performance of our algorithm with simulated Gaussian-distributed cosmological lensing shear maps and a reconstruction of the mass distribution of the merging galaxy cluster Abell 781 using galaxy ellipticities measured with the Deep Lens Survey. Given interim posterior samples of lensing shear or convergence fields on the sky, we describe an algorithm to infer cosmological parameters via lens field marginalization. In the most general formulation of our algorithm we make no assumptions about weak shear or Gaussian-distributed shape noise or shears. Because we require solutions and matrix determinants of a linear system of dimension that scales with the number of galaxies, we expect our algorithm to require parallel high-performance computing resources for application to ongoing wide field lensing surveys.

  3. Probabilistic Cosmological Mass Mapping from Weak Lensing Shear

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, M. D.; Dawson, W. A.; Ng, K. Y.

    2017-04-10

    We infer gravitational lensing shear and convergence fields from galaxy ellipticity catalogs under a spatial process prior for the lensing potential. We demonstrate the performance of our algorithm with simulated Gaussian-distributed cosmological lensing shear maps and a reconstruction of the mass distribution of the merging galaxy cluster Abell 781 using galaxy ellipticities measured with the Deep Lens Survey. Given interim posterior samples of lensing shear or convergence fields on the sky, we describe an algorithm to infer cosmological parameters via lens field marginalization. In the most general formulation of our algorithm we make no assumptions about weak shear or Gaussian-distributed shape noise or shears. Because we require solutions and matrix determinants of a linear system of dimension that scales with the number of galaxies, we expect our algorithm to require parallel high-performance computing resources for application to ongoing wide field lensing surveys.

  4. On Stable Marriages and Greedy Matchings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manne, Fredrik; Naim, Md; Lerring, Hakon

    2016-12-11

    Research on stable marriage problems has a long and mathematically rigorous history, while that of exploiting greedy matchings in combinatorial scientific computing is a younger and less developed research field. In this paper we consider the relationships between these two areas. In particular, we show that several problems related to computing greedy matchings can be formulated as stable marriage problems, and as a consequence several recently proposed algorithms for computing greedy matchings are in fact special cases of well-known algorithms for the stable marriage problem. However, in terms of implementations and practical scalable solutions on modern hardware, the greedy matching community has made considerable progress. We show that due to the strong relationship between these two fields many of these results are also applicable to solving stable marriage problems.
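
    The canonical algorithm on the stable-marriage side of this correspondence is Gale-Shapley deferred acceptance, sketched here on a toy instance:

        def gale_shapley(men_prefs, women_prefs):
            # each dict maps a person to an ordered preference list of the other side
            rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
            free = list(men_prefs)
            next_choice = {m: 0 for m in men_prefs}
            engaged = {}                              # woman -> man
            while free:
                m = free.pop()
                w = men_prefs[m][next_choice[m]]
                next_choice[m] += 1
                if w not in engaged:
                    engaged[w] = m
                elif rank[w][m] < rank[w][engaged[w]]:
                    free.append(engaged[w])           # w trades up; old partner freed
                    engaged[w] = m
                else:
                    free.append(m)                    # w rejects m; he proposes again
            return {m: w for w, m in engaged.items()}

        pairs = gale_shapley({"a": ["x", "y"], "b": ["y", "x"]},
                             {"x": ["b", "a"], "y": ["a", "b"]})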

  5. Weak hydrogen bond topology in 1,1-difluoroethane dimer: A rotational study

    NASA Astrophysics Data System (ADS)

    Chen, Junhua; Zheng, Yang; Wang, Juan; Feng, Gang; Xia, Zhining; Gou, Qian

    2017-09-01

    The rotational spectrum of the 1,1-difluoroethane dimer has been investigated by pulsed-jet Fourier transform microwave spectroscopy. The two most stable isomers have been detected, both stabilized by a network of three C—H⋯F—C weak hydrogen bonds: in the most stable isomer, two difluoromethyl C—H groups and one methyl C—H group act as the weak proton donors, whilst in the second isomer, two methyl C—H groups and one difluoromethyl C—H group act as the weak proton donors. For the global minimum, the measurements have also been extended to its four 13C isotopologues in natural abundance, allowing a precise, although partial, structural determination. Relative intensity measurements on a set of μa-type transitions allowed the relative population ratio of the two isomers in the pulsed jet to be estimated as NI/NII ˜ 6/1, indicating a much larger energy gap between the two isomers than expected from ab initio calculations, consistent with the result from pseudo-diatomic dissociation energy estimates.

  6. Efficient Learning Algorithms with Limited Information

    ERIC Educational Resources Information Center

    De, Anindya

    2013-01-01

    The thesis explores efficient learning algorithms in settings which are more restrictive than the PAC model of learning (Valiant) in one of the following two senses: (i) The learning algorithm has a very weak access to the unknown function, as in, it does not get labeled samples for the unknown function (ii) The error guarantee required from the…

  7. Automated Test Assembly for Cognitive Diagnosis Models Using a Genetic Algorithm

    ERIC Educational Resources Information Center

    Finkelman, Matthew; Kim, Wonsuk; Roussos, Louis A.

    2009-01-01

    Much recent psychometric literature has focused on cognitive diagnosis models (CDMs), a promising class of instruments used to measure the strengths and weaknesses of examinees. This article introduces a genetic algorithm to perform automated test assembly alongside CDMs. The algorithm is flexible in that it can be applied whether the goal is to…

  8. Weak gauge boson radiation in parton showers

    NASA Astrophysics Data System (ADS)

    Christiansen, Jesper R.; Sjöstrand, Torbjörn

    2014-04-01

    The emission of W and Z gauge bosons off quarks is included in a traditional QCD + QED shower. The unitarity of the shower algorithm links the real radiation of the weak gauge bosons to the negative weak virtual corrections. The shower evolution process leads to a competition between QCD, QED and weak radiation, and allows for W and Z boson production inside jets. Various effects on LHC physics are studied, both at low and high transverse momenta, and effects at higher-energy hadron colliders are outlined.

  9. Weak interactions, omnivory and emergent food-web properties.

    PubMed

    Emmerson, Mark; Yearsley, Jon M

    2004-02-22

    Empirical studies have shown that, in real ecosystems, species-interaction strengths are generally skewed in their distribution towards weak interactions. Some theoretical work also suggests that weak interactions, especially in omnivorous links, are important for the local stability of a community at equilibrium. However, the majority of theoretical studies use uniform distributions of interaction strengths to generate artificial communities for study. We investigate the effects of the underlying interaction-strength distribution upon the return time, permanence and feasibility of simple Lotka-Volterra equilibrium communities. We show that a skew towards weak interactions promotes local and global stability only when omnivory is present. It is found that skewed interaction strengths are an emergent property of stable omnivorous communities, and that this skew towards weak interactions creates a dynamic constraint maintaining omnivory. Omnivory is more likely to occur when omnivorous interactions are skewed towards weak interactions. However, a skew towards weak interactions increases the return time to equilibrium, delays the recovery of ecosystems and hence decreases the stability of a community. When no skew is imposed, the set of stable omnivorous communities shows an emergent distribution of skewed interaction strengths. Our results apply to both local and global concepts of stability and are robust to the definition of a feasible community. These results are discussed in the light of empirical data and other theoretical studies, in conjunction with their broader implications for community assembly.

  10. Cryptanalysis of "an improvement over an image encryption method based on total shuffling"

    NASA Astrophysics Data System (ADS)

    Akhavan, A.; Samsudin, A.; Akhshani, A.

    2015-09-01

    In the past two decades, several image encryption algorithms based on chaotic systems have been proposed. Many of the proposed algorithms are meant to improve other chaos-based and conventional cryptographic algorithms. However, many of the proposed improvement methods suffer from serious security problems. In this paper, the security of the recently proposed improvement method for a chaos-based image encryption algorithm is analyzed. The results indicate the weakness of the analyzed algorithm against chosen plaintext attacks.

  11. An efficient identification approach for stable and unstable nonlinear systems using Colliding Bodies Optimization algorithm.

    PubMed

    Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P

    2015-11-01

    This paper presents an efficient approach to identify different stable and practically useful Hammerstein models, as well as an unstable nonlinear process along with its stable closed-loop counterpart, with the help of an evolutionary algorithm, the Colliding Bodies Optimization (CBO) algorithm. The performance measures of the CBO-based optimization approach, such as precision and accuracy, are justified by the minimum output mean square error (MSE), which signifies that the amounts of bias and variance in the output domain are also the least. It is also observed that optimizing the output MSE in the presence of outliers consistently yields very close estimates of the output parameters, which justifies the general applicability of the CBO algorithm to the system identification problem and establishes the practical usefulness of the applied approach. The optimum MSE values, computational times, and statistical properties of the MSEs are all found to be superior to those of other, similar stochastic-algorithm-based approaches reported in the recent literature, which establishes the robustness and efficiency of the applied CBO-based identification scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  12. Learning Latent Variable and Predictive Models of Dynamical Systems

    DTIC Science & Technology

    2009-10-01

    [Figure-caption fragments from the original document: a synthesized sequence remains stable over the full 1000-frame image sequence without significant damping; samples drawn from least-squares synthesized sequences; comparison of the LDS stabilizing algorithms LB-1 and LB-2, with bars every 20 timesteps denoting variance in the results; CG provides the best stable short-term predictions.] This thesis contributes (1) novel learning algorithms for existing dynamical system models that overcome significant limitations of previous…

  13. Decryption of pure-position permutation algorithms.

    PubMed

    Zhao, Xiao-Yu; Chen, Gang; Zhang, Dan; Wang, Xiao-Hong; Dong, Guang-Chang

    2004-07-01

    Pure position permutation image encryption algorithms, commonly used for image encryption and investigated in this work, are unfortunately frail under known-plaintext attack. In view of this weakness of pure position permutation algorithms, we put forward an effective decryption algorithm for all pure-position permutation algorithms. First, a summary of the pure position permutation image encryption algorithms is given by introducing the concept of ergodic matrices. Then, by using probability theory and algebraic principles, the decryption probability of pure-position permutation algorithms is verified theoretically; by defining the operation system of fuzzy ergodic matrices, we then improve a specific decryption algorithm. Finally, some simulation results are shown.
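
    To see concretely why such ciphers fail under a known-plaintext attack, here is a minimal sketch with hypothetical toy data (it does not reproduce the paper's fuzzy-ergodic-matrix machinery): when pixel values are mostly distinct, a single plaintext/ciphertext pair pins down the secret permutation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n = 16
    perm = rng.permutation(n)                        # secret position permutation
    plain = rng.choice(256, size=n, replace=False)   # known plaintext, distinct values
    cipher = plain[perm]                             # intercepted ciphertext

    # Attack: match each ciphertext pixel back to its plaintext position by value.
    # Repeated values would need a few more known pairs to disambiguate.
    recovered = np.array([np.flatnonzero(plain == v)[0] for v in cipher])

    print(np.array_equal(recovered, perm))           # True: permutation recovered
    ```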

  14. Root (Botany)

    Treesearch

    Robert R. Ziemer

    1981-01-01

    Plant roots can contribute significantly to the stability of steep slopes. They can anchor through the soil mass into fractures in bedrock, can cross zones of weakness to more stable soil, and can provide interlocking long fibrous binders within a weak soil mass. In deep soil, anchoring to bedrock becomes negligible, and lateral reinforcement predominates.

  15. Alternative method of quantum state tomography toward a typical target via a weak-value measurement

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Dai, Hong-Yi; Yang, Le; Zhang, Ming

    2018-03-01

    There is usually a limitation of weak interaction on the application of weak-value measurement. This limitation dominates the performance of quantum state tomography toward a typical target in the finite- and high-dimensional complex-valued superposition of its basis states, especially when the compressive sensing technique is also employed. Here we propose an alternative method of quantum state tomography, presented as a general model, toward such a typical target via weak-value measurement to overcome this limitation. In this model the pointer for the weak-value measurement is a qubit, and the target-pointer coupling interaction is no longer restricted to the weak-interaction limit; meanwhile, under compressive sensing this interaction can be described by the Taylor series of the unitary evolution operator. The postselection state at the target is the equal superposition of all basis states, and the pointer readouts are gathered under multiple Pauli operator measurements. The reconstructed quantum state is generated by a total variation augmented Lagrangian alternating direction optimization algorithm. Furthermore, we demonstrate an example of this general model for quantum state tomography of the planar laser-energy distribution and discuss the relations among some parameters of both our general model and the original first-order approximate model for this tomography.

  16. An experimental SMI adaptive antenna array simulator for weak interfering signals

    NASA Technical Reports Server (NTRS)

    Dilsavor, Ronald S.; Gupta, Inder J.

    1991-01-01

    An experimental sample matrix inversion (SMI) adaptive antenna array for suppressing weak interfering signals is described. The experimental adaptive array uses a modified SMI algorithm to increase the interference suppression. In the modified SMI algorithm, the sample covariance matrix is redefined to reduce the effect of thermal noise on the weights of an adaptive array. This is accomplished by subtracting a fraction of the smallest eigenvalue of the original covariance matrix from its diagonal entries. The test results obtained using the experimental system are compared with theoretical results. The two show good agreement.
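
    A minimal numpy sketch of the covariance modification described above (the array size, snapshot count, signal scenario, and eigenvalue fraction are illustrative assumptions): a fraction of the smallest eigenvalue of the sample covariance is subtracted from its diagonal before the adaptive weights are formed.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    M, K = 8, 200                                  # array elements, snapshots
    steer = np.ones(M, dtype=complex)              # look direction at broadside

    # Snapshots: a weak interferer from 30 degrees plus thermal noise.
    a_i = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(30)))
    X = (0.1 * a_i[:, None] * rng.standard_normal(K)
         + 0.5 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))))

    R = X @ X.conj().T / K                         # standard SMI sample covariance

    # Modified SMI: shrink the noise floor (the fraction 0.9 is illustrative).
    lam_min = np.linalg.eigvalsh(R)[0]
    R_mod = R - 0.9 * lam_min * np.eye(M)

    for name, Rm in [("SMI", R), ("modified SMI", R_mod)]:
        w = np.linalg.solve(Rm, steer)
        rejection = abs(w.conj() @ a_i) / abs(w.conj() @ steer)
        print(f"{name}: interferer response relative to look direction = {rejection:.3e}")
    ```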

  17. Sensitivity study of voxel-based PET image comparison to image registration algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, Stephen, E-mail: syip@lroc.harvard.edu; Chen, Aileen B.; Berbeco, Ross

    2014-11-01

    Purpose: Accurate deformable registration is essential for voxel-based comparison of sequential positron emission tomography (PET) images for proper adaptation of treatment plan and treatment response assessment. The comparison may be sensitive to the method of deformable registration as the optimal algorithm is unknown. This study investigated the impact of registration algorithm choice on therapy response evaluation. Methods: Sixteen patients with 20 lung tumors underwent pre- and post-treatment computed tomography (CT) and 4D FDG-PET scans before and after chemoradiotherapy. All CT images were coregistered using a rigid and ten deformable registration algorithms. The resulting transformations were then applied to the respective PET images. Moreover, the tumor region defined by a physician on the registered PET images was classified into progressor, stable-disease, and responder subvolumes. Particularly, voxels with standardized uptake value (SUV) decreases >30% were classified as responder, while voxels with SUV increases >30% were progressor. All other voxels were considered stable-disease. The agreement of the subvolumes resulting from different registration algorithms was assessed by the Dice similarity index (DSI). The coefficient of variation (CV) was computed to assess variability of DSI between individual tumors. The root mean square difference (RMS_rigid) of the rigidly registered CT images was used to measure the degree of tumor deformation. RMS_rigid and DSI were correlated by the Spearman correlation coefficient (R) to investigate the effect of tumor deformation on DSI. Results: Median DSI_rigid was found to be 72%, 66%, and 80% for progressor, stable-disease, and responder, respectively. Median DSI_deformable was 63%-84%, 65%-81%, and 82%-89%. Variability of DSI was substantial and similar for both rigid and deformable algorithms, with CV > 10% for all subvolumes. Tumor deformation had moderate to significant impact on DSI for the progressor subvolume, with R_rigid = -0.60 (p = 0.01) and R_deformable = -0.46 (p = 0.01-0.20), averaging over all deformable algorithms. For stable-disease subvolumes, the correlations were significant (p < 0.001) for all registration algorithms, with R_rigid = -0.71 and R_deformable = -0.72. Progressor and stable-disease subvolumes resulting from rigid registration were in excellent agreement (DSI > 70%) for RMS_rigid < 150 HU. However, tumor deformation was observed to have negligible effect on DSI for responder subvolumes, with insignificant |R| < 0.26, p > 0.27. Conclusions: This study demonstrated that deformable algorithms cannot be arbitrarily chosen; different deformable algorithms can result in large differences in voxel-based PET image comparison. For low tumor deformation (RMS_rigid < 150 HU), rigid and deformable algorithms yield similar results, suggesting deformable registration is not required for these cases.
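
    The 30% SUV-change classification rule and the Dice similarity index are simple to state in code; the sketch below applies them to synthetic SUV maps standing in for two differently registered post-treatment images (all data and noise levels are made-up assumptions).

    ```python
    import numpy as np

    def classify_response(suv_pre, suv_post):
        """+1 progressor (>30% SUV increase), -1 responder (>30% decrease),
        0 stable-disease, per voxel."""
        change = (suv_post - suv_pre) / suv_pre
        labels = np.zeros(suv_pre.shape, dtype=int)
        labels[change > 0.30] = 1
        labels[change < -0.30] = -1
        return labels

    def dice(mask_a, mask_b):
        inter = np.logical_and(mask_a, mask_b).sum()
        return 2.0 * inter / (mask_a.sum() + mask_b.sum())

    rng = np.random.default_rng(3)
    pre = rng.uniform(2.0, 8.0, size=(32, 32))
    post_a = pre * rng.uniform(0.5, 1.5, size=pre.shape)   # "registration A"
    post_b = post_a + rng.normal(0, 0.2, size=pre.shape)   # "registration B"

    la, lb = classify_response(pre, post_a), classify_response(pre, post_b)
    for name, v in [("progressor", 1), ("stable-disease", 0), ("responder", -1)]:
        print(f"{name}: DSI = {dice(la == v, lb == v):.2f}")
    ```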

  18. Discovering protein complexes in protein interaction networks via exploring the weak ties effect

    PubMed Central

    2012-01-01

    Background Studying protein complexes is very important in biological processes since it helps reveal the structure-functionality relationships in biological networks, and much attention has been paid to accurately predicting protein complexes from the increasing amount of protein-protein interaction (PPI) data. Most of the available algorithms are based on the assumption that dense subgraphs correspond to complexes, failing to take into account the inherent organization within protein complexes and the roles of edges. Thus, there is a critical need to investigate the possibility of discovering protein complexes using the topological information hidden in edges. Results To provide an investigation of the roles of edges in PPI networks, we show that the edges connecting less similar vertices in topology are more significant in maintaining the global connectivity, indicating the weak ties phenomenon in PPI networks. We further demonstrate that there is a negative relation between the weak tie strength and the topological similarity. By using the bridges, a reliable virtual network is constructed, in which each maximal clique corresponds to the core of a complex. By this notion, the detection of the protein complexes is transformed into a classic all-clique problem. A novel core-attachment based method is developed, which detects the cores and attachments, respectively. A comprehensive comparison among the existing algorithms and our algorithm has been made by comparing the predicted complexes against benchmark complexes. Conclusions We proved that the weak tie effect exists in the PPI network and demonstrated that density is insufficient to characterize the topological structure of protein complexes. Furthermore, the experimental results on the yeast PPI network show that the proposed method outperforms the state-of-the-art algorithms. The analysis of the modules detected by the present algorithm suggests that most of these modules have clear biological significance in the context of complexes, suggesting that the roles of edges are critical in discovering protein complexes. PMID:23046740
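
    The following networkx sketch is a loose illustration of the weak-ties idea, not the paper's exact core-attachment method: each edge is scored by the Jaccard similarity of its endpoints' neighbourhoods, low-similarity edges are treated as weak ties and removed, and maximal cliques among the remaining strong edges serve as candidate complex cores (the threshold and toy graph are assumptions).

    ```python
    import networkx as nx

    def edge_similarity(G, u, v):
        """Jaccard similarity of the endpoints' closed neighbourhoods."""
        Nu, Nv = set(G[u]) | {u}, set(G[v]) | {v}
        return len(Nu & Nv) / len(Nu | Nv)

    def candidate_cores(G, weak_tie_threshold=0.4, min_size=3):
        strong = nx.Graph()
        strong.add_nodes_from(G)
        for u, v in G.edges():
            if edge_similarity(G, u, v) >= weak_tie_threshold:  # drop weak ties
                strong.add_edge(u, v)
        return [c for c in nx.find_cliques(strong) if len(c) >= min_size]

    # Two dense modules joined by a single low-similarity bridge edge (4, 5).
    G = nx.Graph([(1, 2), (1, 3), (2, 3), (2, 4), (3, 4),
                  (5, 6), (5, 7), (6, 7), (6, 8), (7, 8),
                  (4, 5)])
    print(candidate_cores(G))   # e.g. [[1, 2, 3], [2, 3, 4], [5, 6, 7], [6, 7, 8]]
    ```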

  19. Efficient and Stable Routing Algorithm Based on User Mobility and Node Density in Urban Vehicular Network.

    PubMed

    Al-Mayouf, Yusor Rafid Bahar; Ismail, Mahamod; Abdullah, Nor Fadzilah; Wahab, Ainuddin Wahid Abdul; Mahdi, Omar Adil; Khan, Suleman; Choo, Kim-Kwang Raymond

    2016-01-01

    Vehicular ad hoc networks (VANETs) are considered an emerging technology in the industrial and educational fields. This technology is essential in the deployment of the intelligent transportation system, which is targeted to improve safety and efficiency of traffic. The implementation of VANETs can be effectively executed by transmitting data among vehicles with the use of multiple hops. However, the intrinsic characteristics of VANETs, such as their dynamic network topology and intermittent connectivity, limit data delivery. One particular challenge of this network is the possibility that the contributing node may only remain in the network for a limited time. Hence, to prevent data loss from that node, the information must reach the destination node via multi-hop routing techniques. An appropriate, efficient, and stable routing algorithm must be developed for various VANET applications to address the issues of dynamic topology and intermittent connectivity. Therefore, this paper proposes a novel routing algorithm called efficient and stable routing algorithm based on user mobility and node density (ESRA-MD). The proposed algorithm can adapt to significant changes that may occur in the urban vehicular environment. This algorithm works by selecting an optimal route on the basis of hop count and link duration for delivering data from source to destination, thereby satisfying various quality of service considerations. The validity of the proposed algorithm is investigated by its comparison with the ARP-QD protocol, which works on the mechanism of optimal route finding in VANETs in urban environments. Simulation results reveal that the proposed ESRA-MD algorithm shows remarkable improvement in terms of delivery ratio, delivery delay, and communication overhead.

  20. Some issues related to the novel spectral acceleration method for the fast computation of radiation/scattering from one-dimensional extremely large scale quasi-planar structures

    NASA Astrophysics Data System (ADS)

    Torrungrueng, Danai; Johnson, Joel T.; Chou, Hsi-Tseng

    2002-03-01

    The novel spectral acceleration (NSA) algorithm has been shown to produce an O(N_tot)-efficient iterative method of moments for the computation of radiation/scattering from both one-dimensional (1-D) and two-dimensional large-scale quasi-planar structures, where N_tot is the total number of unknowns to be solved. This method accelerates the matrix-vector multiplication in an iterative method of moments solution and divides contributions between points into "strong" (exact matrix elements) and "weak" (NSA algorithm) regions. The NSA method is based on a spectral representation of the electromagnetic Green's function and appropriate contour deformation, resulting in a fast multipole-like formulation in which contributions from large numbers of points to a single point are evaluated simultaneously. In the standard NSA algorithm the NSA parameters are derived on the basis of the assumption that the outermost possible saddle point, φ_s,max, along the real axis in the complex angular domain is small. For given height variations of quasi-planar structures, this assumption can be satisfied by adjusting the size of the strong region, L_s. However, for quasi-planar structures with large height variations, the adjusted size of the strong region is typically large, resulting in significant increases in computational time for the computation of the strong-region contribution and degrading the overall efficiency of the NSA algorithm. In addition, for the case of extremely large scale structures, studies based on the physical optics approximation and a flat surface assumption show that the given NSA parameters in the standard NSA algorithm may yield inaccurate results. In this paper, analytical formulas associated with the NSA parameters for an arbitrary value of φ_s,max are presented, resulting in more flexibility in selecting L_s to compromise between the computation of the contributions of the strong and weak regions. In addition, a "multilevel" algorithm, decomposing 1-D extremely large scale quasi-planar structures into more than one weak region and appropriately choosing the NSA parameters for each weak region, is incorporated into the original NSA method to improve its accuracy.

  1. An unconditionally stable staggered algorithm for transient finite element analysis of coupled thermoelastic problems

    NASA Technical Reports Server (NTRS)

    Farhat, C.; Park, K. C.; Dubois-Pelerin, Y.

    1991-01-01

    An unconditionally stable second order accurate implicit-implicit staggered procedure for the finite element solution of fully coupled thermoelasticity transient problems is proposed. The procedure is stabilized with a semi-algebraic augmentation technique. A comparative cost analysis reveals the superiority of the proposed computational strategy to other conventional staggered procedures. Numerical examples of one- and two-dimensional thermomechanical coupled problems demonstrate the accuracy of the proposed numerical solution algorithm.

  2. Effective search for stable segregation configurations at grain boundaries with data-mining techniques

    NASA Astrophysics Data System (ADS)

    Kiyohara, Shin; Mizoguchi, Teruyasu

    2018-03-01

    Grain boundary segregation of dopants plays a crucial role in materials properties. To investigate the dopant segregation behavior at the grain boundary, an enormous number of combinations have to be considered in the segregation of multiple dopants at the complex grain boundary structures. Here, two data mining techniques, the random-forests regression and the genetic algorithm, were applied to determine stable segregation sites at grain boundaries efficiently. Using the random-forests method, a predictive model was constructed from 2% of the segregation configurations and it has been shown that this model could determine the stable segregation configurations. Furthermore, the genetic algorithm also successfully determined the most stable segregation configuration with great efficiency. We demonstrate that these approaches are quite effective to investigate the dopant segregation behaviors at grain boundaries.

  3. Algorithms for optimization of the transport system in living and artificial cells.

    PubMed

    Melkikh, A V; Sutormina, M I

    2011-06-01

    An optimization of the transport system in a cell has been considered from the viewpoint of operations research. Algorithms are proposed for optimizing the transport system of a cell in terms of both efficiency and weak sensitivity of the cell to environmental changes. The switching between various transport systems is considered as the mechanism underlying a cell's weak sensitivity to changes in its environment. The use of the algorithms is illustrated with the optimization of a cardiac cell. We find theoretically that, for a cardiac muscle cell, an increase of potassium concentration in the environment triggers a switching of the transport systems for this ion. This conclusion qualitatively agrees with experiments. The problem of synthesizing an optimal transport system in an artificial cell has also been stated.

  4. Weak hydrogen bond topology in 1,1-difluoroethane dimer: A rotational study.

    PubMed

    Chen, Junhua; Zheng, Yang; Wang, Juan; Feng, Gang; Xia, Zhining; Gou, Qian

    2017-09-07

    The rotational spectrum of the 1,1-difluoroethane dimer has been investigated by pulsed-jet Fourier transform microwave spectroscopy. Two most stable isomers have been detected, which are both stabilized by a network of three C-H⋯F-C weak hydrogen bonds: in the most stable isomer, two difluoromethyl C-H groups and one methyl C-H group act as the weak proton donors whilst in the second isomer, two methyl C-H groups and one difluoromethyl C-H group act as the weak proton donors. For the global minimum, the measurements have also been extended to its four 13C isotopologues in natural abundance, allowing a precise, although partial, structural determination. Relative intensity measurements on a set of μa-type transitions allowed estimating the relative population ratio of the two isomers as NI/NII ∼ 6/1 in the pulsed jet, indicating a much larger energy gap between these two isomers than that expected from ab initio calculation, consistent with the result from pseudo-diatomic dissociation energies estimation.

  5. CCM Continuity Constraint Method: A finite-element computational fluid dynamics algorithm for incompressible Navier-Stokes fluid flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, P. T.

    1993-09-01

    As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible, CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state-variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H^1 Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.

  6. A Polynomial Time, Numerically Stable Integer Relation Algorithm

    NASA Technical Reports Server (NTRS)

    Ferguson, Helaman R. P.; Bailey, Daivd H.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Let x = (x_1, x_2, ..., x_n) be a vector of real numbers. x is said to possess an integer relation if there exist integers a_i, not all zero, such that a_1x_1 + a_2x_2 + ... + a_nx_n = 0. Beginning in 1977, several algorithms (with proofs) have been discovered to recover the a_i given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically. It often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
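
    The PSLQ algorithm is implemented in the mpmath library, so its behavior is easy to reproduce; the snippet below recovers two known relations (pslq may return the coefficient vector with the opposite overall sign).

    ```python
    from mpmath import mp, mpf, pslq, sqrt, log

    mp.dps = 50                              # work with 50 significant digits

    # The golden ratio satisfies phi^2 = phi + 1, i.e. 1 + phi - phi^2 = 0.
    phi = (1 + sqrt(5)) / 2
    print(pslq([mpf(1), phi, phi**2]))       # -> [1, 1, -1]

    # log(8) = 3*log(2): a relation among logarithms.
    print(pslq([log(2), log(8)]))            # -> [3, -1]
    ```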

  7. 2D deblending using the multi-scale shaping scheme

    NASA Astrophysics Data System (ADS)

    Li, Qun; Ban, Xingan; Gong, Renbin; Li, Jinnuo; Ge, Qiang; Zu, Shaohuan

    2018-01-01

    Deblending can be posed as an inversion problem, which is ill-posed and requires a constraint to obtain a unique and stable solution. In a blended record, signal is coherent, whereas interference is incoherent in some domains (e.g., the common receiver domain and common offset domain). Due to this difference in sparsity, the coefficients of signal and interference are located in different curvelet scale domains and have different amplitudes. Taking these two differences into account, we propose a 2D multi-scale shaping scheme that constrains sparsity to separate the blended record. In the domains where signal concentrates, the multi-scale scheme passes all the coefficients representing signal, while, in the domains where interference focuses, the multi-scale scheme suppresses the coefficients representing interference. Because the interference is evidently suppressed at each iteration, the constraints of the multi-scale shaping operator in all scale domains are weak, which guarantees the convergence of the algorithm. We evaluate the performance of the multi-scale shaping scheme and the traditional global shaping scheme using two synthetic examples and one field data example.

  8. Calculation of the Respiratory Modulation of the Photoplethysmogram (DPOP) Incorporating a Correction for Low Perfusion

    PubMed Central

    Addison, Paul S.; Wang, Rui; McGonigle, Scott J.; Bergese, Sergio D.

    2014-01-01

    DPOP quantifies respiratory modulations in the photoplethysmogram. It has been proposed as a noninvasive surrogate for pulse pressure variation (PPV) used in the prediction of the response to volume expansion in hypovolemic patients. The correlation between DPOP and PPV may degrade due to low perfusion effects. We implemented an automated DPOP algorithm with an optional correction for low perfusion. These two algorithm variants (DPOPa and DPOPb) were tested on data from 20 mechanically ventilated OR patients split into a benign “stable region” subset and a whole record “global set.” Strong correlation was found between DPOP and PPV for both algorithms when applied to the stable data set: R = 0.83/0.85 for DPOPa/DPOPb. However, a marked improvement was found when applying the low perfusion correction to the global data set: R = 0.47/0.73 for DPOPa/DPOPb. Sensitivities, Specificities, and AUCs were 0.86, 0.70, and 0.88 for DPOPa/stable region; 0.89, 0.82, and 0.92 for DPOPb/stable region; 0.81, 0.61, and 0.73 for DPOPa/global region; 0.83, 0.76, and 0.86 for DPOPb/global region. An improvement was found in all results across both data sets when using the DPOPb algorithm. Further, DPOPb showed marked improvements, both in terms of its values, and correlation with PPV, for signals exhibiting low percent modulations. PMID:25177348
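
    A minimal sketch of the DPOP computation on a synthetic pleth waveform, assuming the common PPV-style definition DPOP = (POPmax - POPmin) / ((POPmax + POPmin)/2) applied to beat-to-beat amplitudes; the signal model, rates, and peak-detection settings are illustrative, and the low perfusion correction discussed above is not included.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    fs = 100.0
    t = np.arange(0, 30, 1 / fs)

    # Synthetic pleth: ~1.2 Hz cardiac oscillation whose amplitude is
    # modulated at ~0.25 Hz by respiration (what DPOP quantifies).
    amp = 1.0 + 0.15 * np.sin(2 * np.pi * 0.25 * t)
    pleth = amp * np.sin(2 * np.pi * 1.2 * t)

    peaks, _ = find_peaks(pleth, distance=fs / 2)      # one peak per beat
    troughs, _ = find_peaks(-pleth, distance=fs / 2)
    n = min(len(peaks), len(troughs))
    beat_amp = pleth[peaks[:n]] - pleth[troughs[:n]]   # beat-to-beat AC amplitude

    pop_max, pop_min = beat_amp.max(), beat_amp.min()
    dpop = (pop_max - pop_min) / ((pop_max + pop_min) / 2)
    print(f"DPOP = {dpop:.1%}")   # ~30% for this toy modulation depth
    ```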

  9. Automatic protein structure solution from weak X-ray data

    NASA Astrophysics Data System (ADS)

    Skubák, Pavol; Pannu, Navraj S.

    2013-11-01

    Determining new protein structures from X-ray diffraction data at low resolution or with a weak anomalous signal is a difficult and often impossible task. Here we propose a multivariate algorithm that simultaneously combines the structure determination steps. In tests on over 140 real data sets from the Protein Data Bank, we show that this combined approach can automatically build models where current algorithms fail, including an anisotropically diffracting 3.88 Å RNA polymerase II data set. The method seamlessly automates the process, is ideal for non-specialists and provides a mathematical framework for successfully combining various sources of information in image processing.

  10. Improvement of coda phase detectability and reconstruction of global seismic data using frequency-wavenumber methods

    NASA Astrophysics Data System (ADS)

    Schneider, Simon; Thomas, Christine; Dokht, Ramin M. H.; Gu, Yu Jeffrey; Chen, Yunfeng

    2018-02-01

    Due to uneven earthquake source and receiver distributions, our abilities to isolate weak signals from interfering phases and reconstruct missing data are fundamental to improving the resolution of seismic imaging techniques. In this study, we introduce a modified frequency-wavenumber (fk) domain-based approach using a "Projection Onto Convex Sets" (POCS) algorithm. POCS takes advantage of the sparsity of the dominating energies of phase arrivals in the fk domain, which enables an effective detection and reconstruction of the weak seismic signals. Moreover, our algorithm utilizes the 2-D Fourier transform to perform noise removal, interpolation and weak-phase extraction. To improve the directional resolution of the reconstructed data, we introduce a band-stop 2-D Fourier filter to remove the energy of unwanted, interfering phases in the fk domain, which significantly increases the robustness of the signal of interest. The effectiveness and benefits of this method are clearly demonstrated using both simulated and actual broadband recordings of PP precursors from an array located in Tanzania. When used properly, this method could significantly enhance the resolution of weak crust and mantle seismic phases.
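
    A compact numpy sketch of POCS-style fk-domain reconstruction on a toy gather (the event geometry, decimation pattern, threshold rule, and iteration count are illustrative; the band-stop directional filter described above is omitted): each iteration hard-thresholds the 2-D Fourier spectrum to enforce sparsity, then re-inserts the observed traces.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic gather: one dipping linear event; 40% of traces dropped.
    nt, nx = 128, 64
    data = np.zeros((nt, nx))
    for ix in range(nx):
        it = 20 + ix // 2
        data[it - 2:it + 3, ix] = np.hanning(5)       # smooth wavelet on the event

    mask = rng.random(nx) > 0.4                       # observed-trace pattern
    observed = data * mask[None, :]

    x = observed.copy()
    for _ in range(50):
        F = np.fft.fft2(x)
        thresh = np.percentile(np.abs(F), 99)         # keep the strongest fk energy
        F[np.abs(F) < thresh] = 0.0
        x = np.real(np.fft.ifft2(F))
        x[:, mask] = observed[:, mask]                # POCS: re-insert known traces

    err = np.linalg.norm(x - data) / np.linalg.norm(data)
    print(f"relative reconstruction error after POCS: {err:.3f}")
    ```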

  11. Efficient and Stable Routing Algorithm Based on User Mobility and Node Density in Urban Vehicular Network

    PubMed Central

    Al-Mayouf, Yusor Rafid Bahar; Ismail, Mahamod; Abdullah, Nor Fadzilah; Wahab, Ainuddin Wahid Abdul; Mahdi, Omar Adil; Khan, Suleman; Choo, Kim-Kwang Raymond

    2016-01-01

    Vehicular ad hoc networks (VANETs) are considered an emerging technology in the industrial and educational fields. This technology is essential in the deployment of the intelligent transportation system, which is targeted to improve safety and efficiency of traffic. The implementation of VANETs can be effectively executed by transmitting data among vehicles with the use of multiple hops. However, the intrinsic characteristics of VANETs, such as their dynamic network topology and intermittent connectivity, limit data delivery. One particular challenge of this network is the possibility that the contributing node may only remain in the network for a limited time. Hence, to prevent data loss from that node, the information must reach the destination node via multi-hop routing techniques. An appropriate, efficient, and stable routing algorithm must be developed for various VANET applications to address the issues of dynamic topology and intermittent connectivity. Therefore, this paper proposes a novel routing algorithm called efficient and stable routing algorithm based on user mobility and node density (ESRA-MD). The proposed algorithm can adapt to significant changes that may occur in the urban vehicular environment. This algorithm works by selecting an optimal route on the basis of hop count and link duration for delivering data from source to destination, thereby satisfying various quality of service considerations. The validity of the proposed algorithm is investigated by its comparison with the ARP-QD protocol, which works on the mechanism of optimal route finding in VANETs in urban environments. Simulation results reveal that the proposed ESRA-MD algorithm shows remarkable improvement in terms of delivery ratio, delivery delay, and communication overhead. PMID:27855165

  12. Increasing signal processing sophistication in the calculation of the respiratory modulation of the photoplethysmogram (DPOP).

    PubMed

    Addison, Paul S; Wang, Rui; Uribe, Alberto A; Bergese, Sergio D

    2015-06-01

    DPOP (∆POP or Delta-POP) is a non-invasive parameter which measures the strength of respiratory modulations present in the pulse oximetry photoplethysmogram (pleth) waveform. It has been proposed as a non-invasive surrogate parameter for pulse pressure variation (PPV) used in the prediction of the response to volume expansion in hypovolemic patients. Many groups have reported on the DPOP parameter and its correlation with PPV using various semi-automated algorithmic implementations. The study reported here demonstrates the performance gains made by adding increasingly sophisticated signal processing components to a fully automated DPOP algorithm. A DPOP algorithm was coded and its performance systematically enhanced through a series of code module alterations and additions. Each algorithm iteration was tested on data from 20 mechanically ventilated OR patients. Correlation coefficients and ROC curve statistics were computed at each stage. For the purposes of the analysis we split the data into a manually selected 'stable' region subset of the data containing relatively noise free segments and a 'global' set incorporating the whole data record. Performance gains were measured in terms of correlation against PPV measurements in OR patients undergoing controlled mechanical ventilation. Through increasingly advanced pre-processing and post-processing enhancements to the algorithm, the correlation coefficient between DPOP and PPV improved from a baseline value of R = 0.347 to R = 0.852 for the stable data set, and, correspondingly, R = 0.225 to R = 0.728 for the more challenging global data set. Marked gains in algorithm performance are achievable for manually selected stable regions of the signals using relatively simple algorithm enhancements. Significant additional algorithm enhancements, including a correction for low perfusion values, were required before similar gains were realised for the more challenging global data set.

  13. Simulation Based Evaluation of Integrated Adaptive Control and Flight Planning Technologies

    NASA Technical Reports Server (NTRS)

    Campbell, Stefan Forrest; Kaneshige, John T.

    2008-01-01

    The objective of this work is to leverage NASA resources to enable effective evaluation of resilient aircraft technologies through simulation. This includes examining strengths and weaknesses of adaptive controllers, emergency flight planning algorithms, and flight envelope determination algorithms both individually and as an integrated package.

  14. [Tachycardia detection in implantable cardioverter-defibrillators by Sorin/LivaNova : Algorithms, pearls and pitfalls].

    PubMed

    Kolb, Christof; Ocklenburg, Rolf

    2016-09-01

    For physicians involved in the treatment of patients with implantable cardioverter-defibrillators (ICDs) the knowledge of tachycardia detection algorithms is of paramount importance. This knowledge is essential for adequate device selection during de-novo implantation, ICD replacement, and for troubleshooting during follow-up. This review describes tachycardia detection algorithms incorporated in ICDs by Sorin/LivaNova and analyses their strengths and weaknesses.

  15. Array design considerations for exploitation of stable weakly dispersive modal pulses in the deep ocean

    NASA Astrophysics Data System (ADS)

    Udovydchenkov, Ilya A.

    2017-07-01

    Modal pulses are broadband contributions to an acoustic wave field with fixed mode number. Stable weakly dispersive modal pulses (SWDMPs) are special modal pulses that are characterized by weak dispersion and weak scattering-induced broadening and are thus suitable for communications applications. This paper investigates, using numerical simulations, receiver array requirements for recovering information carried by SWDMPs under various signal-to-noise ratio conditions without performing channel equalization. Two groups of weakly dispersive modal pulses are common in typical mid-latitude deep ocean environments: the lowest order modes (typically modes 1-3 at 75 Hz), and intermediate order modes whose waveguide invariant is near-zero (often around mode 20 at 75 Hz). Information loss is quantified by the bit error rate (BER) of a recovered binary phase-coded signal. With fixed receiver depths, low BERs (less than 1%) are achieved at ranges up to 400 km with three hydrophones for mode 1 with 90% probability and with 34 hydrophones for mode 20 with 80% probability. With optimal receiver depths, depending on propagation range, only a few, sometimes only two, hydrophones are often sufficient for low BERs, even with intermediate mode numbers. Full modal resolution is unnecessary to achieve low BERs. Thus, a flexible receiver array of autonomous vehicles can outperform a cabled array.

  16. Measuring the self-similarity exponent in Lévy stable processes of financial time series

    NASA Astrophysics Data System (ADS)

    Fernández-Martínez, M.; Sánchez-Granero, M. A.; Trinidad Segovia, J. E.

    2013-11-01

    Geometric method-based procedures, which will be called GM algorithms herein, were introduced in [M.A. Sánchez Granero, J.E. Trinidad Segovia, J. García Pérez, Some comments on Hurst exponent and the long memory processes on capital markets, Phys. A 387 (2008) 5543-5551], to efficiently calculate the self-similarity exponent of a time series. In that paper, the authors showed empirically that these algorithms, based on a geometrical approach, are more accurate than the classical algorithms, especially for short time series. The authors checked that GM algorithms are good when working with (fractional) Brownian motions. Moreover, in [J.E. Trinidad Segovia, M. Fernández-Martínez, M.A. Sánchez-Granero, A note on geometric method-based procedures to calculate the Hurst exponent, Phys. A 391 (2012) 2209-2214], a mathematical background for the validity of such procedures to estimate the self-similarity index of any random process with stationary and self-affine increments was provided. In particular, they proved theoretically that GM algorithms are also valid to explore long memory in (fractional) Lévy stable motions. In this paper, we prove empirically by Monte Carlo simulation that GM algorithms are able to calculate accurately the self-similarity index in Lévy stable motions and find empirical evidence that they are more precise than the absolute value exponent (denoted by AVE onwards) and the multifractal detrended fluctuation analysis (MF-DFA) algorithms, especially for short time series. We also compare them with the generalized Hurst exponent (GHE) algorithm and conclude that both the GM2 and GHE algorithms are the most accurate to study financial series. In addition to that, we provide empirical evidence, based on the accuracy of GM algorithms to estimate the self-similarity index in Lévy motions, that the evolution of the stocks of some international market indices, such as U.S. Small Cap and Nasdaq100, cannot be modeled by means of a Brownian motion.
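
    As a compact point of reference for the GHE algorithm mentioned above (the GM algorithms themselves are geometric and are not reproduced here), the sketch below estimates the self-similarity exponent from the scaling of mean absolute increments, E|x(t+τ) − x(t)|^q ∝ τ^(qH), with q = 1 and an illustrative lag range; it recovers H ≈ 0.5 for Brownian motion.

    ```python
    import numpy as np

    def ghe(x, taus=range(1, 20), q=1.0):
        """Generalized Hurst exponent: fit E|x(t+tau) - x(t)|^q ~ tau^(q*H)."""
        taus = np.asarray(list(taus))
        m = np.array([np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus])
        return np.polyfit(np.log(taus), np.log(m), 1)[0] / q

    rng = np.random.default_rng(5)
    bm = np.cumsum(rng.standard_normal(20000))   # Brownian motion, H = 0.5
    print(f"estimated H for Brownian motion: {ghe(bm):.3f}")
    ```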

  17. Head-to-Head Comparison of Two Popular Cortical Thickness Extraction Algorithms: A Cross-Sectional and Longitudinal Study

    PubMed Central

    Redolfi, Alberto; Manset, David; Barkhof, Frederik; Wahlund, Lars-Olof; Glatard, Tristan; Mangin, Jean-François; Frisoni, Giovanni B.

    2015-01-01

    Background and Purpose The measurement of cortical shrinkage is a candidate marker of disease progression in Alzheimer’s. This study evaluated the performance of two pipelines: Civet-CLASP (v1.1.9) and Freesurfer (v5.3.0). Methods Images from 185 ADNI1 cases (69 elderly controls (CTR), 37 stable MCI (sMCI), 27 progressive MCI (pMCI), and 52 Alzheimer (AD) patients) scanned at baseline, month 12, and month 24 were processed using the two pipelines and two interconnected e-infrastructures: neuGRID (https://neugrid4you.eu) and VIP (http://vip.creatis.insa-lyon.fr). The vertex-by-vertex cross-algorithm comparison was made possible applying the 3D gradient vector flow (GVF) and closest point search (CPS) techniques. Results The cortical thickness measured with Freesurfer was systematically lower by one third if compared to Civet’s. Cross-sectionally, Freesurfer’s effect size was significantly different in the posterior division of the temporal fusiform cortex. Both pipelines were weakly or mildly correlated with the Mini Mental State Examination score (MMSE) and the hippocampal volumetry. Civet differed significantly from Freesurfer in large frontal, parietal, temporal and occipital regions (p<0.05). In a discriminant analysis with cortical ROIs having effect size larger than 0.8, both pipelines gave no significant differences in area under the curve (AUC). Longitudinally, effect sizes were not significantly different in any of the 28 ROIs tested. Both pipelines weakly correlated with MMSE decay, showing no significant differences. Freesurfer mildly correlated with hippocampal thinning rate and differed in the supramarginal gyrus, temporal gyrus, and in the lateral occipital cortex compared to Civet (p<0.05). In a discriminant analysis with ROIs having effect size larger than 0.6, both pipelines yielded no significant differences in the AUC. Conclusions Civet appears slightly more sensitive to the typical AD atrophic pattern at the MCI stage, but both pipelines can accurately characterize the topography of cortical thinning at the dementia stage. PMID:25781983

  18. A scalable, fully implicit algorithm for the reduced two-field low-β extended MHD model

    DOE PAGES

    Chacon, Luis; Stanier, Adam John

    2016-12-01

    Here, we demonstrate a scalable fully implicit algorithm for the two-field low-β extended MHD model. This reduced model describes plasma behavior in the presence of strong guide fields, and is of significant practical impact both in nature and in laboratory plasmas. The model displays strong hyperbolic behavior, as manifested by the presence of fast dispersive waves, which make a fully implicit treatment very challenging. In this study, we employ a Jacobian-free Newton–Krylov nonlinear solver, for which we propose a physics-based preconditioner that renders the linearized set of equations suitable for inversion with multigrid methods. As a result, the algorithm is shown to scale both algorithmically (i.e., the iteration count is insensitive to grid refinement and timestep size) and in parallel in a weak-scaling sense, with the wall-clock time scaling weakly with the number of cores for up to 4096 cores. For a 4096 × 4096 mesh, we demonstrate a wall-clock-time speedup of ~6700 with respect to explicit algorithms. The model is validated linearly (against linear theory predictions) and nonlinearly (against fully kinetic simulations), demonstrating excellent agreement.
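
    SciPy ships a Jacobian-free Newton-Krylov solver, which makes the basic JFNK mechanism easy to demonstrate on a toy nonlinear boundary-value problem (the problem, grid, and tolerance are illustrative assumptions; the paper's physics-based multigrid preconditioner is not reproduced here).

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    n = 64
    h = 1.0 / (n + 1)

    def residual(u):
        """Nonlinear reaction-diffusion: u'' + u^2 = 1 with u(0) = u(1) = 0."""
        upad = np.concatenate(([0.0], u, [0.0]))
        lap = (upad[:-2] - 2 * upad[1:-1] + upad[2:]) / h**2
        return lap + u**2 - 1.0

    # Jacobian-free: matrix-vector products come from finite differences of
    # residual(); the inner linear solves use a Krylov method (LGMRES here).
    u = newton_krylov(residual, np.zeros(n), f_tol=1e-10)
    print("max |residual| =", np.max(np.abs(residual(u))))
    ```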

  19. Improving Core Strength to Prevent Injury

    ERIC Educational Resources Information Center

    Oliver, Gretchen D.; Adams-Blair, Heather R.

    2010-01-01

    Regardless of the sport or skill, it is essential to have correct biomechanical positioning, or postural control, in order to maximize energy transfer. Correct postural control requires a strong, stable core. A strong and stable core allows one to transfer energy effectively as well as reduce undue stress. An unstable or weak core, on the other…

  20. The role of vegetation in the stability of forested slopes

    Treesearch

    Robert R. Ziemer

    1981-01-01

    Summary - Vegetation helps stabilize forested slopes by providing root strength and by modifying the saturated soil water regime. Plant roots can anchor through the soil mass into fractures in bedrock, can cross zones of weakness to more stable soil, and can provide interlocking long fibrous binders within a weak soil mass. In Mediterranean-type climates, having warm...

  1. Online Adaboost-Based Parameterized Methods for Dynamic Distributed Network Intrusion Detection.

    PubMed

    Hu, Weiming; Gao, Jun; Wang, Yanguo; Wu, Ou; Maybank, Stephen

    2014-01-01

    Current network intrusion detection systems lack adaptability to the frequently changing network environments. Furthermore, intrusion detection in the new distributed architectures is now a major requirement. In this paper, we propose two online Adaboost-based intrusion detection algorithms. In the first algorithm, a traditional online Adaboost process is used where decision stumps are used as weak classifiers. In the second algorithm, an improved online Adaboost process is proposed, and online Gaussian mixture models (GMMs) are used as weak classifiers. We further propose a distributed intrusion detection framework, in which a local parameterized detection model is constructed in each node using the online Adaboost algorithm. A global detection model is constructed in each node by combining the local parametric models using a small number of samples in the node. This combination is achieved using an algorithm based on particle swarm optimization (PSO) and support vector machines. The global model in each node is used to detect intrusions. Experimental results show that the improved online Adaboost process with GMMs obtains a higher detection rate and a lower false alarm rate than the traditional online Adaboost process that uses decision stumps. Both algorithms outperform existing intrusion detection algorithms. It is also shown that our PSO- and SVM-based algorithm effectively combines the local detection models into the global model in each node; the global model in a node can handle the intrusion types that are found in other nodes, without sharing the samples of these intrusion types.
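
    For readers unfamiliar with the boosting machinery the paper builds on, here is a minimal batch AdaBoost with decision stumps as the weak classifiers (toy two-feature data stands in for network records; the paper's online updates and GMM weak classifiers are not reproduced).

    ```python
    import numpy as np

    def train_stump(X, y, w):
        """Best single-feature threshold classifier under sample weights w."""
        best = (0, 0.0, 1, np.inf)                   # feature, threshold, polarity, error
        for f in range(X.shape[1]):
            for thr in np.unique(X[:, f]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
                    err = np.sum(w[pred != y])
                    if err < best[3]:
                        best = (f, thr, pol, err)
        return best

    def adaboost(X, y, rounds=20):
        w = np.full(len(y), 1.0 / len(y))
        ensemble = []
        for _ in range(rounds):
            f, thr, pol, err = train_stump(X, y, w)
            alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
            pred = np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
            w *= np.exp(-alpha * y * pred)           # upweight the mistakes
            w /= w.sum()
            ensemble.append((alpha, f, thr, pol))
        return ensemble

    def predict(ensemble, X):
        score = sum(a * np.where(p * (X[:, f] - t) >= 0, 1, -1)
                    for a, f, t, p in ensemble)
        return np.sign(score)

    rng = np.random.default_rng(6)
    X = rng.standard_normal((300, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)       # toy "normal vs. attack" labels
    model = adaboost(X, y)
    print("training accuracy:", np.mean(predict(model, X) == y))
    ```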

  2. Hybrid cryptosystem RSA - CRT optimization and VMPC

    NASA Astrophysics Data System (ADS)

    Rahmadani, R.; Mawengkang, H.; Sutarman

    2018-03-01

    A hybrid cryptosystem combines symmetric algorithms and asymmetric algorithms, exploiting the encryption/decryption speed of symmetric algorithms while using asymmetric algorithms to secure the symmetric keys. In this paper we propose a hybrid cryptosystem that combines the symmetric algorithm VMPC with the asymmetric algorithm RSA with CRT optimization. The RSA-CRT optimization speeds up the decryption process by obtaining the plaintext with the dp and p keys only, so there is no need to perform the full CRT recombination. The VMPC algorithm is more efficient in software implementations and reduces known weaknesses in RC4 key generation. The results show that the hybrid cryptosystem combining RSA-CRT optimization with VMPC is faster than the hybrid cryptosystems RSA-VMPC and RSA-CRT-VMPC. Keywords: Cryptography, RSA, RSA-CRT, VMPC, Hybrid Cryptosystem.
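
    The CRT decryption speedup, and the further shortcut of decrypting with dp and p alone, can be shown with textbook RSA (tiny, insecure primes, for illustration only; requires Python 3.8+ for modular inverses via pow).

    ```python
    # Textbook RSA with CRT decryption.
    p, q = 61, 53
    n, e = p * q, 17
    d = pow(e, -1, (p - 1) * (q - 1))     # private exponent

    dp, dq = d % (p - 1), d % (q - 1)     # CRT exponents
    q_inv = pow(q, -1, p)

    def decrypt_crt(c):
        m1 = pow(c, dp, p)                # two half-size exponentiations:
        m2 = pow(c, dq, q)                # the source of the CRT speedup
        h = (q_inv * (m1 - m2)) % p       # Garner recombination
        return m2 + h * q

    m = 42
    c = pow(m, e, n)
    print(decrypt_crt(c))                 # -> 42

    # The optimization described above goes further: when m < p is guaranteed,
    # pow(c, dp, p) alone already recovers the plaintext, skipping recombination.
    print(pow(c, dp, p))                  # -> 42 here, since 42 < 61
    ```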

  3. Stable orthogonal local discriminant embedding for linear dimensionality reduction.

    PubMed

    Gao, Quanxue; Ma, Jingjie; Zhang, Hailin; Gao, Xinbo; Liu, Yamin

    2013-07-01

    Manifold learning is widely used in machine learning and pattern recognition. However, manifold learning only considers the similarity of samples belonging to the same class and ignores the within-class variation of the data, which impairs the generalization and stability of the algorithms. To address this, we construct an adjacency graph to model the intraclass variation that characterizes the most important properties, such as diversity of patterns, and then incorporate the diversity into the discriminant objective function for linear dimensionality reduction. Finally, we introduce the orthogonal constraint for the basis vectors and propose an orthogonal algorithm called stable orthogonal local discriminant embedding. Experimental results on several standard image databases demonstrate the effectiveness of the proposed dimensionality reduction approach.

  4. Stable and accurate methods for identification of water bodies from Landsat series imagery using meta-heuristic algorithms

    NASA Astrophysics Data System (ADS)

    Gamshadzaei, Mohammad Hossein; Rahimzadegan, Majid

    2017-10-01

    Identification of water extents in Landsat images is challenging due to surfaces with similar reflectance to water extents. The objective of this study is to provide stable and accurate methods for identifying water extents in Landsat images based on meta-heuristic algorithms. Seven Landsat images were selected from various environmental regions in Iran. Training of the algorithms was performed using 40 water pixels and 40 nonwater pixels in Operational Land Imager images of Chitgar Lake (one of the study regions). Moreover, high-resolution images from Google Earth were digitized to evaluate the results. Two approaches were considered: index-based and artificial intelligence (AI) algorithms. In the first approach, nine common water spectral indices were investigated. AI algorithms were utilized to acquire coefficients of optimal band combinations to extract water extents. Among the AI algorithms, the artificial neural network algorithm and the ant colony optimization, genetic algorithm, and particle swarm optimization (PSO) meta-heuristic algorithms were implemented. Index-based methods showed different performances in various regions. Among the AI methods, PSO had the best performance, with average overall accuracy and kappa coefficient of 93% and 98%, respectively. The results indicated the applicability of the acquired band combinations to extract water extents accurately and stably from Landsat imagery.

  5. Evaluating some computer enhancement algorithms that improve the visibility of cometary morphology

    NASA Technical Reports Server (NTRS)

    Larson, Stephen M.; Slaughter, Charles D.

    1992-01-01

    Digital enhancement of cometary images is a necessary tool in studying cometary morphology. Many image processing algorithms, some developed specifically for comets, have been used to enhance the subtle, low contrast coma and tail features. We compare some of the most commonly used algorithms on two different images to evaluate their strong and weak points, and conclude that there currently exists no single 'ideal' algorithm, although the radial gradient spatial filter gives the best overall result. This comparison should aid users in selecting the best algorithm to enhance particular features of interest.
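
    In the spirit of the radial gradient filter singled out above (a simplified stand-in, not the published filter), the sketch below subtracts the azimuthally averaged brightness at each radius around the nucleus, flattening the steep coma gradient so faint asymmetric features such as jets stand out; the toy comet model is an assumption.

    ```python
    import numpy as np

    def radial_profile_enhance(img, center):
        """Subtract the azimuthal mean at each integer radius bin."""
        yy, xx = np.indices(img.shape)
        r = np.hypot(yy - center[0], xx - center[1]).astype(int)
        sums = np.bincount(r.ravel(), weights=img.ravel())
        counts = np.bincount(r.ravel())
        profile = sums / np.maximum(counts, 1)        # mean brightness vs. radius
        return img - profile[r]

    # Toy comet: bright 1/r coma plus a faint jet along one diagonal.
    yy, xx = np.indices((101, 101))
    rr = np.hypot(yy - 50, xx - 50) + 1.0
    coma = 100.0 / rr
    jet = 0.5 * np.exp(-((yy - xx) ** 2) / 50.0) * (xx > 50)
    img = coma + jet

    enhanced = radial_profile_enhance(img, (50, 50))
    print(f"jet/coma contrast in raw image: {jet.max() / coma.max():.4f}")
    print(f"residual range after enhancement: {enhanced.min():.2f} to {enhanced.max():.2f}")
    ```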

  6. Flag-based detection of weak gas signatures in long-wave infrared hyperspectral image sequences

    NASA Astrophysics Data System (ADS)

    Marrinan, Timothy; Beveridge, J. Ross; Draper, Bruce; Kirby, Michael; Peterson, Chris

    2016-05-01

    We present a flag manifold based method for detecting chemical plumes in long-wave infrared hyperspectral movies. The method encodes temporal and spatial information related to a hyperspectral pixel into a flag, or nested sequence of linear subspaces. The technique used to create the flags pushes information about the background clutter, ambient conditions, and potential chemical agents into the leading elements of the flags. Exploiting this temporal information allows for a detection algorithm that is sensitive to the presence of weak signals. This method is compared to existing techniques qualitatively on real data and quantitatively on synthetic data to show that the flag-based algorithm consistently performs better on data when the SINRdB is low, and beats the ACE and MF algorithms in probability of detection for low probabilities of false alarm even when the SINRdB is high.

  7. Nighttime wind and scalar variability within and above an Amazonian canopy

    NASA Astrophysics Data System (ADS)

    Oliveira, Pablo E. S.; Acevedo, Otávio C.; Sörgel, Matthias; Tsokankunku, Anywhere; Wolff, Stefan; Araújo, Alessandro C.; Souza, Rodrigo A. F.; Sá, Marta O.; Manzi, Antônio O.; Andreae, Meinrat O.

    2018-03-01

    Nocturnal turbulent kinetic energy (TKE) and fluxes of energy, CO2 and O3 between the Amazon forest and the atmosphere are evaluated for a 20-day campaign at the Amazon Tall Tower Observatory (ATTO) site. The distinction of these quantities between fully turbulent (weakly stable) and intermittent (very stable) nights is discussed. Spectral analysis indicates that low-frequency, nonturbulent fluctuations are responsible for a large portion of the variability observed on intermittent nights. In these conditions, the low-frequency exchange may dominate over the turbulent transfer. In particular, we show that within the canopy most of the exchange of CO2 and H2O happens on temporal scales longer than 100 s. At 80 m, on the other hand, the turbulent fluxes are almost absent in such very stable conditions, suggesting a boundary layer shallower than 80 m. The relationship between TKE and mean winds shows that the stable boundary layer switches from the very stable to the weakly stable regime during intermittent bursts of turbulence. In general, fluxes estimated with long temporal windows that account for low-frequency effects are more dependent on the stability over a deeper layer above the forest than they are on the stability between the top of the canopy and its interior, suggesting that low-frequency processes are controlled over a deeper layer above the forest.

  8. An intermediate significant bit (ISB) watermarking technique using neural networks.

    PubMed

    Zeki, Akram; Abubakar, Adamu; Chiroma, Haruna

    2016-01-01

    Prior research has shown that the peak signal-to-noise ratio (PSNR) is the watermarked-image quality metric most frequently used to determine the strengths and weaknesses of watermarking algorithms. Conversely, normalised cross correlation (NCC) is the metric most commonly used after attacks have been applied to a watermarked image to verify the strength of the algorithm used. Many researchers have used these approaches to evaluate their algorithms. These strategies have been used for a long time; however, this unfortunately limits the value of PSNR and NCC in reflecting the strength and weakness of watermarking algorithms. This paper addresses this issue by determining the threshold values of these two parameters in reflecting the strength and weakness of watermarking algorithms. We used our novel watermarking technique to embed four watermarks into the intermediate significant bits (ISB) of six image files, one by one, replacing the image pixels with new pixels while keeping the new pixels very close to the original pixels. This approach gains improved robustness based on the PSNR and NCC values that were gathered. A neural network model was built that uses the image quality metric (PSNR and NCC) values obtained from watermarking the six grey-scale images using ISB as the desired output, trained on each watermarked image's PSNR and NCC. The neural network predicts the watermarked image's PSNR together with its NCC after attacks when a portion of the output of the same or different types of image quality metrics (PSNR and NCC) is available. The results indicate that the NCC metric fluctuates before the PSNR values deteriorate.
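
    A minimal sketch of intermediate-significant-bit embedding with a PSNR check (bit plane 4 and the plain bit-replacement rule are illustrative assumptions, not the authors' exact pixel-adjustment scheme):

    ```python
    import numpy as np

    def embed_isb(img, bits, plane=4):
        """Write the watermark bits into one intermediate bit plane of the
        first len(bits) pixels."""
        flat = img.astype(np.int32).ravel().copy()
        for i, b in enumerate(bits):
            flat[i] = (flat[i] & ~(1 << plane)) | (int(b) << plane)
        return flat.reshape(img.shape).astype(np.uint8)

    def psnr(a, b):
        mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
        return 10 * np.log10(255.0 ** 2 / mse)

    rng = np.random.default_rng(7)
    img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    bits = rng.integers(0, 2, size=256)

    marked = embed_isb(img, bits)
    extracted = (marked.ravel()[:256] >> 4) & 1       # read bit plane 4 back
    print("bits recovered:", np.array_equal(extracted, bits))
    print(f"PSNR = {psnr(img, marked):.1f} dB")
    ```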

  10. CALCLENS: weak lensing simulations for large-area sky surveys and second-order effects in cosmic shear power spectra

    NASA Astrophysics Data System (ADS)

    Becker, Matthew R.

    2013-10-01

    I present a new algorithm, Curved-sky grAvitational Lensing for Cosmological Light conE simulatioNS (CALCLENS), for efficiently computing weak gravitational lensing shear signals from large N-body light cone simulations over a curved sky. This new algorithm properly accounts for the sky curvature and boundary conditions, is able to produce redshift-dependent shear signals including corrections to the Born approximation by using multiple-plane ray tracing, and properly computes the lensed images of source galaxies in the light cone. The key feature of this algorithm is a new, computationally efficient Poisson solver for the sphere that combines spherical harmonic transform and multigrid methods. As a result, large areas of sky (~10 000 square degrees) can be ray traced efficiently at high resolution using only a few hundred cores. Using this new algorithm and curved-sky calculations that only use a slower but more accurate spherical harmonic transform Poisson solver, I study the convergence, shear E-mode, shear B-mode and rotation mode power spectra. Employing full-sky E/B-mode decompositions, I confirm that the numerically computed shear B-mode and rotation mode power spectra are equal at high accuracy (≲1 per cent) as expected from perturbation theory up to second order. Coupled with realistic galaxy populations placed in large N-body light cone simulations, this new algorithm is ideally suited for the construction of synthetic weak lensing shear catalogues to be used to test for systematic effects in data analysis procedures for upcoming large-area sky surveys. The implementation presented in this work, written in C and employing widely available software libraries to maintain portability, is publicly available at http://code.google.com/p/calclens.
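
    The harmonic-space inversion at the heart of such a Poisson solver is compact: on the sphere the Laplacian is diagonal in spherical harmonics, so ∇²φ = ρ becomes φ_lm = -ρ_lm / [l(l+1)]. Below is a minimal sketch using the healpy library, which is an assumption made here for illustration; CALCLENS itself is written in C and accelerates this step with multigrid.

      import numpy as np
      import healpy as hp  # assumed here for illustration

      def sht_poisson_solve(rho_map, lmax):
          """Solve lap(phi) = rho on the unit sphere: phi_lm = -rho_lm / (l*(l+1))."""
          nside = hp.get_nside(rho_map)
          alm = hp.map2alm(rho_map, lmax=lmax)
          ell = np.arange(lmax + 1, dtype=np.float64)
          fl = np.zeros(lmax + 1)
          fl[1:] = -1.0 / (ell[1:] * (ell[1:] + 1.0))  # l = 0 (monopole) is dropped
          return hp.alm2map(hp.almxfl(alm, fl), nside)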

  11. Periodic modulation-based stochastic resonance algorithm applied to quantitative analysis for weak liquid chromatography-mass spectrometry signal of granisetron in plasma

    NASA Astrophysics Data System (ADS)

    Xiang, Suyun; Wang, Wei; Xiang, Bingren; Deng, Haishan; Xie, Shaofei

    2007-05-01

    The periodic modulation-based stochastic resonance algorithm (PSRA) was used to amplify and detect the weak liquid chromatography-mass spectrometry (LC-MS) signal of granisetron in plasma. In the algorithm, stochastic resonance (SR) is achieved by introducing an external periodic force into the nonlinear system. The optimization of parameters was carried out in two steps to give attention to both the signal-to-noise ratio (S/N) and the peak shape of the output signal. By applying PSRA with the optimized parameters, the signal-to-noise ratio of the LC-MS peak was enhanced significantly, and the distorted peak shape that often appears in the traditional stochastic resonance algorithm was corrected by the added periodic force. Using the signals enhanced by PSRA, the method extended the limit of detection (LOD) and limit of quantification (LOQ) of granisetron in plasma from 0.05 and 0.2 ng/mL, respectively, to 0.01 and 0.02 ng/mL, and exhibited good linearity, accuracy and precision, ensuring accurate determination of the target analyte.
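
    The core of a periodic modulation-based SR scheme can be sketched in a few lines: the noisy measured trace drives an overdamped bistable system to which an external periodic force is added. The parameter values below are illustrative only; the two-step parameter optimization described in the paper is not reproduced.

      import numpy as np

      def periodic_sr(noisy_signal, dt, a=1.0, b=1.0, amp=0.3, omega=2.0 * np.pi * 0.01):
          """Euler integration of the overdamped bistable system
          dx/dt = a*x - b*x**3 + s(t) + amp*cos(omega*t),
          i.e. stochastic resonance with an added external periodic force."""
          x = np.zeros(len(noisy_signal))
          for i in range(1, len(noisy_signal)):
              t = (i - 1) * dt
              drift = a * x[i - 1] - b * x[i - 1] ** 3 + noisy_signal[i - 1] + amp * np.cos(omega * t)
              x[i] = x[i - 1] + dt * drift
          return x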

  12. Study on polarized optical flow algorithm for imaging bionic polarization navigation micro sensor

    NASA Astrophysics Data System (ADS)

    Guan, Le; Liu, Sheng; Li, Shi-qi; Lin, Wei; Zhai, Li-yuan; Chu, Jin-kui

    2018-05-01

    At present, both point-source and imaging polarization navigation devices can output only angle information, which means that the velocity of the carrier cannot be extracted directly from the polarization field pattern. Optical flow is an image-based method for calculating the velocity of pixel movement in an image. For ordinary optical flow, however, differences in pixel values, and with them the calculation accuracy, are reduced in weak light. Polarization imaging technology can improve both the detection accuracy and the recognition probability of a target because it acquires extra multi-dimensional polarization information about the target's radiation or reflection. In this paper, combining the polarization imaging technique with the traditional optical flow algorithm, a polarized optical flow algorithm is proposed, and it is verified that the polarized optical flow algorithm adapts well to weak light and can broaden the application range of polarization navigation sensors. This research lays the foundation for future day-and-night, all-weather polarization navigation applications.
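
    As an illustration of the general idea (not the paper's specific algorithm), consecutive degree-of-linear-polarization (DoLP) images can be fed to a standard dense optical flow routine in place of raw intensity; the OpenCV sketch below assumes frames captured through polarizers at 0°, 45° and 90°.

      import cv2
      import numpy as np

      def dolp(i0, i45, i90):
          """Degree of linear polarization from 0/45/90-degree polarizer images."""
          s0 = i0.astype(np.float64) + i90.astype(np.float64)
          s1 = i0.astype(np.float64) - i90.astype(np.float64)
          s2 = 2.0 * i45.astype(np.float64) - s0
          return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-9)

      def polarized_flow(prev_imgs, next_imgs):
          """Dense Farneback optical flow on DoLP frames instead of raw intensity."""
          p = (255 * np.clip(dolp(*prev_imgs), 0, 1)).astype(np.uint8)
          n = (255 * np.clip(dolp(*next_imgs), 0, 1)).astype(np.uint8)
          return cv2.calcOpticalFlowFarneback(p, n, None, 0.5, 3, 15, 3, 5, 1.2, 0)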

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon, Luis; Stanier, Adam John

    Here, we demonstrate a scalable fully implicit algorithm for the two-field low-β extended MHD model. This reduced model describes plasma behavior in the presence of strong guide fields, and is of significant practical impact both in nature and in laboratory plasmas. The model displays strong hyperbolic behavior, as manifested by the presence of fast dispersive waves, which make a fully implicit treatment very challenging. In this study, we employ a Jacobian-free Newton–Krylov nonlinear solver, for which we propose a physics-based preconditioner that renders the linearized set of equations suitable for inversion with multigrid methods. As a result, the algorithm is shown to scale both algorithmically (i.e., the iteration count is insensitive to grid refinement and timestep size) and in parallel in a weak-scaling sense, with the wall-clock time scaling weakly with the number of cores for up to 4096 cores. For a 4096 × 4096 mesh, we demonstrate a wall-clock-time speedup of ~6700 with respect to explicit algorithms. The model is validated linearly (against linear theory predictions) and nonlinearly (against fully kinetic simulations), demonstrating excellent agreement.
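
    The Jacobian-free Newton-Krylov structure itself is easy to illustrate with SciPy's newton_krylov, which needs only a residual function (Jacobian-vector products are approximated by finite differences); a physics-based preconditioner like the one proposed here would be passed through its inner_M argument. The toy residual below, an implicit step of 1-D nonlinear diffusion, is an assumed stand-in for the extended MHD system.

      import numpy as np
      from scipy.optimize import newton_krylov

      n, dt, dx = 64, 0.1, 1.0 / 64
      u_old = np.sin(2.0 * np.pi * np.linspace(0.0, 1.0, n))

      def residual(u):
          """Backward-Euler residual of du/dt = d/dx((1 + u^2) du/dx), no-flux ends."""
          flux = np.zeros(n + 1)
          um = 0.5 * (u[1:] + u[:-1])                      # face-centered values
          flux[1:-1] = (1.0 + um ** 2) * (u[1:] - u[:-1]) / dx
          return (u - u_old) / dt - (flux[1:] - flux[:-1]) / dx

      u_new = newton_krylov(residual, u_old, method="lgmres")  # inner_M= would hold a preconditioner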

  14. Quantum algorithms for quantum field theories.

    PubMed

    Jordan, Stephen P; Lee, Keith S M; Preskill, John

    2012-06-01

    Quantum field theory reconciles quantum mechanics and special relativity, and plays a central role in many areas of physics. We developed a quantum algorithm to compute relativistic scattering probabilities in a massive quantum field theory with quartic self-interactions (φ⁴ theory) in spacetime of four and fewer dimensions. Its run time is polynomial in the number of particles, their energy, and the desired precision, and the algorithm applies at both weak and strong coupling. In the strong-coupling and high-precision regimes, our quantum algorithm achieves an exponential speedup over the fastest known classical algorithm.

  15. Solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators.

    PubMed

    Zhao, Jing; Zong, Haili

    2018-01-01

    In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the cyclic and parallel iterative processes to propose two mixed iterative algorithms. None of our algorithms needs any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms for solving the multiple-set split equality problem.

  16. Uniformly stable backpropagation algorithm to train a feedforward neural network.

    PubMed

    Rubio, José de Jesús; Angelov, Plamen; Pacheco, Jaime

    2011-03-01

    Neural networks (NNs) have numerous applications to online processes, but the problem of stability is rarely discussed. This is an extremely important issue because, if the stability of a solution is not guaranteed, the equipment being used can be damaged, which can also cause serious accidents. It is true that in some research papers this problem has been considered, but only for continuous-time NNs. At the same time, many systems are better described in the discrete-time domain, such as populations of animals, the annual expenses of an industry, the interest earned by a bank, or the distribution of loads stored every hour in a warehouse. Therefore, it is of paramount importance to consider the stability of discrete-time NNs. This paper makes several important contributions. 1) A theorem is stated and proven which guarantees uniform stability of a general discrete-time system. 2) It is proven that the backpropagation (BP) algorithm with a new time-varying rate is uniformly stable for online identification and that the identification error converges to a small zone bounded by the uncertainty. 3) It is proven that the weights' error is bounded by the initial weights' error, i.e., overfitting is eliminated in the proposed algorithm. 4) The BP algorithm is applied to predict the distribution of loads that a transelevator receives from a trailer and places in the deposits of a warehouse every hour, so that the deposits in the warehouse can be reserved in advance using the prediction results. 5) The BP algorithm is compared with the recursive least squares (RLS) algorithm and with a Takagi-Sugeno type fuzzy inference system on the problem of predicting the distribution of loads in a warehouse, showing that the first two are stable and the third is unstable. 6) The BP algorithm is compared with the RLS algorithm and with the Kalman filter algorithm on a synthetic example.
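
    A minimal sketch of the central idea, a time-varying learning rate that keeps the discrete-time update bounded, is given below for a one-hidden-layer identifier. The normalized-gradient rate used here is an illustrative stand-in, not the exact rate law derived in the paper.

      import numpy as np

      def stable_bp_identify(x, y, hidden=8, epochs=50, eta0=0.5, seed=0):
          """Online BP with time-varying rate eta_k = eta0 / (1 + ||grad_k||^2),
          which shrinks the step whenever the gradient grows, keeping updates bounded."""
          rng = np.random.default_rng(seed)
          W1 = rng.normal(0.0, 0.5, (hidden, x.shape[1]))
          W2 = rng.normal(0.0, 0.5, (1, hidden))
          for _ in range(epochs):
              for xi, yi in zip(x, y):
                  h = np.tanh(W1 @ xi)
                  e = (W2 @ h)[0] - yi                                # identification error
                  g2 = e * h                                          # gradient w.r.t. W2
                  g1 = np.outer(e * W2.ravel() * (1.0 - h ** 2), xi)  # gradient w.r.t. W1
                  eta = eta0 / (1.0 + np.sum(g1 ** 2) + np.sum(g2 ** 2))
                  W2 -= eta * g2
                  W1 -= eta * g1
          return W1, W2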

  17. Boosting Learning Algorithm for Stock Price Forecasting

    NASA Astrophysics Data System (ADS)

    Wang, Chengzhang; Bai, Xiaoming

    2018-03-01

    To tackle the complexity and uncertainty of stock market behavior, many studies have introduced machine learning algorithms to forecast stock prices. The ANN (artificial neural network) is one of the most successful and promising applications. We propose a boosting-ANN model in this paper to predict the stock close price. On the basis of boosting theory, multiple weak predicting machines, i.e. ANNs, are assembled to build a stronger predictor, i.e. the boosting-ANN model. New error criteria for the weak learning machines and new weight-updating rules are adopted in this study. We select technical factors from financial markets as forecasting input variables. The final results demonstrate that the boosting-ANN model works better than the alternatives for stock price forecasting.

  18. Weak unique continuation property and a related inverse source problem for time-fractional diffusion-advection equations

    NASA Astrophysics Data System (ADS)

    Jiang, Daijun; Li, Zhiyuan; Liu, Yikan; Yamamoto, Masahiro

    2017-05-01

    In this paper, we first establish a weak unique continuation property for time-fractional diffusion-advection equations. The proof is mainly based on the Laplace transform and the unique continuation properties for elliptic and parabolic equations. The result is weaker than its parabolic counterpart in the sense that we additionally impose the homogeneous boundary condition. As a direct application, we prove the uniqueness for an inverse problem on determining the spatial component in the source term by interior measurements. Numerically, we reformulate our inverse source problem as an optimization problem, and propose an iterative thresholding algorithm. Finally, several numerical experiments are presented to show the accuracy and efficiency of the algorithm.

  19. A novel acenocoumarol pharmacogenomic dosing algorithm for the Greek population of EU-PACT trial.

    PubMed

    Ragia, Georgia; Kolovou, Vana; Kolovou, Genovefa; Konstantinides, Stavros; Maltezos, Efstratios; Tavridou, Anna; Tziakas, Dimitrios; Maitland-van der Zee, Anke H; Manolopoulos, Vangelis G

    2017-01-01

    To generate and validate a pharmacogenomic-guided (PG) dosing algorithm for acenocoumarol in the Greek population, and to compare its performance with other PG algorithms developed for the Greek population. A total of 140 Greek patients who reached a stable acenocoumarol dose, all participants in the EU-PACT trial for acenocoumarol (a randomized clinical trial that prospectively compared the effect of a PG dosing algorithm with that of a clinical dosing algorithm on the percentage of time within the INR therapeutic range), were included in the study. CYP2C9 and VKORC1 genotypes, age and weight affected acenocoumarol dose and predicted 53.9% of its variability. The EU-PACT PG algorithm overestimated the acenocoumarol dose across all CYP2C9/VKORC1 functional phenotype bins (predicted dose vs stable dose: in normal responders 2.31 vs 2.00 mg/day, p = 0.028; in sensitive responders 1.72 vs 1.50 mg/day, p = 0.003; in highly sensitive responders 1.39 vs 1.00 mg/day, p = 0.029). The PG algorithm previously developed for the Greek population overestimated the dose in normal responders (2.51 vs 2.00 mg/day, p < 0.001). An ethnic-specific dosing algorithm is suggested for better prediction of acenocoumarol dosage requirements in patients of Greek origin.
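
    Pharmacogenomic dosing algorithms of this kind are typically linear models over genotype counts and clinical covariates. The sketch below shows only that generic form; every coefficient in it is invented for illustration and none comes from the EU-PACT or Greek-population fits.

      def predict_acenocoumarol_dose(age, weight_kg, cyp2c9_variant_alleles, vkorc1_a_alleles):
          """Generic PG dose model: dose = b0 + b1*age + b2*weight + b3*CYP2C9 + b4*VKORC1.
          All coefficients are hypothetical placeholders, NOT fitted values."""
          b0, b_age, b_wt, b_cyp, b_vko = 3.0, -0.02, 0.01, -0.45, -0.55
          dose = (b0 + b_age * age + b_wt * weight_kg
                  + b_cyp * cyp2c9_variant_alleles + b_vko * vkorc1_a_alleles)
          return max(dose, 0.5)  # clamp to an illustrative minimum daily dose (mg/day)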

  20. Finite element solution for energy conservation using a highly stable explicit integration algorithm

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Manhardt, P. D.

    1972-01-01

    Theoretical derivation of a finite element solution algorithm for the transient energy conservation equation in multidimensional, stationary multi-media continua with irregular solution domain closure is considered. The complete finite element matrix forms for arbitrarily irregular discretizations are established, using natural coordinate function representations. The algorithm is embodied into a user-oriented computer program (COMOC) which obtains transient temperature distributions at the node points of the finite element discretization using a highly stable explicit integration procedure with automatic error control features. The finite element algorithm is shown to posses convergence with discretization for a transient sample problem. The condensed form for the specific heat element matrix is shown to be preferable to the consistent form. Computed results for diverse problems illustrate the versatility of COMOC, and easily prepared output subroutines are shown to allow quick engineering assessment of solution behavior.

  1. Graph theoretical stable allocation as a tool for reproduction of control by human operators

    NASA Astrophysics Data System (ADS)

    van Nooijen, Ronald; Ertsen, Maurits; Kolechkina, Alla

    2016-04-01

    During the design of central control algorithms for existing water resource systems under manual control it is important to consider the interaction with parts of the system that remain under manual control and to compare the proposed new system with the existing manual methods. In graph theory the "stable allocation" problem has good solution algorithms and allows for formulation of flow distribution problems in terms of priorities. As a test case for the use of this approach we used the algorithm to derive water allocation rules for the Gezira Scheme, an irrigation system located between the Blue and White Niles south of Khartoum. In 1925, Gezira started with 300,000 acres; currently it covers close to two million acres.

  2. Successful Manipulation in Stable Marriage Model with Complete Preference Lists

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hirotatsu; Matsui, Tomomi

    This paper deals with a strategic issue in the stable marriage model with complete preference lists (i.e., the preference list of each agent is a permutation of all the members of the opposite sex). Given complete preference lists of n men over n women and a marriage µ, we consider the problem of finding preference lists of n women over n men such that the men-proposing deferred acceptance algorithm (Gale-Shapley algorithm) applied to the lists produces µ. We show a simple necessary and sufficient condition for the existence of such a set of preference lists of women over men. Our condition directly gives an O(n²)-time algorithm for finding such lists, if they exist.
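
    For reference, the men-proposing deferred acceptance algorithm on which the question is built can be implemented directly; a standard Python version is sketched below.

      def gale_shapley(men_prefs, women_prefs):
          """Men-proposing deferred acceptance. Each preference list is complete,
          best choice first; returns the resulting marriage as a man -> woman dict."""
          rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
          next_choice = {m: 0 for m in men_prefs}
          engaged = {}                      # woman -> current partner
          free = list(men_prefs)
          while free:
              m = free.pop()
              w = men_prefs[m][next_choice[m]]
              next_choice[m] += 1
              if w not in engaged:
                  engaged[w] = m
              elif rank[w][m] < rank[w][engaged[w]]:
                  free.append(engaged[w])   # w trades up; her old partner is free again
                  engaged[w] = m
              else:
                  free.append(m)            # w rejects m; he proposes further down his list
          return {m: w for w, m in engaged.items()}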

  3. Block LU factorization

    NASA Technical Reports Server (NTRS)

    Demmel, James W.; Higham, Nicholas J.; Schreiber, Robert S.

    1992-01-01

    Many of the currently popular 'block algorithms' are scalar algorithms in which the operations have been grouped and reordered into matrix operations. One genuine block algorithm in practical use is block LU factorization, and this has recently been shown by Demmel and Higham to be unstable in general. It is shown here that block LU factorization is stable if A is block diagonally dominant by columns. Moreover, for a general matrix the level of instability in block LU factorization can be bounded in terms of the condition number κ(A) and the growth factor for Gaussian elimination without pivoting. A consequence is that block LU factorization is stable for a matrix A that is symmetric positive definite or point diagonally dominant by rows or columns, as long as A is well-conditioned.
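
    A minimal NumPy sketch of the factorization under discussion: block LU without pivoting, where each step forms a block column of L by solving against the diagonal block and then updates the Schur complement. Per the result above, this is stable when A is block diagonally dominant by columns.

      import numpy as np

      def block_lu(A, r):
          """A = L @ U with L unit block lower triangular and U block upper
          triangular, block size r, no pivoting."""
          U = A.astype(np.float64).copy()
          n = U.shape[0]
          L = np.eye(n)
          for k in range(0, n, r):
              e = min(k + r, n)
              # L21 = A21 @ inv(A11), computed via a solve with A11 transposed
              L[e:, k:e] = np.linalg.solve(U[k:e, k:e].T, U[e:, k:e].T).T
              U[e:, e:] -= L[e:, k:e] @ U[k:e, e:]   # Schur complement update
              U[e:, k:e] = 0.0
          return L, U

      # A = np.random.rand(9, 9) + 9 * np.eye(9)   # diagonally dominant test matrix
      # L, U = block_lu(A, 3); assert np.allclose(L @ U, A)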

  4. Advanced Avionics Verification and Validation Phase II (AAV&V-II)

    DTIC Science & Technology

    1999-01-01

    [Extraction fragment; recoverable table-of-contents entries: 2.1 The Interprocedural Control Flow Graph; 2.7 The Weak Control Dependence Algorithm; 2.8 The Indirect Dependence Algorithms; 2.9 Improvements to the Pleiades Object Management System.] The report describes modifications made to the Pleiades object management system to increase the speed of the analysis: inserting edges had become slow as the number of edges in the graph increased, and the time to insert edges was addressed by enhancements to the Pleiades object management system.

  5. Determination of boundary layer top on the basis of the characteristics of atmospheric particles

    NASA Astrophysics Data System (ADS)

    Liu, Boming; Ma, Yingying; Gong, Wei; Zhang, Ming; Yang, Jian

    2018-04-01

    The planetary boundary layer (PBL) is the lowest layer of the atmosphere, the layer directly influenced by the Earth's surface, and it responds to surface forcing. The determination of the PBL top is significant for environmental and climate research, and it can also serve as an input parameter for further data processing with atmospheric models. Traditional detection algorithms are susceptible to errors associated with the vertical distribution of aerosol concentrations. To overcome this limitation, a maximum difference search (MDS) algorithm is proposed to calculate the top of the boundary layer based on differences in particle characteristics. The PBL top positions from the MDS algorithm under different convection states were compared with those from conventional methods. Experimental results demonstrated that the MDS method can determine the top of the boundary layer precisely. The proposed algorithm can also calculate the top of the PBL accurately under weak convection conditions, where the traditional methods cannot be applied. Finally, experimental data from June 2015 to December 2015 were analysed to verify the reliability of the MDS algorithm. The correlation coefficients R² (RMSE) between the results of the MDS algorithm and radiosonde measurements were 0.53 (115 m), 0.79 (141 m) and 0.96 (43 m) under weak, moderate and strong convection, respectively. These findings indicate that the proposed method is feasible and stable.
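
    The abstract does not give the MDS formula, but the idea of placing the PBL top where the particle signal drops most sharply across a height window can be sketched as follows; the window length and the use of raw backscatter are assumptions made for illustration.

      import numpy as np

      def mds_pbl_top(heights, backscatter, window=10):
          """Return the height of the largest decrease in backscatter across a
          sliding window of `window` range gates (an illustrative MDS reading)."""
          b = np.asarray(backscatter, dtype=float)
          drops = b[:-window] - b[window:]      # positive where the signal falls
          i = int(np.argmax(drops))
          return 0.5 * (heights[i] + heights[i + window])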

  6. An underdamped stochastic resonance method with stable-state matching for incipient fault diagnosis of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Lei, Yaguo; Qiao, Zijian; Xu, Xuefang; Lin, Jing; Niu, Shantao

    2017-09-01

    Most traditional overdamped monostable, bistable and even tristable stochastic resonance (SR) methods have three shortcomings in weak characteristic extraction: (1) their potential structures, characterized by a single stable-state type, are insufficient to match the complicated and diverse mechanical vibration signals; (2) they are vulnerable to interference from multiscale noise and depend heavily on highpass filters whose parameters are selected subjectively, which can result in false detection; and (3) their rescaling factors are generally fixed as constants, thereby ignoring the synergistic effect among vibration signals, potential structures and rescaling factors. These three shortcomings have limited the enhancement ability of SR. To explore the potential of SR, this paper first investigates SR in a multistable system by calculating its output spectral amplification, then analyzes its output frequency response numerically, examines the effect of both damping and rescaling factors on output responses, and finally presents a promising underdamped SR method with stable-state matching for incipient bearing fault diagnosis. This method has three advantages: (1) the diversity of stable-state types in a multistable potential makes it easy to match various vibration signals; (2) the underdamped multistable SR, equivalent to a moving nonlinear bandpass filter dependent on the rescaling factors, is able to suppress the multiscale noise; and (3) the synergistic effect among vibration signals, potential structures and rescaling and damping factors is achieved using quantum genetic algorithms whose fitness function is a new weighted signal-to-noise ratio (WSNR) instead of the SNR. The proposed method is therefore expected to possess good enhancement ability. Simulated and experimental data from rolling element bearings demonstrate its effectiveness. The comparison results show that the proposed method obtains a higher amplitude at the target frequency and a larger output WSNR, and performs better than traditional SR methods.

  7. Dynamical Chaos in the Wisdom-Holman Integrator: Origins and Solutions

    NASA Technical Reports Server (NTRS)

    Rauch, Kevin P.; Holman, Matthew

    1999-01-01

    We examine the nonlinear stability of the Wisdom-Holman (WH) symplectic mapping applied to the integration of perturbed, highly eccentric (e ~ 0.9) two-body orbits. We find that the method is unstable and introduces artificial chaos into the computed trajectories for this class of problems, unless the step size chosen is small enough that periapse is always resolved, in which case the method is generically stable. This 'radial orbit instability' persists even for weakly perturbed systems. Using the Stark problem as a fiducial test case, we investigate the dynamical origin of this instability and argue that the numerical chaos results from the overlap of step-size resonances; interestingly, for the Stark problem many of these resonances appear to be absolutely stable. We similarly examine the robustness of several alternative integration methods: a time-regularized version of the WH mapping suggested by Mikkola; the potential-splitting (PS) method of Duncan, Levison & Lee; and two original methods incorporating approximations based on Stark motion instead of Keplerian motion. The two-fixed-point problem and a related, more general problem are used to conduct a comparative test of the various methods for several types of motion. Among the algorithms tested, the time-transformed WH mapping is clearly the most efficient and stable method for integrating eccentric, nearly Keplerian orbits in the absence of close encounters. For test particles subject to both high eccentricities and very close encounters, we find an enhanced version of the PS method (incorporating time regularization, force-center switching, and an improved kernel function) to be both economical and highly versatile. We conclude that Stark-based methods are of marginal utility in N-body type integrations. Additional implications for the symplectic integration of N-body systems are discussed.
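
    The step-size sensitivity described here is easy to reproduce with any fixed-step symplectic integrator on an eccentric Kepler orbit; the leapfrog sketch below (not the WH mapping itself) conserves energy well only when the time step resolves periapse.

      import numpy as np

      def leapfrog_kepler(r0, v0, dt, steps):
          """Fixed-step kick-drift-kick leapfrog for the two-body problem in
          normalised units (GM = 1). Symplectic, but eccentric orbits still
          require dt small enough to resolve periapse passage."""
          r, v = np.array(r0, dtype=float), np.array(v0, dtype=float)
          trajectory = np.empty((steps, 2))
          for i in range(steps):
              v += 0.5 * dt * (-r / np.linalg.norm(r) ** 3)
              r += dt * v
              v += 0.5 * dt * (-r / np.linalg.norm(r) ** 3)
              trajectory[i] = r
          return trajectory

      # e = 0.9 orbit started at apoapse: r_a = 1 + e, v_a = sqrt((1 - e)/(1 + e))
      # orbit = leapfrog_kepler([1.9, 0.0], [0.0, np.sqrt(0.1 / 1.9)], 1e-3, 200000)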

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tessore, Nicolas; Metcalf, R. Benton; Winther, Hans A.

    A number of alternatives to general relativity exhibit gravitational screening in the non-linear regime of structure formation. We describe a set of algorithms that can produce weak lensing maps of large scale structure in such theories and can be used to generate mock surveys for cosmological analysis. By analysing a few basic statistics we indicate how these alternatives can be distinguished from general relativity with future weak lensing surveys.

  9. Discovery of a meta-stable Al-Sm phase with unknown stoichiometry using a genetic algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Feng; McBrearty, Ian; Ott, R. T.

    Unknown crystalline phases observed during the devitrification process of glassy metal alloys significantly limit our ability to understand and control phase selection in these systems driven far from equilibrium. Here, we report a new meta-stable Al₅Sm phase identified by simultaneously searching Al-rich compositions of the Al–Sm system, using an efficient genetic algorithm. The excellent match between calculated and experimental X-ray diffraction patterns confirms that this new phase appeared in the crystallization of melt-spun Al₉₀Sm₁₀ alloys.

  10. Aligning a Receiving Antenna Array to Reduce Interference

    NASA Technical Reports Server (NTRS)

    Jongeling, Andre P.; Rogstad, David H.

    2009-01-01

    A digital signal-processing algorithm has been devised as a means of aligning (as defined below) the outputs of multiple receiving radio antennas in a large array for the purpose of receiving a desired weak signal transmitted by a single distant source in the presence of an interfering signal that (1) originates at another source lying within the antenna beam and (2) occupies a frequency band significantly wider than that of the desired signal. In the original intended application of the algorithm, the desired weak signal is a spacecraft telemetry signal, the antennas are spacecraft-tracking antennas in NASA's Deep Space Network, and the source of the wide-band interfering signal is typically a radio galaxy or a planet that lies along or near the line of sight to the spacecraft. The algorithm could also afford the ability to discriminate between desired narrow-band and nearby undesired wide-band sources in related applications that include satellite and terrestrial radio communications and radio astronomy. The development of the present algorithm involved modification of a prior algorithm called SUMPLE and a predecessor called SIMPLE. SUMPLE was described in "Algorithm for Aligning an Array of Receiving Radio Antennas" (NPO-40574), NASA Tech Briefs Vol. 30, No. 4 (April 2006), page 54. To recapitulate: as used here, aligning signifies adjusting the delays and phases of the outputs from the various antennas so that their relatively weak replicas of the desired signal can be added coherently to increase the signal-to-noise ratio (SNR) for improved reception, as though one had a single larger antenna. Prior to the development of SUMPLE, it was common practice to effect alignment by means of a process that involves correlation of signals in pairs; SIMPLE is an example of an algorithm that effects such a process. SUMPLE also involves correlations, but the correlations are not performed in pairs. Instead, in a partly iterative process, each signal is appropriately weighted and then correlated with a composite signal equal to the sum of the other signals.
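
    The correlate-with-the-sum idea behind SUMPLE can be sketched compactly for complex baseband samples; the normalization and fixed iteration count below are illustrative assumptions, not the DSN implementation.

      import numpy as np

      def sumple_weights(X, iterations=20):
          """SUMPLE-style weight estimation for an (N, T) array of complex
          baseband samples: each antenna is correlated with the weighted sum of
          all the *other* antennas, and that correlation becomes its new weight."""
          N, T = X.shape
          w = np.ones(N, dtype=complex)
          for _ in range(iterations):
              total = (w[:, None] * X).sum(axis=0)
              new_w = np.empty_like(w)
              for i in range(N):
                  others = total - w[i] * X[i]        # composite sum excluding antenna i
                  new_w[i] = np.vdot(X[i], others) / T
              w = new_w * np.sqrt(N) / np.linalg.norm(new_w)  # keep overall gain fixed
          return w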

  11. Comparison of weak-wind characteristics across different Surface Types in stable stratification

    NASA Astrophysics Data System (ADS)

    Freundorfer, Anita; Rehberg, Ingo; Thomas, Christoph

    2017-04-01

    Atmospheric transport in weak winds and very stable conditions is often characterized by phenomena collectively referred to as submeso motions, since their time and spatial scales exceed those of turbulence but are smaller than synoptic motions. Evidence is mounting that submeso motions invalidate models for turbulent dispersion and diffusion, since their physics are not captured by current similarity theories. Typical phenomena in the weak-wind stable boundary layer include meandering motions, quasi-two-dimensional pancake vortices and wavelike motions. These motions may be subject to non-local forcing and sensitive to small topographic undulations. The invalidity of Taylor's hypothesis of frozen turbulence for submeso motions requires the use of sensor networks to provide observations in both the time and space domains simultaneously. We present the results from the series of Advanced Resolution Canopy Flow Observations (ARCFLO) experiments using a sensor network consisting of 12 sonic anemometers and 12 thermohygrometers. The objective of ARCFLO was to observe the flow and the turbulent and submeso transport at high spatial and temporal resolution at 4 different sites in the Pacific Northwest, USA. These sites represented a variable degree of terrain complexity (flat to mountainous) and vegetation architecture (grass to forest, open to dense). In our study, a distinct weak-wind regime was identified for each site using the threshold velocity at which the friction velocity becomes dependent upon the mean horizontal wind speed. Here we used the scalar mean of the wind speed because the friction velocity showed a clearer dependence on the scalar mean than on the vector mean of the wind velocity. It was found that the critical speed for the weak-wind regime is higher in denser vegetation. For an open agricultural area (Botany and Plant Pathology Farm) we found a critical wind speed of v_crit = (0.24 ± 0.05) m s⁻¹, while for a very dense forest (Mary's River Douglas Fir site) with a leaf area index of LAI = 9.4 m² m⁻², the critical wind speed is v_crit = (1.0 ± 0.1) m s⁻¹. Further analyses include developing an identification scheme to sample submeso motions using their quasi-two-dimensional nature. Once they are separated from turbulence, the properties of submeso motions and the impact of different canopy densities on those motions can be explored. We hypothesize that submeso motions are the main generating mechanism for the locally confined and intermittent turbulence in the weak-wind stable boundary layer.

  12. Antenna Controller Replacement Software

    NASA Technical Reports Server (NTRS)

    Chao, Roger Y.; Morgan, Scott C.; Strain, Martha M.; Rockwell, Stephen T.; Shimizu, Kenneth J.; Tehrani, Barzia J.; Kwok, Jaclyn H.; Tuazon-Wong, Michelle; Valtier, Henry; Nalbandi, Reza

    2010-01-01

    The Antenna Controller Replacement (ACR) software accurately points and monitors the Deep Space Network (DSN) 70-m and 34-m high-efficiency (HEF) ground-based antennas that are used to track primarily spacecraft and, periodically, celestial targets. To track a spacecraft, or another target, the antenna must be accurately pointed at the spacecraft, which can be very far away with very weak signals. ACR's conical scanning capability collects the signal in a circular pattern around the target, calculates the location of the strongest signal, and adjusts the antenna pointing to point directly at the spacecraft. A real-time, closed-loop servo control algorithm performed every 0.02 second allows accurate positioning of the antenna in order to track these distant spacecraft. Additionally, this advanced servo control algorithm provides better antenna pointing performance in windy conditions. The ACR software provides high-level commands that present a very easy user interface for the DSN operator: the operator only needs to enter two commands to start antenna, subreflector, and Master Equatorial tracking. The most accurate antenna pointing is accomplished by aligning the antenna to the Master Equatorial, which, because of its small size and sheltered location, has the most stable pointing. The antenna has hundreds of digital and analog monitor points; the ACR software provides compact displays to summarize the status of the antenna, subreflector, and Master Equatorial. The ACR software has two major functions. First, it performs all of the steps required to accurately point the antenna (and subreflector and Master Equatorial) at the spacecraft (or celestial target). This involves controlling the antenna/subreflector/Master-Equatorial hardware, initiating and monitoring the correct sequence of operations, calculating the position of the spacecraft relative to the antenna, executing the real-time servo control algorithm to maintain the correct position, and monitoring tracking performance.

  13. Digitally balanced detection for optical tomography.

    PubMed

    Hafiz, Rehan; Ozanyan, Krikor B

    2007-10-01

    Analog balanced photodetection has found extensive use for sensing a weak absorption signal buried in laser intensity noise. This paper proposes schemes for a compact, affordable, and flexible digital implementation of the already established analog balanced detection, as part of a multichannel digital tomography system. Variants of digitally balanced detection (DBD) schemes, suitable for weak signals on a largely varying background or for weakly varying envelopes of high-frequency carrier waves, are introduced analytically and elaborated in terms of algorithmic and hardware flow. The DBD algorithms are implemented on low-cost general-purpose reconfigurable hardware (a field-programmable gate array), utilizing less than half of its resources. The performance of the DBD schemes compares favorably with their analog counterpart: a common-mode rejection ratio of 50 dB was observed over a bandwidth of 300 kHz, limited mainly by the host digital hardware. The close relationship between the DBD outputs and those of known analog balancing circuits is discussed in principle and shown experimentally in the example case of propane gas detection.
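
    One simple digital balancing step, sketched below under the assumption of a windowed least-squares gain match between the signal and reference channels, shows how common-mode laser intensity noise is cancelled numerically; the FPGA variants described in the paper are more elaborate.

      import numpy as np

      def digital_balance(sig, ref, window=1024):
          """Per-window least-squares gain g, then output sig - g*ref so the
          common-mode intensity noise shared by both channels cancels."""
          out = np.empty(len(sig), dtype=float)
          for start in range(0, len(sig), window):
              s = slice(start, start + window)
              g = np.dot(sig[s], ref[s]) / np.dot(ref[s], ref[s])
              out[s] = sig[s] - g * ref[s]
          return out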

  14. Detecting and visualizing weak signatures in hyperspectral data

    NASA Astrophysics Data System (ADS)

    MacPherson, Duncan James

    This thesis evaluates existing techniques for detecting weak spectral signatures in remotely sensed hyperspectral data. Algorithms are presented that successfully detect hard-to-find 'mystery' signatures in unknown cluttered backgrounds. The term 'mystery' describes a scenario where the spectral target and background endmembers are unknown. Sub-pixel analysis and background suppression are used to find deeply embedded signatures that can account for less than 10% of the total signal strength. Existing 'mystery target' detection algorithms are derived and compared. Several techniques are shown to be superior both visually and quantitatively. Detection performance is evaluated using confidence metrics that are developed. A multiple-algorithm approach is shown to improve detection confidence significantly. Although the research focuses on remote sensing applications, the algorithms presented can be applied to a wide variety of fields such as medicine, law enforcement, manufacturing, earth science, food production, and astrophysics. The algorithms are shown to be general and applicable to both the reflective and emissive parts of the electromagnetic spectrum. The application scope is broad, and the final results open new opportunities for many specific applications, including land mine detection, pollution and hazardous waste detection, crop abundance calculations, volcanic activity monitoring, detecting diseases in food, automobile or airplane target recognition, cancer detection, mining operations, extracting galactic gas emissions, etc.

  15. GENERAL: Application of Symplectic Algebraic Dynamics Algorithm to Circular Restricted Three-Body Problem

    NASA Astrophysics Data System (ADS)

    Lu, Wei-Tao; Zhang, Hua; Wang, Shun-Jin

    2008-07-01

    The symplectic algebraic dynamics algorithm (SADA) for ordinary differential equations is applied to solve numerically the circular restricted three-body problem (CR3BP) in dynamical astronomy, for both stable motion and chaotic motion. The results are compared with those of the fourth-order Runge-Kutta algorithm and a fourth-order symplectic algorithm, which shows that SADA has higher accuracy than the others in long-term calculations of the CR3BP.

  16. Photovoltaic Cells Mppt Algorithm and Design of Controller Monitoring System

    NASA Astrophysics Data System (ADS)

    Meng, X. Z.; Feng, H. B.

    2017-10-01

    This paper combines the advantages of several maximum power point tracking (MPPT) algorithms to put forward an algorithm with higher speed and higher precision, and based on this algorithm designs an ARM-based maximum power point tracking controller. The controller, communication technology and PC software form a control system. Results of simulation and experiment showed that the maximum power tracking process was effective and the system was stable.
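
    The paper does not spell out its combined algorithm, but the perturb-and-observe loop on which most MPPT schemes build is short; the sketch below assumes callback functions read_pv and set_voltage for the power measurement and the converter command.

      def perturb_and_observe(read_pv, set_voltage, v0, step=0.1, iterations=100):
          """Classic P&O hill climbing: keep perturbing the operating voltage in
          the direction that last increased the panel power."""
          v, direction = v0, +1
          p_prev = read_pv(v0)
          for _ in range(iterations):
              v += direction * step
              set_voltage(v)
              p = read_pv(v)
              if p < p_prev:            # power fell: reverse the perturbation
                  direction = -direction
              p_prev = p
          return v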

  17. Development of Phase-Stable Photon Upconverters for Efficient Solar Energy Utilization

    NASA Astrophysics Data System (ADS)

    Murakami, Yoichi

    Photon upconversion based on triplet-triplet annihilation (TTA) of excited triplet molecules is drawing attention due to its applicability to weak incident light, holding potential for improving the efficiency of solar energy conversion devices. Since energy transfer between the triplet levels of different molecules and TTA are based on the Dexter mechanism, inter-molecular collision is necessary, and hence the majority of previous studies have been done with organic solvents, which are volatile and flammable. This paper presents the development and characterization of phase-stable photon upconverters fabricated with ionic liquids, which are room-temperature molten salts with negligible vapor pressure and high thermal stability. The employed aromatic molecules, which are the carriers of the photo-created energy and are non-polar (or weakly polar) molecules, are found to be stable in the polar environment of ionic liquids, contrary to expectation. A mechanism for this stable solvation is proposed. The upconversion quantum yields are found to saturate rapidly as the excitation light power increases. An analytical model was developed and compared with the experimental data. It is shown that ionic liquids are not viscous media for the purpose of TTA-based upconversion.

  18. On Nash-Equilibria of Approximation-Stable Games

    NASA Astrophysics Data System (ADS)

    Awasthi, Pranjal; Balcan, Maria-Florina; Blum, Avrim; Sheffet, Or; Vempala, Santosh

    One reason for wanting to compute an (approximate) Nash equilibrium of a game is to predict how players will play. However, if the game has multiple equilibria that are far apart, or ε-equilibria that are far in variation distance from the true Nash equilibrium strategies, then this prediction may not be possible even in principle. Motivated by this consideration, in this paper we define the notion of games that are approximation stable, meaning that all ε-approximate equilibria are contained inside a small ball of radius Δ around a true equilibrium, and investigate a number of their properties. Many natural small games such as matching pennies and rock-paper-scissors are indeed approximation stable. We show furthermore there exist 2-player n-by-n approximation-stable games in which the Nash equilibrium and all approximate equilibria have support Ω(log n). On the other hand, we show all (ε,Δ) approximation-stable games must have an ε-equilibrium of support O((Δ^{2-o(1)}/ε²) log n), yielding an immediate n^{O((Δ^{2-o(1)}/ε²) log n)}-time algorithm, improving over the bound of [11] for games satisfying this condition. We in addition give a polynomial-time algorithm for the case that Δ and ε are sufficiently close together. We also consider an inverse property, namely that all non-approximate equilibria are far from some true equilibrium, and give an efficient algorithm for games satisfying that condition.

  19. Wind velocity profile reconstruction from intensity fluctuations of a plane wave propagating in a turbulent atmosphere.

    PubMed

    Banakh, V A; Marakasov, D A

    2007-08-01

    Reconstruction of a wind profile based on the statistics of plane-wave intensity fluctuations in a turbulent atmosphere is considered. The algorithm for wind profile retrieval from the spatiotemporal spectrum of plane-wave weak intensity fluctuations is described, and the results of end-to-end computer experiments on wind profiling based on the developed algorithm are presented. It is shown that the reconstructing algorithm allows retrieval of a wind profile from turbulent plane-wave intensity fluctuations with acceptable accuracy.

  20. Clustering algorithms for identifying core atom sets and for assessing the precision of protein structure ensembles.

    PubMed

    Snyder, David A; Montelione, Gaetano T

    2005-06-01

    An important open question in the field of NMR-based biomolecular structure determination is how best to characterize the precision of the resulting ensemble of structures. Typically, the RMSD, as minimized in superimposing the ensemble of structures, is the preferred measure of precision. However, the presence of poorly determined atomic coordinates and multiple "RMSD-stable domains" (locally well-defined regions that are not aligned in global superimpositions) complicates RMSD calculations. In this paper, we present a method, based on a novel, structurally defined order parameter, for identifying a set of core atoms to use in determining superimpositions for RMSD calculations. In addition, we present a method for deciding whether to partition that core atom set into "RMSD-stable domains" and, if so, how to determine the partitioning of the core atom set. We demonstrate our algorithm and its application in calculating statistically sound RMSD values by applying it to a set of NMR-derived structural ensembles, superimposing each RMSD-stable domain (or the entire core atom set, where appropriate) found in each protein structure under consideration. A parameter calculated by our algorithm using a novel, kurtosis-based criterion, the epsilon-value, is a measure of the precision of the superimposition that complements the RMSD. In addition, we compare our algorithm with previously described algorithms for determining core atom sets. The methods presented in this paper for biomolecular structure superimposition are quite general and have application in many areas of structural bioinformatics and structural biology.

  1. Some Results of Weak Anticipative Concept Applied in Simulation Based Decision Support in Enterprise

    NASA Astrophysics Data System (ADS)

    Kljajić, Miroljub; Kofjač, Davorin; Kljajić Borštnar, Mirjana; Škraba, Andrej

    2010-11-01

    Simulation models are used for decision support and learning in enterprises and in schools. Three cases of successful applications demonstrate the usefulness of weak anticipative information. The first, job-shop scheduling of production with a makespan criterion, presents a real case of optimizing customized flexible furniture production; a genetic algorithm for job-shop scheduling optimization is presented. The second, simulation-based inventory control, describes inventory optimization for products with stochastic lead time and demand, where dynamic programming and fuzzy control algorithms reduce the total cost without producing stock-outs in most cases. The value of decision-making information based on simulation is discussed as well. All cases are discussed from the optimization, modeling and learning points of view.

  2. Group implicit concurrent algorithms in nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Ortiz, M.; Sotelino, E. D.

    1989-01-01

    During the 1970s and 1980s, considerable effort was devoted to developing efficient and reliable time stepping procedures for transient structural analysis. Mathematically, the equations governing this type of problem are generally stiff, i.e., they exhibit a wide spectrum in the linear range. The algorithms best suited to this type of application are those which accurately integrate the low frequency content of the response without necessitating the resolution of the high frequency modes. This means that the algorithms must be unconditionally stable, which in turn rules out explicit integration. The most exciting possibility in the algorithm development area in recent years has been the advent of parallel computers with multiprocessing capabilities. This work is therefore mainly concerned with the development of parallel algorithms in the area of structural dynamics. A primary objective is to devise unconditionally stable and accurate time stepping procedures which lend themselves to an efficient implementation in concurrent machines. Some features of the new computer architectures are summarized. A brief survey of current efforts in the area is presented. A new class of concurrent procedures, or Group Implicit (GI) algorithms, is introduced and analyzed. The numerical simulation shows that GI algorithms hold considerable promise for application in coarse-grain as well as medium-grain parallel computers.

  3. Generalized Detectability for Discrete Event Systems

    PubMed Central

    Shu, Shaolong; Lin, Feng

    2011-01-01

    In our previous work, we investigated detectability of discrete event systems, which is defined as the ability to determine the current and subsequent states of a system based on observation. For different applications, we defined four types of detectability: (weak) detectability, strong detectability, (weak) periodic detectability, and strong periodic detectability. In this paper, we extend our results in three aspects. (1) We extend detectability from deterministic systems to nondeterministic systems. Such a generalization is necessary because there are many systems that need to be modeled as nondeterministic discrete event systems. (2) We develop polynomial algorithms to check strong detectability. The previous algorithms are based on an observer whose construction is of exponential complexity, while the new algorithms are based on a new automaton called a detector. (3) We extend detectability to D-detectability. While detectability requires determining the exact state of a system, D-detectability relaxes this requirement by asking only to distinguish certain pairs of states. With these extensions, the theory on detectability of discrete event systems becomes more applicable for solving many practical problems. PMID:21691432

  4. A Computational Framework for High-Throughput Isotopic Natural Abundance Correction of Omics-Level Ultra-High Resolution FT-MS Datasets

    PubMed Central

    Carreer, William J.; Flight, Robert M.; Moseley, Hunter N. B.

    2013-01-01

    New metabolomics applications of ultra-high resolution and accuracy mass spectrometry can provide thousands of detectable isotopologues, with the number of potentially detectable isotopologues increasing exponentially with the number of stable isotopes used in newer isotope tracing methods like stable isotope-resolved metabolomics (SIRM) experiments. This huge increase in usable data requires software capable of correcting the large number of isotopologue peaks resulting from SIRM experiments in a timely manner. We describe the design of a new algorithm and software system capable of handling these high volumes of data, while including quality control methods for maintaining data quality. We validate this new algorithm against a previous single-isotope correction algorithm in a two-step cross-validation. Next, we demonstrate the algorithm and correct for the effects of natural abundance for both ¹³C and ¹⁵N isotopes on a set of raw isotopologue intensities of UDP-N-acetyl-D-glucosamine derived from a ¹³C/¹⁵N-tracing experiment. Finally, we demonstrate the algorithm on a full omics-level dataset. PMID:24404440
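
    For a single tracer isotope, natural abundance correction reduces to inverting a lower-triangular binomial mixing matrix; the sketch below shows that single-isotope core (¹³C at natural abundance p = 0.0107) and is only a simplified stand-in for the paper's multi-isotope, FT-MS-scale algorithm.

      import numpy as np
      from math import comb

      def correct_natural_abundance(observed, n_atoms, p=0.0107):
          """observed[i] = measured intensity of the isotopologue with i labels.
          Build M with M[j+k, j] = C(n-j, k) p^k (1-p)^(n-j-k), the chance that a
          species with j real labels shows up k mass units higher, then solve."""
          n = len(observed)
          M = np.zeros((n, n))
          for j in range(n):
              for k in range(n_atoms - j + 1):
                  if j + k < n:
                      M[j + k, j] = comb(n_atoms - j, k) * p ** k * (1 - p) ** (n_atoms - j - k)
          return np.linalg.solve(M, np.asarray(observed, dtype=float))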

  5. A note on libration point orbits, temporary capture and low-energy transfers

    NASA Astrophysics Data System (ADS)

    Fantino, E.; Gómez, G.; Masdemont, J. J.; Ren, Y.

    2010-11-01

    In the circular restricted three-body problem (CR3BP) the weak stability boundary (WSB) is defined as a boundary set in the phase space between stable and unstable motion relative to the second primary. At a given energy level, the boundaries of this region are provided by the stable manifolds of the central objects of the L1 and L2 libration points, i.e., the two planar Lyapunov orbits. Moreover, the unstable manifolds of libration point orbits (LPOs) around L1 and L2 have been identified as responsible for the weak or temporary capture around the second primary of the system. These two facts suggest the existence of natural dynamical channels between the Earth's vicinity and the Sun-Earth libration points L1 and L2. Furthermore, it has been shown that the Sun-Earth L2 central unstable manifolds can be linked, through a heteroclinic connection, to the central stable manifolds of the L2 point in the Earth-Moon three-body problem. This concept has been applied to the design of low-energy transfers (LETs) from the Earth to the Moon. In this contribution we consider all three of the above issues, i.e., weak stability boundaries, temporary capture and low-energy transfers, and we discuss the role played by the invariant manifolds of LPOs in each of them. The study is made in the planar approximation.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rose, Brian E. J.; Cronin, Timothy W.; Bitz, Cecilia M., E-mail: brose@albany.edu

    Planetary obliquity determines the meridional distribution of the annual mean insolation. For obliquity exceeding 55°, the weakest insolation occurs at the equator. Stable partial snow and ice cover on such a planet would be in the form of a belt about the equator rather than polar caps. An analytical model of planetary climate is used to investigate the stability of ice caps and ice belts over the widest possible range of parameters. The model is a non-dimensional diffusive Energy Balance Model, representing insolation, heat transport, and ice-albedo feedback on a spherical planet. A complete analytical solution for any obliquity is given and validated against numerical solutions of a seasonal model in the “deep-water” regime of weak seasonal ice line migration. Multiple equilibria and unstable transitions between climate states (ice-free, Snowball, or ice cap/belt) are found over wide swaths of parameter space, including a “Large Ice-Belt Instability” and “Small Ice-Belt Instability” at high obliquity. The Snowball catastrophe is avoided at weak radiative forcing in two different scenarios: weak albedo feedback and inefficient heat transport (favoring stable partial ice cover), or efficient transport at high obliquity (favoring ice-free conditions). From speculative assumptions about distributions of planetary parameters, three-fourths to four-fifths of all planets with stable partial ice cover should be in the form of Earth-like polar caps.

  7. A weak-coupling immersed boundary method for fluid-structure interaction with low density ratio of solid to fluid

    NASA Astrophysics Data System (ADS)

    Kim, Woojin; Lee, Injae; Choi, Haecheon

    2018-04-01

    We present a weak-coupling approach for fluid-structure interaction with low density ratio (ρ) of solid to fluid. For accurate and stable solutions, we introduce predictors, an explicit two-step method and the implicit Euler method, to obtain the provisional velocity and position of the fluid-structure interface at each time step, respectively. The incompressible Navier-Stokes equations, together with these provisional velocity and position at the fluid-structure interface, are solved in an Eulerian coordinate using an immersed-boundary finite-volume method on a staggered mesh. The dynamic equation of an elastic solid-body motion, together with the hydrodynamic force at the provisional position of the interface, is solved in a Lagrangian coordinate using a finite element method. Each governing equation for fluid and structure is implicitly solved using second-order time integrators. The overall second-order temporal accuracy is preserved even with the use of lower-order predictors. A linear stability analysis is also conducted for an ideal case to find the optimal explicit two-step method that provides stable solutions down to the lowest density ratio. With the present weak coupling, three different fluid-structure interaction problems were simulated: flow around an elastically mounted rigid circular cylinder, an elastic beam attached to the base of a stationary circular cylinder, and a flexible plate. The lowest density ratios providing stable solutions are sought for the first two problems, and they are much lower than 1 (ρmin = 0.21 and 0.31, respectively). The simulation results agree well with those from the strong coupling suggested here and also from previous numerical and experimental studies, indicating the efficiency and accuracy of the present weak coupling.

  8. On the fusion of tuning parameters of fuzzy rules and neural network

    NASA Astrophysics Data System (ADS)

    Mamuda, Mamman; Sathasivam, Saratha

    2017-08-01

    Learning a fuzzy rule-based system with a neural network can lead to a precise and valuable understanding of several problems. Fuzzy logic offers a simple way to reach a definite conclusion based upon vague, ambiguous, imprecise, noisy or missing input information. Conventional learning algorithms for tuning the parameters of fuzzy rules using training input-output data usually end in a weak firing state, which weakens the fuzzy rule and makes it unreliable for a multiple-input fuzzy system. In this paper, we introduce a new learning algorithm, based on the gradient descent method, for tuning the parameters of the fuzzy rules alongside a radial basis function neural network (RBFNN) on training input-output data. With the new learning algorithm, the problem of weak firing seen with the conventional method is addressed. We illustrate the efficiency of our new learning algorithm by means of numerical examples; MATLAB R2014(a) software was used to simulate our results. The results show that the new learning method has the advantage of training the fuzzy rules without tampering with the fuzzy rule table, which allows a membership function of a rule to be used more than once in the fuzzy rule base.

  9. An improved weakly compressible SPH method for simulating free surface flows of viscous and viscoelastic fluids

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoyang; Deng, Xiao-Long

    2016-04-01

    In this paper, an improved weakly compressible smoothed particle hydrodynamics (SPH) method is proposed to simulate transient free surface flows of viscous and viscoelastic fluids. The improved SPH algorithm includes the implementation of (i) a mixed symmetric correction of the kernel gradient to improve the accuracy and stability of the traditional SPH method and (ii) the Rusanov flux in the continuity equation to improve the computation of pressure distributions in the dynamics of liquids. To assess the effectiveness of the improved SPH algorithm, a number of numerical examples, including the stretching of an initially circular water drop, dam-breaking flow against a vertical wall, the impact of viscous and viscoelastic fluid drops on a rigid wall, and the extrudate swell of a viscoelastic fluid, have been presented and compared with available numerical and experimental data in the literature. The convergence behavior of the improved SPH algorithm has also been studied using different numbers of particles. All numerical results demonstrate that the improved SPH algorithm proposed here is capable of modeling free surface flows of viscous and viscoelastic fluids accurately and stably and, even more importantly, of computing an accurate pressure field with little oscillation.

  10. Data Mining for New Two- and One-Dimensional Weakly Bonded Solids and Lattice-Commensurate Heterostructures.

    PubMed

    Cheon, Gowoon; Duerloo, Karel-Alexander N; Sendek, Austin D; Porter, Chase; Chen, Yuan; Reed, Evan J

    2017-03-08

    Layered materials held together by weak interactions, including van der Waals forces, such as graphite, have attracted interest for both technological applications and fundamental physics, in their layered form and as isolated single layers. Only a few dozen single-layer van der Waals solids have been the subject of considerable research focus, although there are likely to be many more that could have superior properties. To identify a broad spectrum of layered materials, we present a novel data mining algorithm that determines the dimensionality of weakly bonded subcomponents based on the atomic positions of bulk, three-dimensional crystal structures. By applying this algorithm to the Materials Project database of over 50,000 inorganic crystals, we identify 1173 two-dimensional layered materials and 487 materials that consist of weakly bonded one-dimensional molecular chains. This is an order-of-magnitude increase in the number of identified materials, most of which were not previously known as two- or one-dimensional materials. Moreover, we discover 98 weakly bonded heterostructures of two-dimensional and one-dimensional subcomponents within bulk materials, opening new possibilities for the much-studied assembly of van der Waals heterostructures. Chemical families, band gaps, and point groups for the materials identified in this work are presented. Point groups and piezoelectricity are also evaluated for the single-layer forms of the layered materials. Three hundred and twenty-five of these materials are expected to have piezoelectric monolayers with a variety of forms of the piezoelectric tensor. This work significantly extends the scope of potential low-dimensional weakly bonded solids to be investigated.

  11. Models of magnetic field generation in partly stable planetary cores: Applications to Mercury and Saturn

    NASA Astrophysics Data System (ADS)

    Christensen, Ulrich R.; Wicht, Johannes

    2008-07-01

    A substantial part of Mercury's iron core may be stably stratified because the temperature gradient is subadiabatic. A dynamo would operate only in a deep sublayer. We show that such a situation arises for a wide range of values for the heat flow and the sulfur content in the core. In Saturn, the upper part of the metallic hydrogen core could be stably stratified because of helium depletion. The magnetic field is unusually weak in the case of Mercury and unusually axisymmetric at Saturn. We study numerical dynamo models in rotating spherical shells with a stable outer region. The control parameters are chosen such that the magnetic Reynolds number is in the range of expected Mercury values. Because of its slow rotation, Mercury may be in a regime where the dipole contribution to the internal magnetic field is weak. Most of our models are in this regime, where the dynamo field consists mainly of rapidly varying higher multipole components. They can hardly pass the stable conducting layer because of the skin effect. The weak low-degree components vary more slowly and control the structure of the field outside the core, whose strength matches the observed field strength at Mercury. In some models the axial dipole dominates at the planet's surface and in others the axial quadrupole is dominant. Differential rotation in the stable layer, representing a thermal wind, is important for attenuating non-axisymmetric components in the exterior field. In some models that we relate to Saturn the axial dipole is intrinsically strong inside the dynamo. The surface field strength is much larger than in the other cases, but the stable layer eliminates non-axisymmetric modes. The MESSENGER and BepiColombo space missions can test our predictions that Mercury's field is large-scale, fairly axisymmetric, and shows no secular variations on the decadal time scale.

  12. Fireworks algorithm for mean-VaR/CVaR models

    NASA Astrophysics Data System (ADS)

    Zhang, Tingting; Liu, Zhifeng

    2017-10-01

    Intelligent algorithms have been widely applied to portfolio optimization problems. In this paper, we apply a novel intelligent algorithm, the fireworks algorithm, to solve the mean-VaR/CVaR model for the first time. The results show that, compared with the classical genetic algorithm, the fireworks algorithm not only improves the optimization accuracy and speed but also makes the optimal solution more stable. We repeat our experiments at different confidence levels and different degrees of risk aversion, and the results are robust. This suggests that the fireworks algorithm has advantages over the genetic algorithm in solving the portfolio optimization problem, and that applying it in this field is feasible and promising.

  13. Stereo vision with distance and gradient recognition

    NASA Astrophysics Data System (ADS)

    Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu

    2007-12-01

    Robot vision technology is needed for stable walking, object recognition, and movement to a target spot. With sensors based on infrared rays and ultrasound, a robot can cope with urgent or dangerous situations, but stereo vision of three-dimensional space would give a robot powerful artificial intelligence. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot confronts an inclined plane or steps, particular algorithms are needed for it to proceed without failure. This study developed a recognition algorithm for the distance and gradient of the environment using a stereo matching process.

  14. Discovery of a meta-stable Al–Sm phase with unknown stoichiometry using a genetic algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Feng; McBrearty, Ian; Ott, R T

    Unknown crystalline phases observed during the devitrification process of glassy metal alloys significantly limit our ability to understand and control phase selection in these systems driven far from equilibrium. Here, we report a new meta-stable Al5Sm phase identified by simultaneously searching Al-rich compositions of the Al-Sm system, using an efficient genetic algorithm. The excellent match between calculated and experimental X-ray diffraction patterns confirms that this new phase appeared in the crystallization of melt-spun Al90Sm10 alloys. Published by Elsevier Ltd. on behalf of Acta Materialia Inc.

  15. Stability of submarine slopes in the northern South China Sea: a numerical approach

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Luan, Xiwu

    2013-01-01

    Submarine landslides occur frequently on most continental margins. They are effective mechanisms of sediment transfer but also a geological hazard to seafloor installations. In this paper, submarine slope stability is evaluated using a 2D limit equilibrium method. The effects of slope angle, sediment properties, and triggering force on the factor of safety (FOS) were calculated for drained and undrained (Φ=0) cases. Results show that submarine slopes are stable when the slope is <16° under static conditions and without a weak interlayer. With a weak interlayer, slopes are stable at <18° in the drained case and at <9° in the undrained case. Earthquake loading can drastically reduce the shear strength of sediment through increased pore water pressure. Slopes become unstable at >13° with an earthquake peak ground acceleration (PGA) of 0.5 g; whereas with a weak layer, a PGA of 0.2 g could trigger instability at slopes >10°, and >3° for a PGA of 0.5 g. The northern slope of the South China Sea is geomorphologically stable under static conditions. However, because of the possibility of high PGA at the eastern margin of the South China Sea, submarine slides are likely on the Taiwan Bank slope and the eastern part of the Dongsha slope. Therefore, submarine slides recognized in seismic profiles on the Taiwan Bank slope would have been triggered by earthquakes, the most important triggering factor for submarine slides on the northern slope of the South China Sea. Considering the distribution of PGA, we consider the northern slope of the South China Sea to be stable, excluding the tectonically active Taiwan Bank slope.

  16. Stable Extraction of Threshold Voltage Using Transconductance Change Method for CMOS Modeling, Simulation and Characterization

    NASA Astrophysics Data System (ADS)

    Choi, Woo Young; Woo, Dong-Soo; Choi, Byung Yong; Lee, Jong Duk; Park, Byung-Gook

    2004-04-01

    We propose a stable extraction algorithm for the threshold voltage using the transconductance change method, based on optimizing the node interval. With the algorithm, noise-free gm2 (=dgm/dVGS) profiles can be extracted within one-percent error, which leads to a more physically meaningful threshold-voltage calculation by the transconductance change method. The extracted threshold voltage corresponds to the gate-to-source voltage at which the surface potential is within kT/q of φs=2φf+VSB. Our algorithm makes the transconductance change method more practical by overcoming its noise problem. The extraction algorithm yields the threshold roll-off behavior of nanoscale metal-oxide-semiconductor field-effect transistors (MOSFETs) accurately and makes it possible to calculate the surface potential φs at any other point on the drain-to-source current (IDS) versus gate-to-source voltage (VGS) curve. It will provide a useful analysis tool in the field of device modeling, simulation, and characterization.
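
    The method's central quantity is the second derivative gm2 = d(gm)/dVGS = d²ID/dVGS², and the practical difficulty is that a second numerical derivative amplifies measurement noise; hence the node interval matters. A minimal Python sketch, assuming a uniformly spaced ID-VGS sweep and using a widened central-difference step as a simple stand-in for the paper's optimized node interval:

      import numpy as np

      def gm2_profile(vgs, ids, k=5):
          """Second derivative of I_D w.r.t. V_GS by a central difference
          whose step spans k grid points; widening k (the node interval)
          trades voltage resolution for noise suppression."""
          h = k * (vgs[1] - vgs[0])            # assumes a uniform sweep
          gm2 = (ids[2*k:] - 2 * ids[k:-k] + ids[:-2*k]) / h**2
          return vgs[k:-k], gm2

      # Threshold voltage = gate bias at the gm2 peak:
      # v_mid, gm2 = gm2_profile(vgs, ids, k=8)
      # vth = v_mid[np.argmax(gm2)]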

  17. On the degree conjecture for separability of multipartite quantum states

    NASA Astrophysics Data System (ADS)

    Hassan, Ali Saif M.; Joag, Pramod S.

    2008-01-01

    We settle the so-called degree conjecture for the separability of multipartite quantum states, which are normalized graph Laplacians, first given by Braunstein et al. [Phys. Rev. A 73, 012320 (2006)]. The conjecture states that a multipartite quantum state is separable if and only if the degree matrix of the graph associated with the state is equal to the degree matrix of the partial transpose of this graph. We call this statement the strong form of the conjecture. In its weak version, the conjecture requires only the necessity, that is, if the state is separable, the corresponding degree matrices match. We prove the strong form of the conjecture for pure multipartite quantum states using the modified tensor product of graphs defined by Hassan and Joag [J. Phys. A 40, 10251 (2007)], as both a necessary and sufficient condition for separability. Based on this proof, we give a polynomial-time algorithm for completely factorizing any pure multipartite quantum state. By polynomial-time algorithm, we mean that the execution time of this algorithm increases as a polynomial in m, where m is the number of parts of the quantum system. We give a counterexample to show that the conjecture fails, in general, even in its weak form, for multipartite mixed states. Finally, we prove this conjecture, in its weak form, for a class of multipartite mixed states, giving only a necessary condition for separability.
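
    A minimal sketch of the weak-form (necessary-condition) test, under our own conventions: a bipartite state of local dimensions (m, n) is indexed by pairs (i, j), the partial transpose swaps the second-subsystem indices, and the degree matrix is recovered from the adjacency part of the normalized Laplacian. The function names and the NumPy encoding are illustrative, not the authors' code.

      import numpy as np

      def partial_transpose(rho, m, n):
          # Entry ((i,j),(k,l)) of rho^T_B equals entry ((i,l),(k,j)) of rho.
          return rho.reshape(m, n, m, n).transpose(0, 3, 2, 1).reshape(m * n, m * n)

      def degree_matrix(rho):
          # For a (normalized) Laplacian L = D - A: off-diagonals carry -A.
          adj = np.diag(np.diag(rho)) - rho     # adjacency part, zero diagonal
          return np.diag(adj.sum(axis=1))

      def weak_form_holds(rho, m, n):
          # Necessary condition for separability per the conjecture.
          return np.allclose(degree_matrix(rho),
                             degree_matrix(partial_transpose(rho, m, n)))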

  18. Identifying personal microbiomes using metagenomic codes

    PubMed Central

    Franzosa, Eric A.; Huang, Katherine; Meadow, James F.; Gevers, Dirk; Lemon, Katherine P.; Bohannan, Brendan J. M.; Huttenhower, Curtis

    2015-01-01

    Community composition within the human microbiome varies across individuals, but it remains unknown if this variation is sufficient to uniquely identify individuals within large populations or stable enough to identify them over time. We investigated this by developing a hitting set-based coding algorithm and applying it to the Human Microbiome Project population. Our approach defined body site-specific metagenomic codes: sets of microbial taxa or genes prioritized to uniquely and stably identify individuals. Codes capturing strain variation in clade-specific marker genes were able to distinguish among 100s of individuals at an initial sampling time point. In comparisons with follow-up samples collected 30–300 d later, ∼30% of individuals could still be uniquely pinpointed using metagenomic codes from a typical body site; coincidental (false positive) matches were rare. Codes based on the gut microbiome were exceptionally stable and pinpointed >80% of individuals. The failure of a code to match its owner at a later time point was largely explained by the loss of specific microbial strains (at current limits of detection) and was only weakly associated with the length of the sampling interval. In addition to highlighting patterns of temporal variation in the ecology of the human microbiome, this work demonstrates the feasibility of microbiome-based identifiability—a result with important ethical implications for microbiome study design. The datasets and code used in this work are available for download from huttenhower.sph.harvard.edu/idability. PMID:25964341
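
    The coding idea can be illustrated with a small greedy sketch: for a target individual, pick features the target carries, preferring features that are rarest among everyone else, until no other individual matches the whole set. This is a generic greedy stand-in for the paper's hitting-set construction, with made-up data structures and marker names.

      def build_code(target, others, max_len=10):
          """target: set of features carried by one individual;
          others: feature sets of everyone else."""
          code, alive = set(), list(others)
          while alive and (target - code) and len(code) < max_len:
              # Greedily pick the target feature carried by the fewest
              # remaining confounders (rarest = most discriminating).
              f = min(target - code, key=lambda f: sum(f in o for o in alive))
              code.add(f)
              alive = [o for o in alive if f in o]  # others still matching
          return code  # uniquely identifies the target iff alive is empty

      # Example with hypothetical marker names:
      # build_code({"t1", "t2", "t3"}, [{"t1", "t2"}, {"t2", "t3"}])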

  19. Performance Analysis of Combined Methods of Genetic Algorithm and K-Means Clustering in Determining the Value of Centroid

    NASA Astrophysics Data System (ADS)

    Adya Zizwan, Putra; Zarlis, Muhammad; Budhiarti Nababan, Erna

    2017-12-01

    The determination of the centroid in the K-Means algorithm directly affects the quality of the clustering results, and determining centroids using random numbers has many weaknesses. The GenClust algorithm, which combines Genetic Algorithms and K-Means, uses a genetic algorithm to determine the centroid of each cluster; it builds its population from 50% chromosomes obtained through deterministic calculations and 50% obtained from the generation of random numbers. This study modifies the GenClust algorithm so that the chromosomes used are 100% obtained through deterministic calculations. The results are performance comparisons, expressed as Mean Square Error, of centroid determination for the K-Means method using the original GenClust method, the modified GenClust method, and classic K-Means.

  20. Diagnosis and Treatment of Diseases of Tactical Importance to US CENTCOM Forces

    DTIC Science & Technology

    1990-01-01

    dysphagia. This is followed by a symmetrical, descending, progressive weakness of the extremities along with weakness of the respiratory muscles ...liver and spleen abscesses, anorexia 6. Pulmonary (15-25%): cough 7. Systemic (almost 100%): fever, night sweats, malaise, weakness, weight loss 8. Cutaneous...following algorithm provides an effective, efficient approach: Symptom: stools/day, fever, abdominal pain, weight loss, blood, pus, or mucus in stool

  1. Time and space analysis of turbulence of gravity surface waves

    NASA Astrophysics Data System (ADS)

    Mordant, Nicolas; Aubourg, Quentin; Viboud, Samuel; Sommeria, Joel

    2016-11-01

    Wave turbulence is a statistical state consisting of a very large number of nonlinearly interacting waves. The Weak Turbulence Theory was developed to describe such a situation in the weakly nonlinear regime. Although oceanic data tend to be compatible with the theory, laboratory data fail to fulfill the theoretical predictions. A space- and time-resolved measurement of the waves has proven especially fruitful for identifying the mechanisms at play in turbulence of gravity-capillary waves. We developed an image processing algorithm to measure the motion of the water surface with both space and time resolution. We first seed the surface with slightly buoyant polystyrene particles and use 3 cameras to reconstruct the surface. Our stereoscopic algorithm is coupled with PIV so as to obtain both the surface deformation and the velocity of the water surface. Such a coupling is shown to improve the sensitivity of the measurement by one order of magnitude. We use this technique to probe the existence of weakly nonlinear turbulence excited by two small wedge wavemakers in a 13-m diameter wave flume. We observe a truly weakly nonlinear regime of isotropic wave turbulence. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No 647018-WATU).

  2. Methods for reducing interference in the Complementary Learning Systems model: oscillating inhibition and autonomous memory rehearsal.

    PubMed

    Norman, Kenneth A; Newman, Ehren L; Perotte, Adler J

    2005-11-01

    The stability-plasticity problem (i.e., how the brain incorporates new information into its model of the world while at the same time preserving existing knowledge) has been at the forefront of computational memory research for several decades. In this paper, we critically evaluate how well the Complementary Learning Systems theory of hippocampo-cortical interactions addresses the stability-plasticity problem. We identify two major challenges for the model: finding a learning algorithm for cortex and hippocampus that enacts selective strengthening of weak memories and selective punishment of competing memories; and preventing catastrophic forgetting in the case of non-stationary environments (i.e., when items are temporarily removed from the training set). We then discuss potential solutions to these problems. First, we describe a recently developed learning algorithm that leverages neural oscillations to find weak parts of memories (so they can be strengthened) and strong competitors (so they can be punished), and we show how this algorithm outperforms other learning algorithms (CPCA Hebbian learning and Leabra) at memorizing overlapping patterns. Second, we describe how autonomous re-activation of memories (separately in cortex and hippocampus) during REM sleep, coupled with the oscillating learning algorithm, can reduce the rate of forgetting of input patterns that are no longer present in the environment. We then present a simple demonstration of how this process can prevent catastrophic interference in an AB-AC learning paradigm.

  3. Topology-Scaling Identification of Layered Solids and Stable Exfoliated 2D Materials.

    PubMed

    Ashton, Michael; Paul, Joshua; Sinnott, Susan B; Hennig, Richard G

    2017-03-10

    The Materials Project crystal structure database has been searched for materials possessing layered motifs in their crystal structures using a topology-scaling algorithm. The algorithm identifies and measures the sizes of bonded atomic clusters in a structure's unit cell, and determines their scaling with cell size. The search yielded 826 stable layered materials that are considered as candidates for the formation of two-dimensional monolayers via exfoliation. Density-functional theory was used to calculate the exfoliation energy of each material and 680 monolayers emerge with exfoliation energies below those of already-existent two-dimensional materials. The crystal structures of these two-dimensional materials provide templates for future theoretical searches of stable two-dimensional materials. The optimized structures and other calculated data for all 826 monolayers are provided at our database (https://materialsweb.org).
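
    The core test can be sketched generically: bond atoms within a distance cutoff, count the atoms in the largest connected cluster of an N x N x N supercell, and see how that count scales with N. Scaling as N^3 means a 3D-bonded network, N^2 a layer, N^1 a chain, and N^0 an isolated molecule. The sketch below assumes a plain distance cutoff in place of element-specific bond radii and a recent SciPy; it is an illustration of the idea, not the authors' implementation.

      import numpy as np
      from scipy.spatial import cKDTree
      from scipy.sparse import coo_matrix
      from scipy.sparse.csgraph import connected_components

      def largest_cluster(frac, cell, N, cutoff):
          """frac: fractional coords (n,3); cell: lattice vectors as rows (3,3).
          Atoms are bonded if closer than `cutoff` in an N x N x N supercell."""
          shifts = np.array([(i, j, k) for i in range(N)
                             for j in range(N) for k in range(N)])
          pos = (frac[None] + shifts[:, None]).reshape(-1, 3) @ cell
          pairs = cKDTree(pos).query_pairs(cutoff, output_type='ndarray')
          n = len(pos)
          adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                           shape=(n, n))
          _, labels = connected_components(adj, directed=False)
          return np.bincount(labels).max()

      # Dimensionality from the growth of the largest cluster:
      # d = log(largest_cluster(f, c, 2, rc) / largest_cluster(f, c, 1, rc)) / log(2)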

  4. Stable Kalman filters for processing clock measurement data

    NASA Technical Reports Server (NTRS)

    Clements, P. A.; Gibbs, B. P.; Vandergraft, J. S.

    1989-01-01

    Kalman filters have been used for some time to process clock measurement data. Due to instabilities in the standard Kalman filter algorithms, the results have been unreliable and difficult to obtain. During the past several years, stable forms of the Kalman filter have been developed, implemented, and used in many diverse applications. These algorithms, while algebraically equivalent to the standard Kalman filter, exhibit excellent numerical properties. Two of these stable algorithms, the Upper triangular-Diagonal (UD) filter and the Square Root Information Filter (SRIF), have been implemented to replace the standard Kalman filter used to process data from the Deep Space Network (DSN) hydrogen maser clocks. The data are time offsets between the clocks in the DSN, the timescale at the National Institute of Standards and Technology (NIST), and two geographically intermediate clocks. The measurements are made by using the GPS navigation satellites in mutual view between clocks. The filter programs allow the user to easily modify the clock models, the GPS satellite dependent biases, and the random noise levels in order to compare different modeling assumptions. The results of this study show the usefulness of such software for processing clock data. The UD filter is indeed a stable, efficient, and flexible method for obtaining optimal estimates of clock offsets, offset rates, and drift rates. A brief overview of the UD filter is also given.
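
    The numerical advantage of the UD filter comes from propagating the covariance P in factored form P = U D U^T (U unit upper triangular, D diagonal), which keeps P symmetric and nonnegative by construction. A minimal sketch of the factorization step alone; the full Bierman/Thornton measurement and time updates are longer and are not reproduced here.

      import numpy as np

      def ud_factor(P):
          """Factor symmetric positive semidefinite P as U @ diag(d) @ U.T,
          with U unit upper triangular."""
          P = P.astype(float).copy()
          n = P.shape[0]
          U, d = np.eye(n), np.zeros(n)
          for j in range(n - 1, -1, -1):
              d[j] = P[j, j]
              if d[j] > 0:
                  U[:j, j] = P[:j, j] / d[j]
              # Deflate the leading (j x j) block.
              for k in range(j):
                  P[:k + 1, k] -= U[:k + 1, j] * d[j] * U[k, j]
          return U, d

      # U @ np.diag(d) @ U.T reproduces P up to round-off.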

  5. Design of weak link channel-cut crystals for fast QEXAFS monochromators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polheim, O. von, E-mail: vonpolheim@uni-wuppertal.de; Müller, O.; Lützenkirchen-Hecht, D.

    2016-07-27

    A weak-link channel-cut crystal, optimized for dedicated Quick-EXAFS monochromators and measurements, was designed using finite element analysis. This channel-cut crystal offers precise detuning capabilities to enable suppression of higher harmonics in the virtually monochromatic beam. It was optimized to keep the detuning stable, withstanding the mechanical load that occurs during oscillations at up to 50 Hz. First tests at DELTA (Dortmund, Germany) validated the design.

  6. Geology of a Stable Intraplate Region: The Cape Verde/Canary Basin,

    DTIC Science & Technology

    1982-03-01

    reflection records indicate a possible Eocene age uplifting. Extensive island volcanism and sill and dike emplacement occurred during the Miocene. Many abyssal...hills and small scale faults are related to this Miocene tectonic phase. Island volcanism has a continuing influence on the sedimentary sections. The...Plate is capable of generating zones of weaknesses. These weakness zones could be expected to localize island volcanism, create north/south-trending

  7. Renyi entanglement entropy of interacting fermions calculated using the continuous-time quantum Monte Carlo method.

    PubMed

    Wang, Lei; Troyer, Matthias

    2014-09-12

    We present a new algorithm for calculating the Renyi entanglement entropy of interacting fermions using the continuous-time quantum Monte Carlo method. The algorithm only samples the interaction correction of the entanglement entropy, which by design ensures the efficient calculation of weakly interacting systems. Combined with Monte Carlo reweighting, the algorithm also performs well for systems with strong interactions. We demonstrate the potential of this method by studying the quantum entanglement signatures of the charge-density-wave transition of interacting fermions on a square lattice.
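
    For reference, the quantity being sampled is the standard second Rényi entanglement entropy of a subregion A (a textbook definition, not specific to this paper):

      S_2(A) = -\ln \mathrm{Tr}\,\rho_A^2, \qquad \rho_A = \mathrm{Tr}_{\bar{A}}\,\rho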

  8. Generation of large-scale intrusions at baroclinic fronts: an analytical consideration with a reference to the Arctic Ocean

    NASA Astrophysics Data System (ADS)

    Kuzmina, Natalia

    2016-12-01

    Analytical solutions are found for the problem of instability of a weak geostrophic flow with linear velocity shear accounting for vertical diffusion of buoyancy. The analysis is based on the potential-vorticity equation in a long-wave approximation when the horizontal scale of disturbances is considered much larger than the local baroclinic Rossby radius. It is hypothesized that the solutions found can be applied to describe stable and unstable disturbances of the planetary scale with respect, in particular, to the Arctic Ocean, where weak baroclinic fronts with typical temporal variability periods on the order of several years or more have been observed and the β effect is negligible. Stable (decaying with time) solutions describe disturbances that, in contrast to the Rossby waves, can propagate to both the west and east, depending on the sign of the linear shear of geostrophic velocity. The unstable (growing with time) solutions are applied to explain the formation of large-scale intrusions at baroclinic fronts under the stable-stable thermohaline stratification observed in the upper layer of the Polar Deep Water in the Eurasian Basin. The suggested mechanism of formation of intrusions can be considered a possible alternative to the mechanism of interleaving at the baroclinic fronts due to the differential mixing.

  9. Cloud Computing Security Model with Combination of Data Encryption Standard Algorithm (DES) and Least Significant Bit (LSB)

    NASA Astrophysics Data System (ADS)

    Basri, M.; Mawengkang, H.; Zamzami, E. M.

    2018-03-01

    Limited storage resources are one reason to switch to cloud storage. The confidentiality and security of data stored in the cloud are very important. One way to maintain the confidentiality and security of such data is to use cryptographic techniques. The Data Encryption Standard (DES) is a block cipher that has served as a standard symmetric encryption algorithm. DES produces eight cipher blocks that are combined into one ciphertext, but this ciphertext is weak against brute-force attacks. Therefore, the eight cipher blocks are additionally hidden in eight random images using the Least Significant Bit (LSB) algorithm, from which the DES cipher result is later extracted and merged back into one.
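
    A minimal sketch of the LSB-embedding half of the scheme, assuming 8-bit grayscale images held as NumPy arrays; the pairing with DES blocks is schematic, and the DES step itself would come from a cryptography library.

      import numpy as np

      def lsb_embed(img, payload):
          """Hide `payload` bytes in the least significant bits of img."""
          bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
          flat = img.flatten()                      # copy of the cover image
          assert bits.size <= flat.size, "cover image too small"
          flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
          return flat.reshape(img.shape)

      def lsb_extract(img, n_bytes):
          bits = (img.flatten()[:8 * n_bytes] & 1).astype(np.uint8)
          return np.packbits(bits).tobytes()

      # cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
      # stego = lsb_embed(cover, b"one DES cipher block")
      # assert lsb_extract(stego, 20) == b"one DES cipher block"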

  10. A provisional effective evaluation when errors are present in independent variables

    NASA Technical Reports Server (NTRS)

    Gurin, L. S.

    1983-01-01

    Algorithms are examined for evaluating the parameters of a regression model when there are errors in the independent variables. The algorithms are fast and the estimates they yield are stable with respect to the correlation of errors and measurements of both the dependent variable and the independent variables.
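
    The report's own algorithms are not reproduced in this record, but the textbook remedy for noisy regressors, total least squares, makes a convenient stand-in illustration of errors-in-variables estimation. A minimal SVD-based sketch:

      import numpy as np

      def tls(X, y):
          """Total least squares: allow perturbations to both X and y."""
          Z = np.column_stack([X, y])
          _, _, Vt = np.linalg.svd(Z, full_matrices=False)
          v = Vt[-1]                    # right singular vector of smallest sigma
          return -v[:-1] / v[-1]        # beta with (X+dX) @ beta = y + dy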

  11. Biofriendly bonding processes for nanoporous implantable SU-8 microcapsules for encapsulated cell therapy.

    PubMed

    Nemani, Krishnamurthy; Kwon, Joonbum; Trivedi, Krutarth; Hu, Walter; Lee, Jeong-Bong; Gimi, Barjor

    2011-01-01

    Mechanically robust, cell encapsulating microdevices fabricated using photolithographic methods can lead to more efficient immunoisolation in comparison to cell encapsulating hydrogels. There is a need to develop adhesive bonding methods which can seal such microdevices under physiologically friendly conditions. We report the bonding of SU-8 based substrates through (i) magnetic self assembly, (ii) using medical grade photocured adhesive and (iii) moisture and photochemical cured polymerization. Magnetic self-assembly, carried out in biofriendly aqueous buffers, provides weak bonding not suitable for long term applications. Moisture cured bonding of covalently modified SU-8 substrates, based on silanol condensation, resulted in weak and inconsistent bonding. Photocured bonding using a medical grade adhesive and of acrylate modified substrates provided stable bonding. Of the methods evaluated, photocured adhesion provided the strongest and most stable adhesion.

  12. Stable Algorithm For Estimating Airdata From Flush Surface Pressure Measurements

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen, A. (Inventor); Cobleigh, Brent R. (Inventor); Haering, Edward A., Jr. (Inventor)

    2001-01-01

    An airdata estimation and evaluation system and method, including a stable algorithm for estimating airdata from nonintrusive surface pressure measurements. The airdata estimation and evaluation system is preferably implemented in a flush airdata sensing (FADS) system. The system and method of the present invention take a flow model equation and transform it into a triples formulation equation. The triples formulation equation eliminates the pressure related states from the flow model equation by strategically taking the differences of three surface pressures, known as triples. This triples formulation equation is then used to accurately estimate and compute vital airdata from nonintrusive surface pressure measurements.
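
    The triples idea can be made concrete with the commonly cited FADS calibration model (our rendering; the record above does not spell out the equations). Each port pressure depends on the local flow incidence angle theta_i, the impact pressure q_c, the static pressure p_inf, and a calibration shape parameter epsilon:

      p_i = q_c\left(\cos^2\theta_i + \varepsilon\,\sin^2\theta_i\right) + p_\infty

    Differencing two ports cancels p_infty, and the ratio of two such differences cancels q_c as well, leaving a relation in the flow angles alone: this is the "triple" of ports i, j, k:

      \frac{p_i - p_k}{p_j - p_k} =
      \frac{\cos^2\theta_i - \cos^2\theta_k + \varepsilon(\sin^2\theta_i - \sin^2\theta_k)}
           {\cos^2\theta_j - \cos^2\theta_k + \varepsilon(\sin^2\theta_j - \sin^2\theta_k)}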

  13. On the Complexity of the Metric TSP under Stability Considerations

    NASA Astrophysics Data System (ADS)

    Mihalák, Matúš; Schöngens, Marcel; Šrámek, Rastislav; Widmayer, Peter

    We consider the metric Traveling Salesman Problem (Δ-TSP for short) and study how stability (as defined by Bilu and Linial [3]) influences the complexity of the problem. On an intuitive level, an instance of Δ-TSP is γ-stable (γ > 1) if there is a unique optimum Hamiltonian tour and any perturbation of the edge weights by a factor of at most γ does not change the edge set of the optimal solution (i.e., there is a significant gap between the optimum tour and all other tours). We show that for γ ≥ 1.8 a simple greedy algorithm (resembling Prim's algorithm for constructing a minimum spanning tree) computes the optimum Hamiltonian tour for every γ-stable instance of the Δ-TSP, whereas a simple local search algorithm can fail to find the optimum even if γ is arbitrary. We further show that there are γ-stable instances of Δ-TSP for every 1 < γ < 2. These results provide a different view on the hardness of the Δ-TSP and give rise to a new class of problem instances which are substantially easier to solve than instances of the general Δ-TSP.

  14. Myotonic dystrophy mimicking postpolio syndrome in a polio survivor.

    PubMed

    Lim, Jae-Young; Kim, Kyoung-Eun; Choe, Gheeyoung

    2009-02-01

    We describe a 38-yr-old polio survivor with newly developed weakness from myotonic dystrophy. He suffered muscle atrophy and weakness in his legs as a result of poliomyelitis at the age of 3 yrs. After a stable interval of about 30 yrs, he felt new weakness and fatigue in his legs. Electromyography revealed generalized myotonic discharges, early recruitment, and findings of chronic denervation in his left leg. Genetic testing was consistent with myotonic dystrophy type 1. A biopsy from the right gastrocnemius revealed findings of both myotonic dystrophy and chronic denervation. This case report shows the importance of considering other uncommon conditions in the differential diagnoses of postpolio syndrome.

  15. Empirical study of parallel LRU simulation algorithms

    NASA Technical Reports Server (NTRS)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. The other two algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second is completely general, whereas the third presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
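
    The quantity all of these algorithms compute, the LRU stack distance, has a simple serial definition: the number of distinct addresses referenced since the last touch of the current address. A reference hits in a cache of capacity C exactly when its stack distance is less than C, so one pass yields hit ratios for every cache size at once. A minimal serial sketch (O(N*M), unlike the efficient variants studied in the paper):

      def stack_distances(trace):
          stack, dists = [], []
          for addr in trace:
              if addr in stack:
                  d = stack.index(addr)   # distinct refs since last touch
                  stack.remove(addr)
              else:
                  d = float('inf')        # cold miss
              dists.append(d)
              stack.insert(0, addr)       # move to top (most recent)
          return dists

      # stack_distances(['a','b','a','c','b'])  ->  [inf, inf, 1, inf, 2]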

  16. A hybrid artificial bee colony algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Alqattan, Zakaria N.; Abdullah, Rosni

    2015-02-01

    The Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it competitive with other search algorithms in the area of optimization, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, ABC's local search process and its bee movement (solution improvement) equation still have some weaknesses. ABC is good at avoiding trapping in local optima, but it spends its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a Hybrid Particle-movement ABC algorithm, called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to test the HPABC algorithm experimentally. The results illustrate that HPABC can outperform ABC in most of the experiments (75% better in accuracy and over 3 times faster).

  17. Validating Actigraphy as a Measure of Sleep for Preschool Children

    PubMed Central

    Bélanger, Marie-Ève; Bernier, Annie; Paquet, Jean; Simard, Valérie; Carrier, Julie

    2013-01-01

    Study Objectives: The algorithms used to derive sleep variables from actigraphy were developed with adults. Because children change position during sleep more often than adults, algorithms may detect wakefulness when the child is actually sleeping (false negatives). This study compares the validity of three algorithms for detecting sleep with actigraphy by comparing them to PSG in preschoolers. The putative influence of device location (wrist or ankle) is also examined. Methods: Twelve children aged 2 to 5 years simultaneously wore an actigraph on an ankle and a wrist (Actiwatch-L, Mini-Mitter/Respironics) during a night of PSG recording at home. Three algorithms were tested: one recommended for adults and two designed to decrease false-negative detection of sleep in children. Results: Actigraphy generally showed good sensitivity (> 95%; PSG sleep detection) but low specificity (± 50%; PSG wake detection). Intraclass correlations between PSG and actigraphy variables were strong (> 0.80) for sleep latency, sleep duration, and sleep efficiency, but weak for number of awakenings (< 0.40). The two algorithms designed for children enhanced the validity of actigraphy in preschoolers and increased the proportion of actigraphy-scored wake epochs that were also identified as wake by PSG. Sleep variables derived from the ankle and wrist were not statistically different. Conclusion: Despite the weak detection of wakefulness, the Actiwatch-L appears to be a useful instrument for assessing sleep in preschoolers when used with an adapted algorithm. Citation: Bélanger M; Bernier A; Paquet J; Simard V; Carrier J. Validating actigraphy as a measure of sleep for pre-school children. J Clin Sleep Med 2013;9(7):701-706. PMID:23853565

  18. Single image super resolution algorithm based on edge interpolation in NSCT domain

    NASA Astrophysics Data System (ADS)

    Zhang, Mengqun; Zhang, Wei; He, Xinyu

    2017-11-01

    In order to preserve texture and edge information and to improve the spatial resolution of a single frame, a super-resolution algorithm based on the nonsubsampled contourlet transform (NSCT) is proposed. The original low-resolution image is transformed by NSCT to obtain the directional sub-band coefficients of the transform domain. According to the scale factor, the high-frequency sub-band coefficients are amplified to the desired resolution by an interpolation method based on the edge direction. For high-frequency sub-band coefficients containing noise and weak targets, Bayesian shrinkage is used to calculate the threshold value; whether a coefficient below the threshold is noise is determined from the correlation among sub-bands of the same scale, and the noise is removed. An anisotropic diffusion filter is used to effectively enhance weak targets in low-contrast regions of target and background. Finally, the low-frequency sub-band is amplified to the desired resolution by bilinear interpolation and combined with the high-frequency sub-band coefficients after de-noising and small-target enhancement, and the inverse NSCT is applied to obtain the image at the desired resolution. To verify the effectiveness of the proposed algorithm, it was tested against several common image reconstruction methods on synthetic, motion-blurred, and hyperspectral images. The experimental results show that, compared with traditional single-frame algorithms, the proposed algorithm obtains smooth edges and good texture features; the reconstructed image structure is well preserved and noise is suppressed to some extent.

  19. Improved method for peak picking in matrix-assisted laser desorption/ionization time-of-flight mass spectrometry.

    PubMed

    Kempka, Martin; Sjödahl, Johan; Björk, Anders; Roeraade, Johan

    2004-01-01

    A method for peak picking for matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOFMS) is described. The method is based on the assumption that two sets of ions are formed during the ionization stage, which have Gaussian distributions but different velocity profiles. This gives rise to a certain degree of peak skewness. Our algorithm deconvolutes the peak and utilizes the fast velocity, bulk ion distribution for peak picking. Evaluation of the performance of the new method was conducted using peptide peaks from a bovine serum albumin (BSA) digest, and compared with the commercial peak-picking algorithms Centroid and SNAP. When using the new two-Gaussian algorithm, for strong signals the mass accuracy was equal to or marginally better than the results obtained from the commercial algorithms. However, for weak, distorted peaks, considerable improvement in both mass accuracy and precision was obtained. This improvement should be particularly useful in proteomics, where a lack of signal strength is often encountered when dealing with weakly expressed proteins. Finally, since the new peak-picking method uses information from the entire signal, no adjustments of parameters related to peak height have to be made, which simplifies its practical use. Copyright 2004 John Wiley & Sons, Ltd.
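
    A generic illustration of the underlying idea: model a skewed peak as the sum of two Gaussians, fit by nonlinear least squares, and report the centroid of the dominant (fast, bulk-ion) component as the pick. This is a stand-in sketch using SciPy with made-up initial guesses, not the commercial or published implementation.

      import numpy as np
      from scipy.optimize import curve_fit

      def two_gauss(x, a1, m1, s1, a2, m2, s2):
          return (a1 * np.exp(-0.5 * ((x - m1) / s1) ** 2) +
                  a2 * np.exp(-0.5 * ((x - m2) / s2) ** 2))

      def pick_peak(x, y):
          s0 = (x[-1] - x[0]) / 10.0           # crude initial width
          m0 = x[np.argmax(y)]
          p0 = [y.max(), m0, s0, 0.3 * y.max(), m0 + s0, s0]
          p, _ = curve_fit(two_gauss, x, y, p0=p0)
          a1, m1, _, a2, m2, _ = p
          return m1 if abs(a1) >= abs(a2) else m2   # dominant component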

  20. FEAST: sensitive local alignment with multiple rates of evolution.

    PubMed

    Hudek, Alexander K; Brown, Daniel G

    2011-01-01

    We present a pairwise local aligner, FEAST, which uses two new techniques: a sensitive extension algorithm for identifying homologous subsequences, and a descriptive probabilistic alignment model. We also present a new procedure for training alignment parameters and apply it to the human and mouse genomes, producing a better parameter set for these sequences. Our extension algorithm identifies homologous subsequences by considering all evolutionary histories. It has higher maximum sensitivity than Viterbi extensions, and better balances specificity. We model alignments with several submodels, each with unique statistical properties, describing strongly similar and weakly similar regions of homologous DNA. Training parameters using two submodels produces superior alignments, even when we align with only the parameters from the weaker submodel. Our extension algorithm combined with our new parameter set achieves sensitivity 0.59 on synthetic tests. In contrast, LASTZ with default settings achieves sensitivity 0.35 with the same false positive rate. Using the weak submodel as parameters for LASTZ increases its sensitivity to 0.59 with high error. FEAST is available at http://monod.uwaterloo.ca/feast/.

  1. Marginal Stability of Ion-Acoustic Waves in a Weakly Collisional Two-Temperature Plasma without a Current.

    DTIC Science & Technology

    1987-08-06

    The linearized Balescu-Lenard-Poisson equations are solved in the weakly...free plasma is unresolved. The purpose of this report is to present a resolution based upon the Balescu-Lenard-Poisson equations. The Balescu-Lenard...acoustic waves become marginally stable. Our results are based on the closed-form solution for the dielectric function for the linearized Balescu-Lenard

  2. Wavelet threshold method of resolving noise interference in periodic short-impulse signals chaotic detection

    NASA Astrophysics Data System (ADS)

    Deng, Ke; Zhang, Lu; Luo, Mao-Kang

    2010-03-01

    The chaotic oscillator has been considered a powerful method to detect weak signals, even weak signals accompanied by noise. However, many examples, analyses, and simulations indicate that a chaotic-oscillator detection system cannot guarantee immunity to noise (even white noise). In fact, the randomness of noise has a serious or even destructive effect on the detection results in many cases. To solve this problem, we present a new detection method based on wavelet threshold processing that can detect a chaotic weak signal accompanied by noise. Theoretical analyses and simulation experiments indicate that the new method significantly reduces the noise interference, thereby making the corresponding chaotic oscillator that detects weak signals accompanied by noise more stable and reliable.
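
    A minimal sketch of the wavelet-threshold preprocessing idea using PyWavelets: estimate the noise scale from the finest detail coefficients, soft-threshold all detail bands, and reconstruct. The wavelet choice, level, and universal threshold are our assumptions, not the paper's exact scheme.

      import numpy as np
      import pywt

      def wavelet_denoise(sig, wavelet='db4', level=4):
          coeffs = pywt.wavedec(sig, wavelet, level=level)
          # Noise scale from the median absolute finest-detail coefficient.
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745
          t = sigma * np.sqrt(2 * np.log(len(sig)))   # universal threshold
          coeffs[1:] = [pywt.threshold(c, t, mode='soft') for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)[:len(sig)]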

  3. Weak Galerkin method for the Biot’s consolidation model

    DOE PAGES

    Hu, Xiaozhe; Mu, Lin; Ye, Xiu

    2017-08-23

    In this study, we develop a weak Galerkin (WG) finite element method for the Biot's consolidation model in the classical displacement–pressure two-field formulation. Weak Galerkin linear finite elements are used for both displacement and pressure approximations in spatial discretizations. The backward Euler scheme is used for temporal discretization in order to obtain an implicit fully discretized scheme. We study the well-posedness of the linear system at each time step and also derive the overall optimal-order convergence of the WG formulation. Such a WG scheme is designed on general shape-regular polytopal meshes and provides stable and oscillation-free approximation for the pressure without special treatment. Lastly, numerical experiments are presented to demonstrate the efficiency and accuracy of the proposed weak Galerkin finite element method.
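
    For reference, the displacement-pressure two-field Biot system being discretized takes the standard quasi-static form (our notation, not copied from the paper):

      -\nabla\cdot\bigl(2\mu\,\varepsilon(u) + \lambda(\nabla\cdot u)\,I - \alpha\,p\,I\bigr) = f,
      \qquad
      \partial_t\bigl(c_0\,p + \alpha\,\nabla\cdot u\bigr) - \nabla\cdot(\kappa\,\nabla p) = g,

    where u is the displacement, p the pore pressure, and the WG method approximates both fields with weak Galerkin linear elements while marching in time with backward Euler.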

  4. Weak Galerkin method for the Biot’s consolidation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Xiaozhe; Mu, Lin; Ye, Xiu

    In this study, we develop a weak Galerkin (WG) finite element method for the Biot's consolidation model in the classical displacement–pressure two-field formulation. Weak Galerkin linear finite elements are used for both displacement and pressure approximations in spatial discretizations. The backward Euler scheme is used for temporal discretization in order to obtain an implicit fully discretized scheme. We study the well-posedness of the linear system at each time step and also derive the overall optimal-order convergence of the WG formulation. Such a WG scheme is designed on general shape-regular polytopal meshes and provides stable and oscillation-free approximation for the pressure without special treatment. Lastly, numerical experiments are presented to demonstrate the efficiency and accuracy of the proposed weak Galerkin finite element method.

  5. A probability tracking approach to segmentation of ultrasound prostate images using weak shape priors

    NASA Astrophysics Data System (ADS)

    Xu, Robert S.; Michailovich, Oleg V.; Solovey, Igor; Salama, Magdy M. A.

    2010-03-01

    Prostate specific antigen density is an established parameter for indicating the likelihood of prostate cancer. To this end, the size and volume of the gland have become pivotal quantities used by clinicians during the standard cancer screening process. As an alternative to manual palpation, an increasing number of volume estimation methods are based on imagery data of the prostate. The necessity to process large volumes of such data requires automatic segmentation algorithms, which can accurately and reliably identify the true prostate region. In particular, transrectal ultrasound (TRUS) imaging has become a standard means of assessing the prostate due to its safe nature and high benefit-to-cost ratio. Unfortunately, modern TRUS images are still plagued by many ultrasound imaging artifacts such as speckle noise and shadowing, which result in relatively low contrast and reduced SNR of the acquired images. Consequently, many modern segmentation methods incorporate prior knowledge about the prostate geometry to enhance traditional segmentation techniques. In this paper, a novel approach to the problem of TRUS segmentation, particularly the definition of the prostate shape prior, is presented. The proposed approach is based on the concept of distribution tracking, which provides a unified framework for tracking both photometric and morphological features of the prostate. In particular, the tracking of morphological features defines a novel type of "weak" shape prior. The latter acts as a regularization force, which minimally biases the segmentation procedure while rendering the final estimate stable and robust. The value of the proposed methodology is demonstrated in a series of experiments.

  6. The weak lensing analysis of the CFHTLS and NGVS RedGOLD galaxy clusters

    NASA Astrophysics Data System (ADS)

    Parroni, C.; Mei, S.; Erben, T.; Van Waerbeke, L.; Raichoor, A.; Ford, J.; Licitra, R.; Meneghetti, M.; Hildebrandt, H.; Miller, L.; Côté, P.; Covone, G.; Cuillandre, J.-C.; Duc, P.-A.; Ferrarese, L.; Gwyn, S. D. J.; Puzia, T. H.

    2017-12-01

    An accurate estimation of galaxy cluster masses is essential for their use in cosmological and astrophysical studies. We studied the accuracy of the optical richness obtained by our RedGOLD cluster detection algorithm (Licitra et al. 2016a, 2016b) as a mass proxy, using weak lensing and X-ray mass measurements. We measured stacked weak lensing cluster masses for a sample of 1323 galaxy clusters in the Canada-France-Hawaii Telescope Legacy Survey W1 and the Next Generation Virgo Cluster Survey at 0.2

  7. Stochastic resonance algorithm applied to quantitative analysis for weak chromatographic signals of alkyl halides and alkyl benzenes in water samples.

    PubMed

    Xiang, Suyun; Wang, Wei; Xia, Jia; Xiang, Bingren; Ouyang, Pingkai

    2009-09-01

    The stochastic resonance algorithm is applied to the trace analysis of alkyl halides and alkyl benzenes in water samples. Compared with the single-signal case, the optimization of system parameters for a multicomponent signal is more complex. In this article, the resolution of adjacent chromatographic peaks is incorporated into the parameter optimization for the first time. With the optimized parameters, the algorithm gave an ideal output with good resolution as well as an enhanced signal-to-noise ratio. Applying the enhanced signals, the method improved the limit of detection and exhibited good linearity, which ensures accurate determination of the multiple components.
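
    The physical core of stochastic resonance detection is easy to sketch: pass the noisy trace through an overdamped bistable system, where noise of the right strength helps a weak periodic component flip the state, amplifying its spectral peak. A generic Python sketch with illustrative parameter values; the paper's optimization of the system parameters and the peak-resolution constraint is not reproduced.

      import numpy as np

      def bistable_sr(drive, a=1.0, b=1.0, dt=1e-3):
          """Overdamped bistable system x' = a*x - b*x**3 + drive(t)."""
          x, out = 0.0, np.empty(len(drive))
          for i, s in enumerate(drive):
              x += dt * (a * x - b * x**3 + s)
              out[i] = x
          return out

      # t = np.arange(0, 100, 1e-3)
      # weak = 0.2 * np.sin(2 * np.pi * 0.1 * t)        # sub-threshold signal
      # noisy = weak + 0.8 * np.random.randn(t.size) / np.sqrt(1e-3)
      # response = bistable_sr(noisy)   # spectrum of `response` peaks at 0.1 Hz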

  8. Nonconvex Sparse Logistic Regression With Weakly Convex Regularization

    NASA Astrophysics Data System (ADS)

    Shen, Xinyue; Gu, Yuantao

    2018-06-01

    In this work we propose to fit a sparse logistic regression model by a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function as an approximation of the $\ell_0$ pseudo norm is able to better induce sparsity than the commonly used $\ell_1$ norm. For a class of weakly convex sparsity inducing functions, we prove the nonconvexity of the corresponding sparse logistic regression problem, and study its local optimality conditions and the choice of the regularization parameter to exclude trivial solutions. Despite the nonconvexity, a method based on proximal gradient descent is used to solve the general weakly convex sparse logistic regression, and its convergence behavior is studied theoretically. Then the general framework is applied to a specific weakly convex function, and a necessary and sufficient local optimality condition is provided. The solution method is instantiated in this case as an iterative firm-shrinkage algorithm, and its effectiveness is demonstrated in numerical experiments by both randomly generated and real datasets.
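
    The firm-shrinkage operator applied in each iteration interpolates between soft and hard thresholding: it zeroes small entries, passes large ones unchanged, and shrinks linearly in between. The sketch below uses the standard form of firm thresholding; the paper's exact parameterization may differ.

      import numpy as np

      def firm_shrink(x, lo, hi):
          """Firm thresholding: 0 below lo, identity above hi, linear between."""
          mid = np.sign(x) * hi * (np.abs(x) - lo) / (hi - lo)
          y = np.where(np.abs(x) <= lo, 0.0, mid)
          return np.where(np.abs(x) > hi, x, y)

      # Iterative firm-shrinkage step for sparse logistic regression (schematic):
      # w = firm_shrink(w - step * grad_logistic_loss(w, X, y), lo, hi)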

  9. On the degree conjecture for separability of multipartite quantum states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hassan, Ali Saif M.; Joag, Pramod S.

    2008-01-15

    We settle the so-called degree conjecture for the separability of multipartite quantum states, which are normalized graph Laplacians, first given by Braunstein et al. [Phys. Rev. A 73, 012320 (2006)]. The conjecture states that a multipartite quantum state is separable if and only if the degree matrix of the graph associated with the state is equal to the degree matrix of the partial transpose of this graph. We call this statement the strong form of the conjecture. In its weak version, the conjecture requires only the necessity, that is, if the state is separable, the corresponding degree matrices match. We prove the strong form of the conjecture for pure multipartite quantum states using the modified tensor product of graphs defined by Hassan and Joag [J. Phys. A 40, 10251 (2007)], as both a necessary and sufficient condition for separability. Based on this proof, we give a polynomial-time algorithm for completely factorizing any pure multipartite quantum state. By polynomial-time algorithm, we mean that the execution time of this algorithm increases as a polynomial in m, where m is the number of parts of the quantum system. We give a counterexample to show that the conjecture fails, in general, even in its weak form, for multipartite mixed states. Finally, we prove this conjecture, in its weak form, for a class of multipartite mixed states, giving only a necessary condition for separability.

  10. [Clinical applications of a dosing algorithm in the prediction of warfarin maintenance dose].

    PubMed

    Huang, Sheng-wen; Xiang, Dao-kang; An, Bang-quan; Li, Gui-fang; Huang, Ling; Wu, Hai-li

    2011-12-27

    To evaluate the feasibility of clinically applying a genetics-based dosing algorithm to predict the warfarin maintenance dose in a Chinese population, clinical data were collected and blood samples harvested from a total of 126 patients undergoing heart valve replacement. The genotypes of VKORC1 and CYP2C9 were determined by melting curve analysis after PCR. Patients were divided randomly into study and control groups. In the study group, the first three doses of warfarin were prescribed according to the predicted warfarin maintenance dose, while warfarin was initiated at 2.5 mg/d in the control group. The warfarin doses were then adjusted according to the measured international normalized ratio (INR) values, and all subjects were followed for 50 days after the initiation of warfarin therapy. At the end of the 50-day follow-up period, the proportions of patients on a stable dose were 82.4% (42/51) and 62.5% (30/48) in the study and control groups, respectively. The mean durations to reach a stable dose of warfarin were (27.5 ± 1.8) and (34.7 ± 1.8) days, and the median durations were (24.0 ± 1.7) and (33.0 ± 4.5) days, in the study and control groups, respectively. A significant difference existed in the duration to reach a stable dose between the two groups (P = 0.012). Compared with the control group, the hazard ratio (HR) for reaching a stable dose was 1.786 in the study group (95%CI 1.088 - 2.875, P = 0.026). The predictive dosing algorithm incorporating genetic and non-genetic factors may shorten the duration needed to achieve a stable dose of warfarin, and the present study validates the feasibility of its clinical application.

  11. Validating clustering of molecular dynamics simulations using polymer models.

    PubMed

    Phillips, Joshua L; Colvin, Michael E; Newsam, Shawn

    2011-11-14

    Molecular dynamics (MD) simulation is a powerful technique for sampling the meta-stable and transitional conformations of proteins and other biomolecules. Computational data clustering has emerged as a useful, automated technique for extracting conformational states from MD simulation data. Despite extensive application, relatively little work has been done to determine if the clustering algorithms are actually extracting useful information. A primary goal of this paper therefore is to provide such an understanding through a detailed analysis of data clustering applied to a series of increasingly complex biopolymer models. We develop a novel series of models using basic polymer theory that have intuitive, clearly-defined dynamics and exhibit the essential properties that we are seeking to identify in MD simulations of real biomolecules. We then apply spectral clustering, an algorithm particularly well-suited for clustering polymer structures, to our models and MD simulations of several intrinsically disordered proteins. Clustering results for the polymer models provide clear evidence that the meta-stable and transitional conformations are detected by the algorithm. The results for the polymer models also help guide the analysis of the disordered protein simulations by comparing and contrasting the statistical properties of the extracted clusters. We have developed a framework for validating the performance and utility of clustering algorithms for studying molecular biopolymer simulations that utilizes several analytic and dynamic polymer models which exhibit well-behaved dynamics including: meta-stable states, transition states, helical structures, and stochastic dynamics. We show that spectral clustering is robust to anomalies introduced by structural alignment and that different structural classes of intrinsically disordered proteins can be reliably discriminated from the clustering results. To our knowledge, our framework is the first to utilize model polymers to rigorously test the utility of clustering algorithms for studying biopolymers.

  12. Validating clustering of molecular dynamics simulations using polymer models

    PubMed Central

    2011-01-01

    Background Molecular dynamics (MD) simulation is a powerful technique for sampling the meta-stable and transitional conformations of proteins and other biomolecules. Computational data clustering has emerged as a useful, automated technique for extracting conformational states from MD simulation data. Despite extensive application, relatively little work has been done to determine if the clustering algorithms are actually extracting useful information. A primary goal of this paper therefore is to provide such an understanding through a detailed analysis of data clustering applied to a series of increasingly complex biopolymer models. Results We develop a novel series of models using basic polymer theory that have intuitive, clearly-defined dynamics and exhibit the essential properties that we are seeking to identify in MD simulations of real biomolecules. We then apply spectral clustering, an algorithm particularly well-suited for clustering polymer structures, to our models and MD simulations of several intrinsically disordered proteins. Clustering results for the polymer models provide clear evidence that the meta-stable and transitional conformations are detected by the algorithm. The results for the polymer models also help guide the analysis of the disordered protein simulations by comparing and contrasting the statistical properties of the extracted clusters. Conclusions We have developed a framework for validating the performance and utility of clustering algorithms for studying molecular biopolymer simulations that utilizes several analytic and dynamic polymer models which exhibit well-behaved dynamics including: meta-stable states, transition states, helical structures, and stochastic dynamics. We show that spectral clustering is robust to anomalies introduced by structural alignment and that different structural classes of intrinsically disordered proteins can be reliably discriminated from the clustering results. To our knowledge, our framework is the first to utilize model polymers to rigorously test the utility of clustering algorithms for studying biopolymers. PMID:22082218

  13. Research on rolling element bearing fault diagnosis based on genetic algorithm matching pursuit

    NASA Astrophysics Data System (ADS)

    Rong, R. W.; Ming, T. F.

    2017-12-01

    The matching pursuit algorithm is applied to rolling-bearing fault diagnosis, and to address its slow computation speed, improvements are made in two aspects: the construction of the dictionary and the way atoms are searched. Specifically, the Gabor function, which reflects time-frequency localization characteristics well, is used to construct the dictionary, and a genetic algorithm is used to speed up the search. A time-frequency analysis method based on genetic algorithm matching pursuit (GAMP) is proposed, and the setting of parameters for improving the decomposition results is studied. Simulation and experimental results illustrate that the weak fault features of rolling bearings can be extracted effectively by the proposed method, while the computation speed increases noticeably.
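
    A minimal sketch of plain matching pursuit over a normalized dictionary. The exhaustive argmax over atoms below is exactly the slow step that the paper's genetic algorithm replaces when the dictionary of Gabor atoms is large; the sketch is generic, not the GAMP implementation.

      import numpy as np

      def matching_pursuit(signal, D, n_atoms=10):
          """D: (n_samples, n_dict) matrix of unit-norm atoms (e.g., Gabor)."""
          r, coefs, idx = signal.astype(float).copy(), [], []
          for _ in range(n_atoms):
              corr = D.T @ r                  # correlate residual with atoms
              k = int(np.argmax(np.abs(corr)))  # best atom (the GA's job)
              coefs.append(corr[k]); idx.append(k)
              r -= corr[k] * D[:, k]          # subtract the projection
          return idx, coefs, r                # r is the remaining residual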

  14. On structure-exploiting trust-region regularized nonlinear least squares algorithms for neural-network learning.

    PubMed

    Mizutani, Eiji; Demmel, James W

    2003-01-01

    This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in statistical sense) depending on problem scale so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we shall explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).

  15. The threshold algorithm: Description of the methodology and new developments

    NASA Astrophysics Data System (ADS)

    Neelamraju, Sridhar; Oligschleger, Christina; Schön, J. Christian

    2017-10-01

    Understanding the dynamics of complex systems requires the investigation of their energy landscape. In particular, the flow of probability on such landscapes is a central feature in visualizing the time evolution of complex systems. To obtain such flows, and the concomitant stable states of the systems and the generalized barriers among them, the threshold algorithm has been developed. Here, we describe the methodology of this approach starting from the fundamental concepts in complex energy landscapes and present recent new developments, the threshold-minimization algorithm and the molecular dynamics threshold algorithm. For applications of these new algorithms, we draw on landscape studies of three disaccharide molecules: lactose, maltose, and sucrose.
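
    The core move of the threshold algorithm is easy to sketch: a random walker may take any step whose energy stays below a prescribed lid, and one records which minima become mutually reachable as the lid is raised. The toy double-well landscape below is our own illustration, not one of the disaccharide landscapes studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(x):
    # toy double-well landscape standing in for a molecular energy surface
    return (x**2 - 1.0)**2 + 0.3 * x

def threshold_run(x0, lid, steps=20000, step_size=0.1):
    """Random walk that accepts any move whose energy stays below the lid,
    recording which basins (crudely binned) the walker can reach."""
    x, visited = x0, set()
    for _ in range(steps):
        trial = x + rng.normal(scale=step_size)
        if energy(trial) < lid:        # threshold acceptance: no Boltzmann factor
            x = trial
        visited.add(round(x))          # crude bin standing in for a local quench
    return visited

# raising the lid reveals when the barrier between basins becomes crossable
for lid in (0.2, 0.5, 1.0, 1.5):
    print(lid, threshold_run(-1.0, lid))
```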

  16. An accurate algorithm to calculate the Hurst exponent of self-similar processes

    NASA Astrophysics Data System (ADS)

    Fernández-Martínez, M.; Sánchez-Granero, M. A.; Trinidad Segovia, J. E.; Román-Sánchez, I. M.

    2014-06-01

    In this paper, we introduce a new approach which generalizes the GM2 algorithm (introduced in Sánchez-Granero et al. (2008) [52]) as well as fractal dimension algorithms (FD1, FD2 and FD3) (first appeared in Sánchez-Granero et al. (2012) [51]), providing an accurate algorithm to calculate the Hurst exponent of self-similar processes. We prove that this algorithm performs properly in the case of short time series when fractional Brownian motions and Lévy stable motions are considered. We conclude the paper with a dynamic study of the Hurst exponent evolution in the S&P500 index stocks.
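
    For orientation, the snippet below computes the Hurst exponent with the textbook rescaled-range (R/S) estimator; the GM2 and FD algorithms that the paper generalizes are different (they exploit the self-similarity of sample distributions), so this is a baseline for comparison only.

```python
import numpy as np

def hurst_rs(x, min_window=8):
    """Textbook rescaled-range (R/S) estimate of the Hurst exponent."""
    x = np.asarray(x, float)
    n = len(x)
    windows = np.unique(np.logspace(np.log10(min_window), np.log10(n // 2), 12).astype(int))
    rs = []
    for w in windows:
        chunks = x[: n - n % w].reshape(-1, w)
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        r = dev.max(axis=1) - dev.min(axis=1)        # range of cumulative deviations
        s = chunks.std(axis=1, ddof=1) + 1e-12       # chunk standard deviation
        rs.append((r / s).mean())
    # slope of log(R/S) against log(window size) estimates H
    h, _ = np.polyfit(np.log(windows), np.log(rs), 1)
    return h

# sanity check: white-noise increments should give H close to 0.5
print(hurst_rs(np.random.default_rng(0).normal(size=4096)))
```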

  17. Conjugate gradient coupled with multigrid for an indefinite problem

    NASA Technical Reports Server (NTRS)

    Gozani, J.; Nachshon, A.; Turkel, E.

    1984-01-01

An iterative algorithm for the Helmholtz equation is presented. The scheme is based on the preconditioned conjugate gradient method for the normal equations. The preconditioning is one cycle of a multigrid method for the discrete Laplacian. The smoothing algorithm is red-black Gauss-Seidel and is constructed so that it is a symmetric operator. The total number of iterations needed by the algorithm is independent of h. By varying the number of grids, the number of iterations depends only weakly on k when k^3 h^2 is held constant. Comparisons with an SSOR preconditioner are presented.
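
    A minimal sketch of the outer iteration, assuming a generic preconditioner callable in place of the paper's multigrid V-cycle: preconditioned conjugate gradient applied to the normal equations A^T A x = A^T b, which stay symmetric positive definite even when A itself (e.g. a discrete Helmholtz operator) is indefinite.

```python
import numpy as np

def pcg_normal_equations(A, b, M=None, tol=1e-8, maxit=500):
    """Preconditioned CG on the SPD system A^T A x = A^T b. M is a callable
    applying the preconditioner (the paper uses one multigrid cycle for the
    discrete Laplacian; the identity below is only a placeholder)."""
    M = M or (lambda r: r)
    x = np.zeros(A.shape[1])
    r = A.T @ b - A.T @ (A @ x)            # residual of the normal equations
    z = M(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Ap = A.T @ (A @ p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(A.T @ b):
            return x, k + 1
        z = M(r)
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p
    return x, maxit
```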

  18. Superiorization-based multi-energy CT image reconstruction

    PubMed Central

    Yang, Q; Cong, W; Wang, G

    2017-01-01

The recently developed superiorization approach is efficient and robust for solving various constrained optimization problems. This methodology can be applied to multi-energy CT image reconstruction with regularization in terms of the prior rank, intensity and sparsity model (PRISM). In this paper, we propose a superiorized version of the simultaneous algebraic reconstruction technique (SART) based on the PRISM model. Then, we compare the proposed superiorized algorithm with the Split-Bregman algorithm in numerical experiments. The results show that both the Superiorized-SART and the Split-Bregman algorithms generate good results with weak noise and reduced artefacts. PMID:28983142
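
    A compact sketch of how superiorization wraps a feasibility-seeking method, assuming a total-variation superiorization target in place of the paper's PRISM model: each outer loop applies a small TV-reducing perturbation with summable step sizes, then one SART sweep. Matrix names and step-size choices are illustrative, not the authors'.

```python
import numpy as np

def sart_step(x, A, b, lam=1.0):
    """One SART sweep: x += lam * V^-1 A^T W (b - A x), with the usual
    row-sum (W) and column-sum (V) normalizations."""
    row = A.sum(axis=1); row[row == 0] = 1.0
    col = A.sum(axis=0); col[col == 0] = 1.0
    return x + lam * (A.T @ ((b - A @ x) / row)) / col

def tv_subgradient(x, shape):
    """Subgradient of anisotropic total variation of a 2-D image."""
    img = x.reshape(shape)
    g = np.zeros_like(img)
    dx = np.sign(np.diff(img, axis=0)); dy = np.sign(np.diff(img, axis=1))
    g[1:, :] += dx; g[:-1, :] -= dx
    g[:, 1:] += dy; g[:, :-1] -= dy
    return g.ravel()

def superiorized_sart(A, b, shape, outer=50, beta=1.0, kappa=0.99):
    """Interlace summable TV-reducing perturbations with SART sweeps."""
    x = np.zeros(A.shape[1])
    for _ in range(outer):
        g = tv_subgradient(x, shape)
        ng = np.linalg.norm(g)
        if ng > 0:
            x = x - beta * g / ng                  # nudge toward lower TV
        beta *= kappa                              # geometric, hence summable, steps
        x = np.clip(sart_step(x, A, b), 0.0, None) # SART sweep + nonnegativity
    return x
```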

  19. A novel complex networks clustering algorithm based on the core influence of nodes.

    PubMed

    Tong, Chao; Niu, Jianwei; Dai, Bin; Xie, Zhongyu

    2014-01-01

In complex networks, cluster structure, identified by the heterogeneity of nodes, has become a common and important topological property. Network clustering methods are thus significant for the study of complex networks. Currently, many typical clustering algorithms have weaknesses such as inaccuracy and slow convergence. In this paper, we propose a clustering algorithm based on the core influence of nodes. The clustering process is a simulation of the process of cluster formation in sociology. The algorithm detects the nodes with core influence through their betweenness centrality, and builds the cluster's core structure by discriminant functions. Next, the algorithm obtains the final cluster structure by clustering the remaining nodes in the network with an optimization method. Experiments on different datasets show that the clustering accuracy of this algorithm is superior to that of the classical Fast-Newman algorithm, that it clusters faster, and that it plays a positive role in precisely revealing the real cluster structure of complex networks.
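
    A toy rendering of the pipeline described above, assuming networkx and shortest-path assignment in place of the paper's discriminant functions and final optimization step:

```python
import networkx as nx

def core_influence_clusters(G, n_cores=3):
    """Pick the n_cores nodes with highest betweenness centrality as cluster
    seeds, then attach every other node to the seed it is closest to
    (shortest-path distance). A simplification of the described method."""
    bc = nx.betweenness_centrality(G)
    cores = sorted(bc, key=bc.get, reverse=True)[:n_cores]
    dist = {c: nx.single_source_shortest_path_length(G, c) for c in cores}
    clusters = {c: {c} for c in cores}
    for v in G.nodes:
        if v in cores:
            continue
        best = min(cores, key=lambda c: dist[c].get(v, float("inf")))
        clusters[best].add(v)
    return clusters

print(core_influence_clusters(nx.karate_club_graph(), n_cores=2))
```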

  20. The thermal stability of the nanograin structure in a weak solute segregation system.

    PubMed

    Tang, Fawei; Song, Xiaoyan; Wang, Haibin; Liu, Xuemei; Nie, Zuoren

    2017-02-08

A hybrid model that combines first principles calculations and thermodynamic evaluation was developed to describe the thermal stability of a nanocrystalline solid solution with weak segregation. The dependence of the solute segregation behavior on the electronic structure, solute concentration, grain size and temperature was demonstrated, using the nanocrystalline Cu-Zn system as an example. The modeling results show that the segregation energy changes with the solute concentration in the form of a nonmonotonic function. The change in the total Gibbs free energy indicates that at a constant solute concentration and a given temperature, a nanocrystalline structure can remain stable when the initial grain size is controlled in a critical range. In experiments, a dense nanocrystalline Cu-Zn alloy bulk was prepared, and a series of annealing experiments were performed to examine the thermal stability of the nanograins. The experimental measurements confirmed the model predictions that with a certain solute concentration, a state of steady nanograin growth can be achieved at high temperatures when the initial grain size is controlled in a critical range. The present work proposes that in weak solute segregation systems, the nanograin structure can be kept thermally stable by adjusting the solute concentration and initial grain size.

  1. Weak lasing in one-dimensional polariton superlattices.

    PubMed

    Zhang, Long; Xie, Wei; Wang, Jian; Poddubny, Alexander; Lu, Jian; Wang, Yinglei; Gu, Jie; Liu, Wenhui; Xu, Dan; Shen, Xuechu; Rubo, Yuri G; Altshuler, Boris L; Kavokin, Alexey V; Chen, Zhanghai

    2015-03-31

Bosons with finite lifetime exhibit condensation and lasing when their influx exceeds the lasing threshold determined by the dissipative losses. In general, different one-particle states decay differently, and the bosons are usually assumed to condense in the state with the longest lifetime. Interaction between the bosons, partially neglected by such an assumption, can smear the lasing threshold into a threshold domain: a stable lasing many-body state exists within certain intervals of the bosonic influxes. This recently described weak lasing regime is formed by the spontaneous symmetry-breaking and phase-locking self-organization of bosonic modes, which results in an essentially many-body state with a stable balance between gains and losses. Here we report, to our knowledge, the first observation of the weak lasing phase in a one-dimensional condensate of exciton-polaritons subject to a periodic potential. Real and reciprocal space photoluminescence images demonstrate that the spatial period of the condensate is twice as large as the period of the underlying periodic potential. These experiments are realized at room temperature in a ZnO microwire deposited on a silicon grating. The period doubling takes place at a critical pumping power, whereas at a lower power polariton emission images have the same periodicity as the grating.

  2. Slower speed and stronger coupling: adaptive mechanisms of chaos synchronization.

    PubMed

    Wang, Xiao Fan

    2002-06-01

    We show that two initially weakly coupled chaotic systems can achieve synchronization by adaptively reducing their speed and/or enhancing the coupling strength. Explicit adaptive algorithms for speed reduction and coupling enhancement are provided. We apply these algorithms to the synchronization of two coupled Lorenz systems. It is found that after a long-time adaptive process, the two coupled chaotic systems can achieve synchronization with almost the minimum required coupling-speed ratio.
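
    A minimal sketch of the coupling-enhancement branch, assuming the standard adaptation law k' = gamma * ||e||^2 for a pair of coupled Lorenz systems (the speed-reduction branch of the paper is omitted):

```python
import numpy as np

def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def adaptive_sync(T=60.0, dt=1e-3, gamma=0.5):
    """Drive-response Lorenz pair with adaptively growing coupling strength:
    the coupling k increases in proportion to the squared sync error until
    synchronization is achieved."""
    v1 = np.array([1.0, 1.0, 1.0])
    v2 = np.array([-5.0, 3.0, 20.0])
    k = 0.0
    for _ in range(int(T / dt)):
        e = v1 - v2
        v1 = v1 + dt * lorenz(v1)
        v2 = v2 + dt * (lorenz(v2) + k * e)   # full-state coupling
        k += dt * gamma * (e @ e)             # adaptation law
    return k, np.linalg.norm(v1 - v2)

k, err = adaptive_sync()
print(f"final coupling {k:.2f}, sync error {err:.2e}")
```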

  3. Periodic, complexiton solutions and stability for a (2+1)-dimensional variable-coefficient Gross-Pitaevskii equation in the Bose-Einstein condensation

    NASA Astrophysics Data System (ADS)

    Yin, Hui-Min; Tian, Bo; Zhao, Xin-Chao

    2018-06-01

This paper presents an investigation of a (2 + 1)-dimensional variable-coefficient Gross-Pitaevskii equation in the Bose-Einstein condensation. Periodic and complexiton solutions are obtained. Soliton solutions are also derived from the periodic solutions. Numerical solutions computed via the split-step method are stable. Effects of the weak and strong modulation instability on the solitons are shown: the weak modulation instability permits an observable soliton, while the strong one overwhelms its development.
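
    For reference, a split-step Fourier integrator of the kind mentioned above, applied to the constant-coefficient 1D focusing cubic NLS rather than the variable-coefficient (2+1)-dimensional equation of the paper:

```python
import numpy as np

def split_step_nls(psi0, L=40.0, dt=1e-3, steps=5000):
    """Strang split-step Fourier integrator for i psi_t = -1/2 psi_xx - |psi|^2 psi:
    half a dispersion step in Fourier space, an exact nonlinear phase rotation,
    then the second dispersion half-step."""
    n = len(psi0)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)     # spectral wavenumbers
    half_linear = np.exp(-1j * k**2 * dt / 4)      # half-step dispersion factor
    psi = psi0.astype(complex)
    for _ in range(steps):
        psi = np.fft.ifft(half_linear * np.fft.fft(psi))
        psi *= np.exp(1j * np.abs(psi)**2 * dt)    # |psi| is conserved in this substep
        psi = np.fft.ifft(half_linear * np.fft.fft(psi))
    return psi

# a sech soliton should propagate with unchanged profile
x = np.linspace(-20, 20, 512, endpoint=False)
psi = split_step_nls(1 / np.cosh(x))
print(np.max(np.abs(np.abs(psi) - 1 / np.cosh(x))))   # small profile error
```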

  4. Dynamic delamination of patterned thin films

    NASA Astrophysics Data System (ADS)

    Kandula, Soma S. V.; Tran, Phuong; Geubelle, Philippe H.; Sottos, Nancy R.

    2008-12-01

    We investigate laser-induced dynamic delamination of a patterned thin film on a substrate. Controlled delamination results from our insertion of a weak adhesion region beneath the film. The inertial forces acting on the weakly bonded portion of the film lead to stable propagation of a crack along the film/substrate interface. Through a simple energy balance, we extract the critical energy for interfacial failure, a quantity that is difficult and sometimes impossible to characterize by more conventional methods for many thin film/substrate combinations.

  5. Not mathematics Education, not Mathematics education but Mathematics Education

    ERIC Educational Resources Information Center

    Galbraith, P. L.

    1977-01-01

Weaknesses in the initial preparation of school mathematics teachers are discussed. Emphasis is on the underdevelopment of global understanding in favor of the manipulation of symbols and the performing of complex algorithms. (MN)

  6. Weak convergence to isotropic complex symmetric α-stable random measure.

    PubMed

    Wang, Jun; Li, Yunmeng; Sang, Liheng

    2017-01-01

In this paper, we prove that an isotropic complex symmetric α-stable random measure can be approximated by a complex process constructed from integrals based on a Poisson process with random intensity.

  7. Quality assessment of geogrids used for subgrade treatment.

    DOT National Transportation Integrated Search

    2012-12-01

Geogrid reinforcements have been used by the Indiana Department of Transportation (INDOT) to construct stable subgrade foundations and to provide a working platform for construction over weak and soft soils. Use of geogrid reinforcement in a paveme...

  8. Performance Analysis of Continuous Black-Box Optimization Algorithms via Footprints in Instance Space.

    PubMed

    Muñoz, Mario A; Smith-Miles, Kate A

    2017-01-01

    This article presents a method for the objective assessment of an algorithm's strengths and weaknesses. Instead of examining the performance of only one or more algorithms on a benchmark set, or generating custom problems that maximize the performance difference between two algorithms, our method quantifies both the nature of the test instances and the algorithm performance. Our aim is to gather information about possible phase transitions in performance, that is, the points in which a small change in problem structure produces algorithm failure. The method is based on the accurate estimation and characterization of the algorithm footprints, that is, the regions of instance space in which good or exceptional performance is expected from an algorithm. A footprint can be estimated for each algorithm and for the overall portfolio. Therefore, we select a set of features to generate a common instance space, which we validate by constructing a sufficiently accurate prediction model. We characterize the footprints by their area and density. Our method identifies complementary performance between algorithms, quantifies the common features of hard problems, and locates regions where a phase transition may lie.

  9. Stable Tetraquarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quigg, Chris

    For very heavy quarks, relations derived from heavy-quark symmetry imply novel narrow doubly heavy tetraquark states containing two heavy quarks and two light antiquarks. We predict that double-beauty states will be stable against strong decays, whereas the double-charm states and mixed beauty+charm states will dissociate into pairs of heavy-light mesons. Observing a new double-beauty state through its weak decays would establish the existence of tetraquarks and illuminate the role of heavy color-antitriplet diquarks as hadron constituents.

  10. Online Coregularization for Multiview Semisupervised Learning

    PubMed Central

    Li, Guohui; Huang, Kuihua

    2013-01-01

We propose a novel online coregularization framework for multiview semisupervised learning based on the notion of duality in constrained optimization. Using the weak duality theorem, we reduce online coregularization to the task of increasing the dual function. We demonstrate that the existing online coregularization algorithms in previous work can be viewed as an approximation of our dual ascending process using gradient ascent. New algorithms are derived based on the idea of ascending the dual function more aggressively. For practical purposes, we also propose two sparse approximation approaches for kernel representation to reduce the computational complexity. Experiments show that our derived online coregularization algorithms achieve risk and accuracy comparable to offline algorithms while consuming less time and memory. In particular, our online coregularization algorithms are able to deal with concept drift and maintain a much smaller error rate. This paper paves the way for the design and analysis of online coregularization algorithms. PMID:24194680

  11. An experimental comparison of online object-tracking algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Chen, Feng; Xu, Wenli; Yang, Ming-Hsuan

    2011-09-01

This paper reviews and evaluates several state-of-the-art online object tracking algorithms. Notwithstanding decades of efforts, object tracking remains a challenging problem due to factors such as illumination, pose, scale, deformation, motion blur, noise, and occlusion. To account for appearance change, most recent tracking algorithms focus on robust object representations and effective state prediction. In this paper, we analyze the components of each tracking method and identify their key roles in dealing with specific challenges, thereby shedding light on how to choose and design algorithms for different situations. We compare state-of-the-art online tracking methods including the IVT [1], VRT [2], FragT [3], BoostT [4], SemiT [5], BeSemiT [6], L1T [7], MILT [8], VTD [9] and TLD [10] algorithms on numerous challenging sequences, and evaluate them with different performance metrics. The qualitative and quantitative comparative results demonstrate the strengths and weaknesses of these algorithms.

  12. Weak lensing probe of cubic Galileon model

    NASA Astrophysics Data System (ADS)

    Dinda, Bikash R.

    2018-06-01

The cubic Galileon model, containing the lowest non-trivial order of the full Galileon action, can produce stable late-time cosmic acceleration. This model can have a significant role in the growth of structures. The signatures of the cubic Galileon model in structure formation can be probed by weak lensing statistics. Weak lensing convergence statistics is one of the strongest probes of structure formation and hence can probe dark energy or modified-gravity models. In this work, we investigate the detectability of the cubic Galileon model against the ΛCDM model or the canonical quintessence model through the convergence power spectrum and bispectrum.

  13. Research on target tracking algorithm based on spatio-temporal context

    NASA Astrophysics Data System (ADS)

    Li, Baiping; Xu, Sanmei; Kang, Hongjuan

    2017-07-01

In this paper, a novel target tracking algorithm based on spatio-temporal context is proposed. During tracking, camera shake or occlusion may cause tracking to fail; the proposed algorithm solves this problem effectively. The method takes the spatio-temporal context algorithm as its core. The target region in the first frame is selected manually with the mouse, and the spatio-temporal context algorithm is then used to track the target through the sequence of frames. During this process, a similarity measure function based on a perceptual hash algorithm is used to judge the tracking results; if tracking fails, the initial value of the Mean Shift algorithm is reset for subsequent target tracking. Experimental results show that the proposed algorithm achieves real-time and stable tracking under camera shake or target occlusion.
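
    A minimal stand-in for the similarity check, assuming the simplest perceptual hash (average hash) and grayscale patches given as numpy arrays; the paper's exact hash and acceptance threshold are not specified here.

```python
import numpy as np

def average_hash(img, size=8):
    """Average perceptual hash: block-downsample to size x size, threshold at the mean."""
    h, w = img.shape
    small = img[:h - h % size, :w - w % size] \
        .reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def phash_similarity(a, b):
    """1 - normalized Hamming distance between the two hashes."""
    ha, hb = average_hash(a), average_hash(b)
    return 1.0 - np.count_nonzero(ha != hb) / ha.size

def tracking_ok(template, patch, threshold=0.8):
    """Accept the tracked patch only if it still resembles the template."""
    return phash_similarity(template, patch) >= threshold
```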

  14. Chaos control of Hastings-Powell model by combining chaotic motions.

    PubMed

    Danca, Marius-F; Chattopadhyay, Joydev

    2016-04-01

In this paper, we propose a Parameter Switching (PS) algorithm as a new chaos control method for the Hastings-Powell (HP) system. The PS algorithm is a convergent scheme that switches the control parameter within a set of values while the controlled system is numerically integrated. The attractor obtained with the PS algorithm matches the attractor obtained by integrating the system with the parameter replaced by the averaged value of the switched parameter values. The switching rule can be applied periodically or randomly over a set of given values. In this way, every stable cycle of the HP system can be approximated if its underlying parameter value equals the average value of the switching values. Moreover, the PS algorithm can be viewed as a generalization of Parrondo's game, which is applied for the first time to the HP system, by showing that a losing strategy can win: "losing + losing = winning." If "losing" is replaced with "chaos" and "winning" with "order" (as the opposite of "chaos"), then by switching the parameter value in the HP system within two values which generate chaotic motions, the PS algorithm can approximate a stable cycle, so that symbolically one can write "chaos + chaos = regular." Also, by considering a different parameter control, new complex dynamics of the HP model are revealed.
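
    A sketch of the Parameter Switching loop under assumed parameter values (the b1 pair below is illustrative, not necessarily the paper's switched values): integrate the Hastings-Powell system while cycling b1, then compare against integration with the averaged value.

```python
import numpy as np

def hastings_powell(v, b1, a1=5.0, a2=0.1, b2=2.0, d1=0.4, d2=0.01):
    """Hastings-Powell three-species food chain; b1 is the switched parameter."""
    x, y, z = v
    f1 = a1 * x / (1.0 + b1 * x)      # type-II functional responses
    f2 = a2 * y / (1.0 + b2 * y)
    return np.array([x * (1.0 - x) - f1 * y,
                     f1 * y - f2 * z - d1 * y,
                     f2 * z - d2 * z])

def integrate_ps(b1_values, pattern, dt=5e-3, steps=200000, v0=(0.8, 0.2, 8.0)):
    """Heun (RK2) integration while b1 is switched through b1_values
    according to `pattern` at every step."""
    v = np.array(v0, float)
    traj = np.empty((steps, 3))
    for i in range(steps):
        b1 = b1_values[pattern[i % len(pattern)]]
        k1 = hastings_powell(v, b1)
        k2 = hastings_powell(v + dt * k1, b1)
        v = v + 0.5 * dt * (k1 + k2)
        traj[i] = v
    return traj

# the PS claim concerns attractors, not pointwise trajectories, so compare
# long-run statistics of the switched run against the averaged-parameter run
pair = (2.5, 3.3)                                  # illustrative values only
switched = integrate_ps(pair, [0, 1])
averaged = integrate_ps([np.mean(pair)], [0])
print(switched[100000:].mean(axis=0), averaged[100000:].mean(axis=0))
```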

  15. Ionospheric-thermospheric UV tomography: 1. Image space reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Dymond, K. F.; Budzien, S. A.; Hei, M. A.

    2017-03-01

    We present and discuss two algorithms of the class known as Image Space Reconstruction Algorithms (ISRAs) that we are applying to the solution of large-scale ionospheric tomography problems. ISRAs have several desirable features that make them useful for ionospheric tomography. In addition to producing nonnegative solutions, ISRAs are amenable to sparse-matrix formulations and are fast, stable, and robust. We present the results of our studies of two types of ISRA: the Least Squares Positive Definite and the Richardson-Lucy algorithms. We compare their performance to the Multiplicative Algebraic Reconstruction and Conjugate Gradient Least Squares algorithms. We then discuss the use of regularization in these algorithms and present our new approach based on regularization to a partial differential equation.
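
    Among the ISRAs discussed, the Richardson-Lucy update is compact enough to quote directly; a dense-matrix sketch (operational tomography codes would use sparse projection operators):

```python
import numpy as np

def richardson_lucy(A, b, iters=100, eps=1e-12):
    """Richardson-Lucy iteration, a multiplicative image-space scheme for
    b ~ Poisson(A x):  x <- x * A^T(b / (A x)) / (A^T 1).
    Nonnegativity is preserved automatically by the multiplicative update."""
    x = np.full(A.shape[1], b.sum() / A.shape[1])   # flat nonnegative start
    norm = A.T @ np.ones(A.shape[0]) + eps          # A^T 1, the flux normalization
    for _ in range(iters):
        x *= (A.T @ (b / (A @ x + eps))) / norm
    return x

# toy check on noiseless data; the relative error shrinks with more iterations
A = np.random.default_rng(5).uniform(size=(40, 20))
x_true = np.random.default_rng(6).uniform(size=20)
x_rec = richardson_lucy(A, A @ x_true)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```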

  16. Multi-Dimensional Asymptotically Stable 4th Order Accurate Schemes for the Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Abarbanel, Saul; Ditkowski, Adi

    1996-01-01

An algorithm is presented which solves the multi-dimensional diffusion equation on complex shapes to 4th-order accuracy and is asymptotically stable in time. This bounded-error result is achieved by constructing, on a rectangular grid, a differentiation matrix whose symmetric part is negative definite. The differentiation matrix accounts for the Dirichlet boundary condition by imposing penalty-like terms. Numerical examples in 2-D show that the method is effective even where standard schemes, stable by traditional definitions, fail.

  17. An implicit adaptation algorithm for a linear model reference control system

    NASA Technical Reports Server (NTRS)

    Mabius, L.; Kaufman, H.

    1975-01-01

    This paper presents a stable implicit adaptation algorithm for model reference control. The constraints for stability are found using Lyapunov's second method and do not depend on perfect model following between the system and the reference model. Methods are proposed for satisfying these constraints without estimating the parameters on which the constraints depend.

  18. A Multi-Scale Method for Dynamics Simulation in Continuum Solvent Models I: Finite-Difference Algorithm for Navier-Stokes Equation.

    PubMed

    Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray

    2014-11-25

    A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design.

  19. Robust adaptive 3-D segmentation of vessel laminae from fluorescence confocal microscope images and parallel GPU implementation.

    PubMed

    Narayanaswamy, Arunachalam; Dwarakapuram, Saritha; Bjornsson, Christopher S; Cutler, Barbara M; Shain, William; Roysam, Badrinath

    2010-03-01

This paper presents robust 3-D algorithms to segment vasculature that is imaged by labeling laminae, rather than the lumenal volume. The signal is weak, sparse, noisy, nonuniform, low-contrast, and exhibits gaps and spectral artifacts, so adaptive thresholding and Hessian filtering based methods are not effective. The structure deviates from a tubular geometry, so tracing algorithms are not effective. We propose a four-step approach. The first step detects candidate voxels using a robust hypothesis test based on a model that assumes Poisson noise and locally planar geometry. The second step performs an adaptive region growth to extract weakly labeled and fine vessels while rejecting spectral artifacts. The third step enables interactive visualization and estimation of features such as statistical confidence, local curvature, local thickness, and local normal by constructing an accurate mesh representation using marching tetrahedra, volume-preserving smoothing, and adaptive decimation algorithms. The final step enables topological analysis and efficient validation by estimating vessel centerlines with a ray casting and vote accumulation algorithm. Our algorithm lends itself to parallel processing, and yielded an 8x speedup on a graphics processor (GPU). On synthetic data, our meshes had average error per face (EPF) values of 0.1-1.6 voxels per mesh face for peak signal-to-noise ratios from 110 dB down to 28 dB. Separately, when the mesh was decimated to less than 1% of its original size, the EPF was less than 1 voxel/face. When validated on real datasets, the average recall and precision values were found to be 94.66% and 94.84%, respectively.

  1. A Study of Strong Stability of Distributed Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Cataltepe, Tayfun

    1989-01-01

The strong stability of distributed systems is studied and the problem of characterizing strongly stable semigroups of operators associated with distributed systems is addressed. Main emphasis is on contractive systems. Three different approaches to characterization of strongly stable contractive semigroups are developed. The first one is an operator theoretical approach. Using the theory of dilations, it is shown that every strongly stable contractive semigroup is related to the left shift semigroup on an L^2 space. Then, a decomposition for the state space which identifies strongly stable and unstable states is introduced. Based on this decomposition, conditions for a contractive semigroup to be strongly stable are obtained. Finally, extensions of Lyapunov's equation for distributed parameter systems are investigated. Sufficient conditions for weak and strong stabilities of uniformly bounded semigroups are obtained by relaxing the equivalent norm condition on the right hand side of the Lyapunov equation. These characterizations are then applied to the problem of feedback stabilization. First, it is shown via the state space decomposition that under certain conditions a contractive system (A,B) can be strongly stabilized by the feedback -B*. Then, application of the extensions of the Lyapunov equation results in sufficient conditions for weak, strong, and exponential stabilizations of contractive systems by the feedback -B*. Finally, it is shown that for a contractive system dx/dt = Ax + Bu (where B is any linear bounded operator), there is a related linear quadratic regulator problem and a corresponding steady-state Riccati equation which always has a bounded nonnegative solution.

  2. Energy Efficient and Stable Weight Based Clustering for Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Bouk, Safdar H.; Sasase, Iwao

Recently, several weighted clustering algorithms have been proposed; however, to the best of our knowledge, there is none that propagates weights to other nodes without a weight message for leader election, normalizes node parameters, and considers neighboring node parameters to calculate node weights. In this paper, we propose an Energy Efficient and Stable Weight Based Clustering (EE-SWBC) algorithm that elects cluster heads without sending any additional weight message. It propagates node parameters to neighbors through the neighbor discovery message (HELLO message) and stores these parameters in a neighborhood list. Each node normalizes the parameters and efficiently calculates its own weight and the weights of neighboring nodes from that neighborhood table using the Grey Decision Method (GDM). GDM finds the ideal solution (the best node parameters in the neighborhood list) and calculates node weights in comparison to the ideal solution. The node(s) with maximum weight (parameters closest to the ideal solution) are elected as cluster heads. As a result, EE-SWBC fairly selects potential nodes with parameters close to the ideal solution with less overhead. Different performance metrics of EE-SWBC and the Distributed Weighted Clustering Algorithm (DWCA) are compared through simulations. The simulation results show that EE-SWBC maintains a smaller average number of stable clusters with minimal overhead, less energy consumption and fewer changes in cluster structure within the network compared to DWCA.
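
    A sketch of the weight computation as we read it, assuming generic grey relational analysis: normalize each parameter column, measure each node's distance to the per-column ideal, and average the grey relational coefficients. EE-SWBC's exact discriminants may differ.

```python
import numpy as np

def gdm_weights(params, benefit, rho=0.5):
    """Grey-relational scoring of neighbour parameter vectors (one row per node).
    benefit[j] is True if larger is better for column j (e.g. residual energy),
    False if smaller is better (e.g. mobility)."""
    p = np.asarray(params, float)
    # min-max normalization so that 1 is always "best" for every column
    span = p.max(axis=0) - p.min(axis=0) + 1e-12
    norm = np.where(benefit, (p - p.min(axis=0)) / span, (p.max(axis=0) - p) / span)
    delta = np.abs(1.0 - norm)                        # distance to the ideal solution
    dmin, dmax = delta.min(), delta.max()
    xi = (dmin + rho * dmax) / (delta + rho * dmax)   # grey relational coefficients
    return xi.mean(axis=1)                            # node weight = mean coefficient

# the node with maximum weight would become cluster head
params = [[0.9, 3, 2], [0.4, 1, 5], [0.7, 2, 1]]      # e.g. energy, degree, mobility
w = gdm_weights(params, benefit=[True, True, False])
print(w.argmax(), w)
```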

  3. Robust Huber-based iterated divided difference filtering with application to cooperative localization of autonomous underwater vehicles.

    PubMed

    Gao, Wei; Liu, Yalong; Xu, Bo

    2014-12-19

A new algorithm called Huber-based iterated divided difference filtering (HIDDF) is derived and applied to cooperative localization of autonomous underwater vehicles (AUVs) supported by a single surface leader. The position states are estimated using acoustic range measurements relative to the leader, in which some disadvantages such as weak observability, large initial error and measurements contaminated with outliers are inherent. By integrating the merits of both iterated divided difference filtering (IDDF) and Huber's M-estimation methodology, the new filtering method not only achieves more accurate estimation and faster convergence than standard divided difference filtering (DDF) under weak observability and large initial error, but also exhibits robustness with respect to outlier measurements, for which the standard IDDF would suffer severe degradation in estimation accuracy. The correctness as well as validity of the algorithm is demonstrated through experimental results.
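
    The Huber ingredient is small enough to show directly; a sketch of the weight function used to down-weight outlier range measurements (gamma = 1.345 is the conventional tuning constant, not necessarily the paper's):

```python
import numpy as np

def huber_weights(residuals, gamma=1.345):
    """Huber M-estimation weights: unit weight inside |r| <= gamma, and a
    linearly decaying weight outside, which tames measurement outliers."""
    r = np.abs(np.asarray(residuals, float))
    return np.where(r <= gamma, 1.0, gamma / np.maximum(r, 1e-12))

# inside an iterated filter update one would rescale the measurement noise
# covariance by 1 / w for each measurement before the gain computation
print(huber_weights([0.2, -0.8, 4.0]))   # the outlier at 4.0 gets weight ~0.34
```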

  4. A Weak Quantum Blind Signature with Entanglement Permutation

    NASA Astrophysics Data System (ADS)

    Lou, Xiaoping; Chen, Zhigang; Guo, Ying

    2015-09-01

Motivated by the permutation encryption algorithm, a weak quantum blind signature (QBS) scheme is proposed. It involves three participants, including the sender Alice, the signatory Bob and the trusted entity Charlie, in four phases, i.e., initializing phase, blinding phase, signing phase and verifying phase. In a small-scale quantum computation network, Alice blinds the message based on a quantum entanglement permutation encryption algorithm that embraces the chaotic position string. Bob signs the blinded message with private parameters shared beforehand while Charlie verifies the signature's validity and recovers the original message. Analysis shows that the proposed scheme achieves the secure blindness for the signer and traceability for the message owner with the aid of the authentic arbitrator who plays a crucial role when a dispute arises. In addition, the signature can neither be forged nor disavowed by malicious attackers. It has wide applications in e-voting and e-payment systems.

  5. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

Different methods of optimal and adaptive processing of hydroacoustic signals for multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector is analyzed, which is based on classical or fast projection algorithms and estimates the background using median filtering or the method of bilateral spatial contrast.

  6. Application of based on improved wavelet algorithm in fiber temperature sensor

    NASA Astrophysics Data System (ADS)

    Qi, Hui; Tang, Wenjuan

    2018-03-01

Accurate temperature measurement is a crucial point in distributed optical fiber temperature sensors. In order to solve the problem of temperature measurement error caused by the weak Raman scattering signal and strong noise in such systems, a new algorithm based on an improved wavelet method is presented. On the basis of the traditional modulus-maxima wavelet algorithm, signal correlation is taken into account to improve the ability to distinguish signal from noise, and a scale-adaptive wavelet decomposition method is combined with it to eliminate the signal loss or unfiltered noise caused by a mismatched scale. The filtering superiority of the algorithm over others is compared in Matlab. Finally, a 3 km distributed optical fiber temperature sensing system is used for verification. Experimental results show that the accuracy of the temperature measurement is generally improved by 0.5233.
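
    As a baseline for the kind of processing described, a standard soft-threshold wavelet denoiser using PyWavelets; the paper's modulus-maxima and adaptive-scale refinements are not reproduced here.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising with the universal threshold; a
    baseline stand-in for the improved modulus-maxima method of the paper."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise std from finest scale
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))     # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# e.g. recovering a weak scattering peak buried in noise
t = np.linspace(0, 1, 2048)
clean = np.exp(-((t - 0.5) / 0.05) ** 2)
noisy = clean + 0.2 * np.random.default_rng(2).normal(size=t.size)
print(np.std(wavelet_denoise(noisy) - clean) < np.std(noisy - clean))  # True
```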

  7. Algorithms for sorting unsigned linear genomes by the DCJ operations.

    PubMed

    Jiang, Haitao; Zhu, Binhai; Zhu, Daming

    2011-02-01

The double cut and join operation (abbreviated as DCJ) has been extensively used for genomic rearrangement. Although the DCJ distance between signed genomes with both linear and circular (uni- and multi-) chromosomes is well studied, the only known result for the NP-complete unsigned DCJ distance problem is an approximation algorithm for unsigned linear unichromosomal genomes. In this article, we study the problem of computing the DCJ distance on two unsigned linear multichromosomal genomes (abbreviated as UDCJ). We devise a 1.5-approximation algorithm for UDCJ by exploiting the distance formula for signed genomes. In addition, we show that UDCJ admits a weak kernel of size 2k and hence an FPT algorithm running in O(2^(2k) n) time.

  8. Quantum Color Image Encryption Algorithm Based on A Hyper-Chaotic System and Quantum Fourier Transform

    NASA Astrophysics Data System (ADS)

    Tan, Ru-Chao; Lei, Tong; Zhao, Qing-Min; Gong, Li-Hua; Zhou, Zhi-Hong

    2016-12-01

To improve the slow processing speed of classical image encryption algorithms and enhance the security of private color images, a new quantum color image encryption algorithm based on a hyper-chaotic system is proposed, in which the sequences generated by Chen's hyper-chaotic system are scrambled and diffused with the three components of the original color image. Subsequently, the quantum Fourier transform is exploited to fulfill the encryption. Numerical simulations show that the presented quantum color image encryption algorithm possesses a large key space to resist illegal attacks, sensitive dependence on initial keys, a uniform distribution of gray values for the encrypted image and weak correlation between adjacent pixels in the cipher-image.

  9. Density-matrix-based algorithm for solving eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Polizzi, Eric

    2009-03-01

    A fast and stable numerical algorithm for solving the symmetric eigenvalue problem is presented. The technique deviates fundamentally from the traditional Krylov subspace iteration based techniques (Arnoldi and Lanczos algorithms) or other Davidson-Jacobi techniques and takes its inspiration from the contour integration and density-matrix representation in quantum mechanics. It will be shown that this algorithm—named FEAST—exhibits high efficiency, robustness, accuracy, and scalability on parallel architectures. Examples from electronic structure calculations of carbon nanotubes are presented, and numerical performances and capabilities are discussed.
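
    The contour-integration idea can be sketched in a few lines, assuming dense linear solves and a circular contour (production FEAST uses sparse or parallel solves plus refinement iterations): quadrature of the resolvent approximates the spectral projector onto the search interval, and Rayleigh-Ritz in the filtered subspace recovers the eigenpairs.

```python
import numpy as np

def contour_eigensolver(A, emin, emax, m0=8, quad=16):
    """FEAST-flavoured sketch, not the production algorithm: approximate the
    spectral projector for [emin, emax] by midpoint quadrature of the
    resolvent over a circle, then Rayleigh-Ritz in the filtered subspace."""
    n = A.shape[0]
    Y = np.random.default_rng(0).normal(size=(n, m0))   # random block of vectors
    c, r = (emin + emax) / 2, (emax - emin) / 2
    Q = np.zeros((n, m0))
    for j in range(quad):
        theta = 2 * np.pi * (j + 0.5) / quad
        z = c + r * np.exp(1j * theta)
        # accumulates (1/2*pi*i) * contour integral of (zI - A)^(-1) Y dz
        Q += np.real(r * np.exp(1j * theta) * np.linalg.solve(z * np.eye(n) - A, Y)) / quad
    U, s, _ = np.linalg.svd(Q, full_matrices=False)
    Q = U[:, s > 0.1 * s[0]]                            # keep directions the filter passed
    evals, vecs = np.linalg.eigh(Q.T @ A @ Q)           # Rayleigh-Ritz reduction
    inside = (evals >= emin) & (evals <= emax)
    return evals[inside], Q @ vecs[:, inside]

vals, _ = contour_eigensolver(np.diag(np.arange(1.0, 11.0)), 2.5, 5.5)
print(vals)                                             # approximately [3, 4, 5]
```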

  10. An Effective Hybrid Routing Algorithm in WSN: Ant Colony Optimization in combination with Hop Count Minimization.

    PubMed

    Jiang, Ailian; Zheng, Lihong

    2018-03-29

Low cost, high reliability and easy maintenance are key criteria in the design of routing protocols for wireless sensor networks (WSNs). This paper investigates the existing ant colony optimization (ACO)-based WSN routing algorithms and the minimum hop count WSN routing algorithms by reviewing their strengths and weaknesses. We also consider the critical factors of WSNs, such as the energy constraint of sensor nodes, network load balancing and dynamic network topology. We then propose a hybrid routing algorithm that integrates ACO and a minimum hop count scheme. The proposed algorithm is able to find the optimal routing path with minimal total energy consumption and balanced energy consumption on each node. The algorithm is particularly strong in searching for the optimal path, balancing the network load and maintaining the network topology. The WSN model and the proposed algorithm have been implemented using C++. Extensive simulation results have shown that our algorithm outperforms several other WSN routing algorithms in aspects that include the rate of convergence, the success rate in finding the global optimal solution, and the network lifetime.

  11. The research on the mean shift algorithm for target tracking

    NASA Astrophysics Data System (ADS)

    CAO, Honghong

    2017-06-01

The traditional mean shift algorithm for target tracking is effective and highly real-time, but it still has some shortcomings: it easily falls into a local optimum during tracking, its effectiveness is weak when the object moves fast, and the size of the tracking window never changes, so the method fails when the size of the moving object changes. As a result, we propose a new method. We use a particle swarm optimization algorithm to improve the mean shift algorithm for target tracking, while SIFT (scale-invariant feature transform) features and an affine transformation make the size of the tracking window adaptive. Finally, we evaluate the method through comparative experiments. Experimental results indicate that the proposed method can effectively track the object while the tracking window adapts to changes in object size.
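
    For reference, the plain mean-shift update that underlies such trackers, demonstrated on raw 2-D samples (in a tracker the samples would be pixel locations weighted by histogram back-projection):

```python
import numpy as np

def mean_shift(points, start, bandwidth=1.0, tol=1e-5, max_iter=200):
    """Gaussian-kernel mean-shift mode seeking: repeatedly move the window
    centre to the kernel-weighted mean of the samples."""
    y = np.asarray(start, float)
    for _ in range(max_iter):
        w = np.exp(-np.sum((points - y) ** 2, axis=1) / (2 * bandwidth ** 2))
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y

# two blobs; starting near one of them converges to that blob's mode
rng = np.random.default_rng(3)
pts = np.vstack([rng.normal([0, 0], 0.5, (200, 2)), rng.normal([5, 5], 0.5, (200, 2))])
print(mean_shift(pts, start=[4, 4]))    # close to (5, 5)
```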

  12. Score-Level Fusion of Phase-Based and Feature-Based Fingerprint Matching Algorithms

    NASA Astrophysics Data System (ADS)

    Ito, Koichi; Morita, Ayumi; Aoki, Takafumi; Nakajima, Hiroshi; Kobayashi, Koji; Higuchi, Tatsuo

    This paper proposes an efficient fingerprint recognition algorithm combining phase-based image matching and feature-based matching. In our previous work, we have already proposed an efficient fingerprint recognition algorithm using Phase-Only Correlation (POC), and developed commercial fingerprint verification units for access control applications. The use of Fourier phase information of fingerprint images makes it possible to achieve robust recognition for weakly impressed, low-quality fingerprint images. This paper presents an idea of improving the performance of POC-based fingerprint matching by combining it with feature-based matching, where feature-based matching is introduced in order to improve recognition efficiency for images with nonlinear distortion. Experimental evaluation using two different types of fingerprint image databases demonstrates efficient recognition performance of the combination of the POC-based algorithm and the feature-based algorithm.
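
    The phase-only correlation core is short enough to sketch, assuming equally sized grayscale arrays: the normalized cross-spectrum keeps only Fourier phase, so the POC peak location gives the translation and its height a match score.

```python
import numpy as np

def phase_only_correlation(f, g):
    """POC surface of two equally sized images."""
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    R = F * np.conj(G)
    R /= np.abs(R) + 1e-12                         # discard magnitude, keep phase
    return np.real(np.fft.ifft2(R))

rng = np.random.default_rng(4)
img = rng.normal(size=(64, 64))
shifted = np.roll(img, (5, 12), axis=(0, 1))       # known circular shift
poc = phase_only_correlation(shifted, img)
print(np.unravel_index(poc.argmax(), poc.shape), poc.max())   # (5, 12), ~1.0
```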

  13. Galerkin finite difference Laplacian operators on isolated unstructured triangular meshes by linear combinations

    NASA Technical Reports Server (NTRS)

    Baumeister, Kenneth J.

    1990-01-01

    The Galerkin weighted residual technique using linear triangular weight functions is employed to develop finite difference formulae in Cartesian coordinates for the Laplacian operator on isolated unstructured triangular grids. The weighted residual coefficients associated with the weak formulation of the Laplacian operator along with linear combinations of the residual equations are used to develop the algorithm. The algorithm was tested for a wide variety of unstructured meshes and found to give satisfactory results.

  14. Existence and discrete approximation for optimization problems governed by fractional differential equations

    NASA Astrophysics Data System (ADS)

    Bai, Yunru; Baleanu, Dumitru; Wu, Guo-Cheng

    2018-06-01

We investigate a class of generalized differential optimization problems driven by the Caputo derivative. Existence of a weak Carathéodory solution is proved by using the Weierstrass existence theorem, a fixed point theorem and the Filippov implicit function lemma. Then a numerical approximation algorithm is introduced, and a convergence theorem is established. Finally, a nonlinear programming problem constrained by the fractional differential equation is illustrated and the results verify the validity of the algorithm.

  15. Comparing Methods for UAV-Based Autonomous Surveillance

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Harris, Robert; Shafto, Michael

    2004-01-01

We describe an approach to evaluating algorithmic and human performance in directing UAV-based surveillance. Its key elements are a decision-theoretic framework for measuring the utility of a surveillance schedule and an evaluation testbed consisting of 243 scenarios covering a well-defined space of possible missions. We apply this approach to two example UAV-based surveillance methods, a TSP-based algorithm and a human-directed approach, then compare them to identify general strengths and weaknesses of each method.

  16. Characteristics of lentiviral vectors harboring the proximal promoter of the vav proto-oncogene: a weak and efficient promoter for gene therapy.

    PubMed

    Almarza, Elena; Río, Paula; Meza, Nestor W; Aldea, Montserrat; Agirre, Xabier; Guenechea, Guillermo; Segovia, José C; Bueren, Juan A

    2007-08-01

Recently published data have shown the efficacy of gene therapy treatments of certain monogenic diseases. Risks of insertional oncogenesis, however, indicate the necessity of developing new vectors with weaker or cell-restricted promoters to minimize the trans-activation activity of integrated proviruses. We have inserted the proximal promoter of the vav proto-oncogene into self-inactivating lentiviral vectors (vav-LVs) and investigated the expression pattern and therapeutic efficacy of these vectors. Compared with other LVs frequently used in gene therapy, vav-LVs mediated weak, though homogeneous and stable, expression in in vitro-cultured cells. Transplantation experiments using transduced mouse bone marrow and human CD34(+) cells confirmed the stable activity of the promoter in vivo. To investigate whether the weak activity of this promoter was compatible with a therapeutic effect, an LV expressing the Fanconi anemia A (FANCA) gene was constructed (vav-FANCA LV). Although this vector induced a low expression of FANCA, compared to the expression induced by an LV harboring the spleen focus-forming virus (SFFV) promoter, the two vectors corrected the phenotype of cells from a patient with FA-A with the same efficacy. We propose that self-inactivating vectors harboring weak promoters, such as the vav promoter, will improve the safety of gene therapy and will be of particular interest for the treatment of diseases where a high expression of the transgene is not required.

  17. Boundary layer height determination from Lidar for improving air pollution episode modelling: development of new algorithm and evaluation

    NASA Astrophysics Data System (ADS)

    Yang, T.; Wang, Z.; Zhang, W.; Gbaguidi, A.; Sugimoto, N.; Matsui, I.; Wang, X.; Yele, S.

    2017-12-01

Predicting air pollution events in the low atmosphere over megacities requires a thorough understanding of tropospheric dynamic and chemical processes, involving, notably, continuous and accurate determination of the boundary layer height (BLH). Through intensive observation experiments over Beijing (China) and an exhaustive evaluation of existing algorithms applied to BLH determination, persistent critical limitations are noticed, in particular over polluted episodes. Basically, under weak thermal convection with high aerosol loading, none of the retrieval algorithms is able to fully capture the diurnal cycle of the BLH, owing to insufficient vertical mixing of pollutants in the boundary layer associated with the impact of gravity waves on the tropospheric structure. Subsequently, a new approach based on gravity wave theory, the cubic root gradient method (CRGM), is developed to overcome such weaknesses and accurately reproduce the fluctuations of the BLH under various atmospheric pollution conditions. Comprehensive evaluation of CRGM highlights its high performance in determining BLH from Lidar. In comparison with the existing retrieval algorithms, CRGM substantially reduces computational uncertainties and errors in BLH determination (the correlation coefficient increases from 0.44 to 0.91 and the root mean square error decreases from 643 m to 142 m). This newly developed technique is expected to contribute to improving the accuracy of air quality modelling and forecasting systems.
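
    A sketch of the cubic-root-gradient idea as we read the abstract (the published algorithm may include additional smoothing and quality controls): take the vertical gradient of the cube root of the backscatter profile and place the BLH at its strongest negative excursion.

```python
import numpy as np

def blh_crgm(z, signal):
    """Boundary-layer height from the gradient of the cube root of the
    (range-corrected) lidar backscatter; an illustrative reading of CRGM."""
    grad = np.gradient(np.cbrt(signal), z)
    return z[np.argmin(grad)]

# synthetic profile: well-mixed aerosol layer below 1.2 km, clean air above
z = np.linspace(0.1, 4.0, 400)
profile = 1.0 / (1.0 + np.exp((z - 1.2) / 0.05)) + 0.02
print(blh_crgm(z, profile))   # ~1.2
```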

  18. TaDb: A time-aware diffusion-based recommender algorithm

    NASA Astrophysics Data System (ADS)

    Li, Wen-Jun; Xu, Yuan-Yuan; Dong, Qiang; Zhou, Jun-Lin; Fu, Yan

    2015-02-01

    Traditional recommender algorithms usually employ the early and recent records indiscriminately, which overlooks the change of user interests over time. In this paper, we show that the interests of a user remain stable in a short-term interval and drift during a long-term period. Based on this observation, we propose a time-aware diffusion-based (TaDb) recommender algorithm, which assigns different temporal weights to the leading links existing before the target user's collection and the following links appearing after that in the diffusion process. Experiments on four real datasets, Netflix, MovieLens, FriendFeed and Delicious show that TaDb algorithm significantly improves the prediction accuracy compared with the algorithms not considering temporal effects.

  19. A Novel Algorithm Combining Finite State Method and Genetic Algorithm for Solving Crude Oil Scheduling Problem

    PubMed Central

    Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun

    2014-01-01

A hybrid optimization algorithm combining the finite state method (FSM) and the genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and compensate for the deficiencies of the individual methods. In the proposed algorithm, the finite state method makes up for the weakness of the GA, whose local search ability is poor. The heuristic returned by the FSM can guide the GA towards good solutions. The idea behind this is that we can generate promising substructures or partial solutions using the FSM. Furthermore, the FSM can guarantee that the entire solution space is uniformly covered. Therefore, the combination of the two algorithms has better global performance than the existing GA or FSM operated individually. Finally, a real-life crude oil scheduling problem from the literature is used for simulation. The experimental results validate that the proposed method outperforms the state-of-the-art GA method. PMID:24772031

  1. Stability of cosmological detonation fronts

    NASA Astrophysics Data System (ADS)

    Mégevand, Ariel; Membiela, Federico Agustín

    2014-05-01

The steady-state propagation of a phase-transition front is classified, according to hydrodynamics, as a deflagration or a detonation, depending on its velocity with respect to the fluid. These propagation modes are further divided into three types, namely, weak, Jouguet, and strong solutions, according to their disturbance of the fluid. However, some of these hydrodynamic modes will not be realized in a phase transition. One particular cause is the presence of instabilities. In this work we study the linear stability of weak detonations, which are generally believed to be stable. After discussing in detail the weak detonation solution, we consider small perturbations of the interface and the fluid configuration. When the balance between the driving and friction forces is taken into account, it turns out that there are actually two different kinds of weak detonations, which behave very differently as functions of the parameters. We show that the branch of stronger weak detonations is unstable, except very close to the Jouguet point, where our approach breaks down.

  2. Cryo-EM of dynamic protein complexes in eukaryotic DNA replication.

    PubMed

    Sun, Jingchuan; Yuan, Zuanning; Bai, Lin; Li, Huilin

    2017-01-01

DNA replication in eukaryotes is a highly dynamic process that involves several dozen proteins. Some of these proteins form stable complexes that are amenable to high-resolution structure determination by cryo-EM, thanks to the recent advent of direct electron detectors and powerful image analysis algorithms. But many of these proteins associate only transiently and flexibly, precluding traditional biochemical purification. We found that direct mixing of the component proteins followed by 2D and 3D image sorting can capture some very weakly interacting complexes. Even at the 2D average level and at low resolution, EM images of these flexible complexes can provide important biological insights. It is often necessary to positively identify the feature of interest in a low-resolution EM structure. We found that systematically fusing or inserting maltose binding protein (MBP) into selected proteins is highly effective in these situations. In this chapter, we describe the EM studies of several protein complexes involved in eukaryotic DNA replication over the past decade or so. We suggest that some of the approaches used in these studies may be applicable to structural analysis of other biological systems. © 2016 The Protein Society.

  3. The Langley Parameterized Shortwave Algorithm (LPSA) for Surface Radiation Budget Studies. 1.0

    NASA Technical Reports Server (NTRS)

    Gupta, Shashi K.; Kratz, David P.; Stackhouse, Paul W., Jr.; Wilber, Anne C.

    2001-01-01

    An efficient algorithm was developed during the late 1980's and early 1990's by W. F. Staylor at NASA/LaRC for the purpose of deriving shortwave surface radiation budget parameters on a global scale. While the algorithm produced results in good agreement with observations, the lack of proper documentation resulted in a weak acceptance by the science community. The primary purpose of this report is to develop detailed documentation of the algorithm. In the process, the algorithm was modified whenever discrepancies were found between the algorithm and its referenced literature sources. In some instances, assumptions made in the algorithm could not be justified and were replaced with those that were justifiable. The algorithm uses satellite and operational meteorological data for inputs. Most of the original data sources have been replaced by more recent, higher quality data sources, and fluxes are now computed on a higher spatial resolution. Many more changes to the basic radiation scheme and meteorological inputs have been proposed to improve the algorithm and make the product more useful for new research projects. Because of the many changes already in place and more planned for the future, the algorithm has been renamed the Langley Parameterized Shortwave Algorithm (LPSA).

  4. Bayesian Markov Chain Monte Carlo inversion for weak anisotropy parameters and fracture weaknesses using azimuthal elastic impedance

    NASA Astrophysics Data System (ADS)

    Chen, Huaizhen; Pan, Xinpeng; Ji, Yuxin; Zhang, Guangzhi

    2017-08-01

A system of aligned vertical fractures and fine horizontal shale layers combine to form an equivalent orthorhombic medium. Weak anisotropy parameters and fracture weaknesses play an important role in the description of orthorhombic anisotropy (OA). We propose a novel approach for utilizing seismic reflection amplitudes to estimate weak anisotropy parameters and fracture weaknesses from observed seismic data, based on azimuthal elastic impedance (EI). We first express the perturbation in the stiffness matrix in terms of weak anisotropy parameters and fracture weaknesses, and using this perturbation and the scattering function, we derive the PP-wave reflection coefficient and azimuthal EI for an interface separating two OA media. We then demonstrate an approach that first uses a model-constrained damped least-squares algorithm to estimate azimuthal EI from partially incidence-phase-angle-stacked seismic reflection data at different azimuths, and then extracts weak anisotropy parameters and fracture weaknesses from the estimated azimuthal EI using a Bayesian Markov Chain Monte Carlo inversion method. In addition, a new procedure to construct a rock physics effective model is presented to estimate weak anisotropy parameters and fracture weaknesses from well log interpretation results (minerals and their volumes, porosity, saturation, fracture density, etc.). Tests on synthetic and real data indicate that unknown parameters including elastic properties (P- and S-wave impedances and density), weak anisotropy parameters and fracture weaknesses can be estimated stably when the seismic data contain moderate noise, and that our approach provides a reasonable estimation of anisotropy in a fractured shale reservoir.
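
    A generic random-walk Metropolis sampler of the kind used in the inversion step, shown on a toy linear forward model (the paper's forward operator, priors and proposal tuning are of course different):

```python
import numpy as np

def metropolis(logpost, x0, steps=20000, prop_scale=0.05, seed=0):
    """Random-walk Metropolis sampling of a log-posterior."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    lp = logpost(x)
    chain = np.empty((steps, x.size))
    for i in range(steps):
        prop = x + prop_scale * rng.normal(size=x.size)
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# toy forward model d = G m + noise; recover m and its posterior spread
G = np.random.default_rng(1).normal(size=(30, 2))
m_true = np.array([0.3, -0.1])
d = G @ m_true + 0.05 * np.random.default_rng(2).normal(size=30)
logpost = lambda m: -0.5 * np.sum((d - G @ m) ** 2) / 0.05**2 - 0.5 * np.sum(m**2)
chain = metropolis(logpost, [0.0, 0.0])
print(chain[5000:].mean(axis=0), chain[5000:].std(axis=0))   # mean ~ m_true
```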

  5. Ocean observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1998-01-01

    Significant accomplishments made during the present reporting period: (1) We expanded our "spectral-matching" algorithm (SMA), for identifying the presence of absorbing aerosols and simultaneously performing atmospheric correction and derivation of the ocean's bio-optical parameters, to the point where it could be added as a subroutine to the MODIS water-leaving radiance algorithm; (2) A modification to the SMA that does not require detailed aerosol models has been developed. This is important as the requirement for realistic aerosol models has been a weakness of the SMA; and (3) We successfully acquired micro pulse lidar data in a Saharan dust outbreak during ACE-2 in the Canary Islands.

  6. Research on the Automatic Fusion Strategy of Fixed Value Boundary Based on the Weak Coupling Condition of Grid Partition

    NASA Astrophysics Data System (ADS)

    Wang, X. Y.; Dou, J. M.; Shen, H.; Li, J.; Yang, G. S.; Fan, R. Q.; Shen, Q.

    2018-03-01

As power grids are continuously reinforced, their network structure becomes increasingly complicated. Open, regional data modeling is used to compute protection settings on a local-region basis. At the same time, a high-precision, quasi-real-time boundary fusion technique is needed to seamlessly integrate the regions, so as to constitute an integrated fault-computing platform that can conduct high-accuracy, multi-mode transient stability analysis covering the whole network, handle the effects of non-single and cascading faults, and build the "first line of defense" of the power grid. The boundary fusion algorithm in this paper is an automatic fusion algorithm based on accurate boundary coupling of the partitioned grid. It takes the actual operation mode as its qualification and completes the boundary coupling of the various weakly coupled partitions in open-loop mode, improving fusion efficiency, truly reflecting the transient stability level, and effectively addressing the problems of excessive data volume, difficult partition fusion, and fusion failures caused by mutually exclusive conditions. In this paper, the basic principle of the fusion process is introduced first, and the method of boundary fusion customization is then described through a scenario. Finally, an example illustrates how the algorithm effectively implements boundary fusion after grid partitioning, and verifies the accuracy and efficiency of the algorithm.

  7. Deformed Palmprint Matching Based on Stable Regions.

    PubMed

    Wu, Xiangqian; Zhao, Qiushi

    2015-12-01

    Palmprint recognition (PR) is an effective technology for personal recognition. A main problem, which deteriorates the performance of PR, is the deformation of palmprint images. This problem becomes more severe on contactless occasions, in which images are acquired without any guiding mechanisms, and hence critically limits the applications of PR. To solve the deformation problem, in this paper, a model for non-linearly deformed palmprint matching is derived by approximating non-linearly deformed palmprint images with piecewise-linearly deformed stable regions. Based on this model, a novel approach for deformed palmprint matching, named key point-based block growing (KPBG), is proposed. In KPBG, an iterative M-estimator sample consensus algorithm based on scale-invariant feature transform features is devised to compute piecewise-linear transformations that approximate the non-linear deformations of palmprints, and the stable regions complying with the linear transformations are then identified using a block growing algorithm. Palmprint feature extraction and matching are performed over these stable regions to compute matching scores for decision. Experiments on several public palmprint databases show that the proposed model and the KPBG approach can effectively solve the deformation problem in palmprint verification and outperform state-of-the-art methods.
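
    The robust-fitting step at the heart of KPBG, estimating a linear transformation from keypoint correspondences, can be sketched as follows. This is a hedged approximation: it uses OpenCV's SIFT matching and RANSAC in place of the paper's iterative M-estimator sample consensus, and all function and variable names are illustrative.

        import cv2
        import numpy as np

        def estimate_block_transform(img_ref, img_probe):
            # SIFT keypoints and descriptors on two palmprint blocks
            sift = cv2.SIFT_create()
            kp1, des1 = sift.detectAndCompute(img_ref, None)
            kp2, des2 = sift.detectAndCompute(img_probe, None)
            # Lowe's ratio test keeps distinctive correspondences
            pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
            good = [p[0] for p in pairs
                    if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
            if len(good) < 3:
                return None, None
            src = np.float32([kp1[m.queryIdx].pt for m in good])
            dst = np.float32([kp2[m.trainIdx].pt for m in good])
            # Robust linear fit (rotation + scale + translation); the inlier
            # set would seed the block-growing of a stable region
            return cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                               ransacReprojThreshold=3.0)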

  8. DOA Estimation for Underwater Wideband Weak Targets Based on Coherent Signal Subspace and Compressed Sensing.

    PubMed

    Li, Jun; Lin, Qiu-Hua; Kang, Chun-Yu; Wang, Kai; Yang, Xiu-Ting

    2018-03-18

    Direction of arrival (DOA) estimation is the basis for underwater target localization and tracking using towed line array sonar devices. A method of DOA estimation for underwater wideband weak targets based on coherent signal subspace (CSS) processing and compressed sensing (CS) theory is proposed. Under the CSS processing framework, wideband frequency focusing is accomplished via a two-sided correlation transformation, allowing the DOA of underwater wideband targets to be estimated based on the spatial sparsity of the targets and a compressed sensing reconstruction algorithm. Through analysis and processing of simulation data and marine trial data, it is shown that this method can accomplish the DOA estimation of underwater wideband weak targets. Results also show that this method can considerably improve the spatial spectrum of weak target signals, enhancing the ability to detect them. It can solve the problems of low directional resolution and unreliable weak-target detection in traditional beamforming technology. Compared with the conventional minimum variance distortionless response (MVDR) beamformer, this method has many advantages, such as higher directional resolution, wider detection range, fewer required snapshots and more accurate detection of weak targets.
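
    Once CSS focusing has aligned the wideband data to a single reference frequency, the DOA problem becomes sparse recovery on an angular grid. The toy sketch below, which assumes a uniform line array and uses orthogonal matching pursuit as one simple compressed-sensing reconstruction (not necessarily the authors' solver), recovers one strong and one weak source.

        import numpy as np

        def steering_matrix(n_sensors, grid_deg, d_over_lambda=0.5):
            # ULA steering vectors on a dense angular grid (narrowband,
            # i.e. after CSS focusing to the reference frequency)
            k = np.arange(n_sensors)[:, None]
            theta = np.deg2rad(np.asarray(grid_deg))[None, :]
            return np.exp(-2j * np.pi * d_over_lambda * k * np.sin(theta))

        def omp_doa(x, A, n_targets):
            # Orthogonal matching pursuit: greedily pick the grid angles
            # that best explain the focused snapshot x
            residual, support = x.copy(), []
            for _ in range(n_targets):
                support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
                coef, *_ = np.linalg.lstsq(A[:, support], x, rcond=None)
                residual = x - A[:, support] @ coef
            return sorted(support)

        grid = np.arange(-90.0, 90.5, 0.5)
        A = steering_matrix(16, grid)
        rng = np.random.default_rng(0)
        # one strong source at 10 deg and one weak source at 30 deg
        x = A[:, [200, 240]] @ np.array([1.0, 0.3 + 0j])
        x += 0.01 * (rng.standard_normal(16) + 1j * rng.standard_normal(16))
        print([grid[i] for i in omp_doa(x, A, 2)])   # -> [10.0, 30.0]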

  9. Interferometric tomography of continuous fields with incomplete projections

    NASA Technical Reports Server (NTRS)

    Cha, Soyoung S.; Sun, Hogwei

    1988-01-01

    Interferometric tomography in the presence of an opaque object is investigated. The developed iterative algorithm does not require augmenting the missing projection data. It is based on the successive reconstruction of the difference field, the difference between the object field to be reconstructed and its current estimate, restricted to the defined region. Application of the algorithm results in stable convergence.
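
    A schematic of the difference-field iteration described above might look as follows; the forward and adjoint operators, step size and mask are hypothetical placeholders standing in for the interferometer's actual projection geometry.

        import numpy as np

        def difference_field_reconstruction(project, backproject, data, mask,
                                            n_iter=50, step=0.5):
            # project/backproject: hypothetical forward and adjoint operators
            # data: measured (incomplete) projections
            # mask: 1 inside the defined region, 0 behind the opaque object
            estimate = np.zeros(mask.shape)
            for _ in range(n_iter):
                residual = data - project(estimate)   # difference projections
                update = backproject(residual)        # difference field
                estimate += step * update * mask      # correct defined region only
            return estimate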

  10. Optimization-based Approach to Cross-layer Resource Management in Wireless Networked Control Systems

    DTIC Science & Technology

    2013-05-01

    ...interest from both academia and industry [37], finding applications in unmanned robotic vehicles, automated highways and factories, smart homes and... is stable when the scaler varies slowly. The algorithm is further extended to utilize the slack resource in the network, which leads to the... (Table of contents: optimal sampling rate allocation formulation; price-based algorithm.)

  11. Algorithm-Dependent Generalization Bounds for Multi-Task Learning.

    PubMed

    Liu, Tongliang; Tao, Dacheng; Song, Mingli; Maybank, Stephen J

    2017-02-01

    Often, tasks are collected for multi-task learning (MTL) because they share similar feature structures. Based on this observation, in this paper, we present novel algorithm-dependent generalization bounds for MTL by exploiting the notion of algorithmic stability. We focus on the performance of one particular task and the average performance over multiple tasks by analyzing the generalization ability of a common parameter that is shared in MTL. When focusing on one particular task, with the help of a mild assumption on the feature structures, we interpret the function of the other tasks as a regularizer that produces a specific inductive bias. The algorithm for learning the common parameter, as well as the predictor, is thereby uniformly stable with respect to the domain of the particular task and has a generalization bound with a fast convergence rate of order O(1/n), where n is the sample size of the particular task. When focusing on the average performance over multiple tasks, we prove that a similar inductive bias exists under certain conditions on the feature structures. Thus, the corresponding algorithm for learning the common parameter is also uniformly stable with respect to the domains of the multiple tasks, and its generalization bound is of the order O(1/T), where T is the number of tasks. These theoretical analyses naturally show that the similarity of feature structures in MTL will lead to specific regularizations for prediction, which enables the learning algorithms to generalize quickly and correctly from a few examples.

  12. Distinguishing Motor Weakness From Impaired Spatial Awareness: A Helping Hand!

    PubMed

    Raju, Suneil A; Swift, Charles R; Bardhan, Karna Dev

    2017-01-01

    Our patient, aged 73 years, had background peripheral neuropathy of unknown cause, stable for several years, which caused some difficulty in walking on uneven ground. He attended for a teaching session but now staggered in, a new development. He had apparent weakness of his right arm, but there was difficulty in distinguishing motor weakness from impaired spatial awareness suggestive of parietal lobe dysfunction. With the patient seated, eyes closed, and left arm outstretched, S.A.R. lifted the patient's right arm and asked him to indicate when both were level. This confirmed motor weakness. Urgent computed tomographic scan confirmed left subdural haematoma and its urgent evacuation rapidly resolved the patient's symptoms. Intrigued by our patient's case, we explored further and learnt that in rehabilitation medicine, the awareness of limb position is commonly viewed in terms of joint position sense. We present recent literature evidence indicating that the underlying mechanisms are more subtle.

  13. Distinguishing Motor Weakness From Impaired Spatial Awareness: A Helping Hand!

    PubMed Central

    Raju, Suneil A; Swift, Charles R; Bardhan, Karna Dev

    2017-01-01

    Our patient, aged 73 years, had background peripheral neuropathy of unknown cause, stable for several years, which caused some difficulty in walking on uneven ground. He attended for a teaching session but now staggered in, a new development. He had apparent weakness of his right arm, but there was difficulty in distinguishing motor weakness from impaired spatial awareness suggestive of parietal lobe dysfunction. With the patient seated, eyes closed, and left arm outstretched, S.A.R. lifted the patient’s right arm and asked him to indicate when both were level. This confirmed motor weakness. Urgent computed tomographic scan confirmed left subdural haematoma and its urgent evacuation rapidly resolved the patient’s symptoms. Intrigued by our patient’s case, we explored further and learnt that in rehabilitation medicine, the awareness of limb position is commonly viewed in terms of joint position sense. We present recent literature evidence indicating that the underlying mechanisms are more subtle. PMID:28579860

  14. Ambulance Clinical Triage for Acute Stroke Treatment: Paramedic Triage Algorithm for Large Vessel Occlusion.

    PubMed

    Zhao, Henry; Pesavento, Lauren; Coote, Skye; Rodrigues, Edrich; Salvaris, Patrick; Smith, Karen; Bernard, Stephen; Stephenson, Michael; Churilov, Leonid; Yassi, Nawaf; Davis, Stephen M; Campbell, Bruce C V

    2018-04-01

    Clinical triage scales for prehospital recognition of large vessel occlusion (LVO) are limited by low specificity when applied by paramedics. We created the 3-step ambulance clinical triage for acute stroke treatment (ACT-FAST) as the first algorithmic LVO identification tool, designed to improve specificity by recognizing only severe clinical syndromes and optimizing paramedic usability and reliability. The ACT-FAST algorithm consists of (1) unilateral arm drift to stretcher <10 seconds, (2) severe language deficit (if right arm is weak) or gaze deviation/hemineglect assessed by simple shoulder tap test (if left arm is weak), and (3) eligibility and stroke mimic screen. ACT-FAST examination steps were retrospectively validated, and then prospectively validated by paramedics transporting culturally and linguistically diverse patients with suspected stroke in the emergency department, for the identification of internal carotid or proximal middle cerebral artery occlusion. The diagnostic performance of the full ACT-FAST algorithm was then validated for patients accepted for thrombectomy. In retrospective (n=565) and prospective paramedic (n=104) validation, ACT-FAST displayed higher overall accuracy and specificity, when compared with existing LVO triage scales. Agreement of ACT-FAST between paramedics and doctors was excellent (κ=0.91; 95% confidence interval, 0.79-1.0). The full ACT-FAST algorithm (n=60) assessed by paramedics showed high overall accuracy (91.7%), sensitivity (85.7%), specificity (93.5%), and positive predictive value (80%) for recognition of endovascular-eligible LVO. The 3-step ACT-FAST algorithm shows higher specificity and reliability than existing scales for clinical LVO recognition, despite requiring just 2 examination steps. The inclusion of an eligibility step allowed recognition of endovascular-eligible patients with high accuracy. Using a sequential algorithmic approach eliminates scoring confusion and reduces assessment time. Future studies will test whether field application of ACT-FAST by paramedics to bypass suspected patients with LVO directly to endovascular-capable centers can reduce delays to endovascular thrombectomy. © 2018 American Heart Association, Inc.
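
    As a rough illustration, the three sequential steps of ACT-FAST can be encoded as a short decision function. This is a hypothetical sketch for exposition only, not the published clinical instrument.

        def act_fast_triage(arm_falls_within_10s, weak_side,
                            severe_language_deficit, gaze_deviation_or_neglect,
                            passes_eligibility_mimic_screen):
            # Step 1: unilateral arm drift to stretcher within 10 seconds
            if not arm_falls_within_10s:
                return False
            # Step 2: severe language deficit (right arm weak) or gaze
            # deviation/hemineglect via shoulder-tap test (left arm weak)
            if weak_side == "right" and not severe_language_deficit:
                return False
            if weak_side == "left" and not gaze_deviation_or_neglect:
                return False
            # Step 3: eligibility and stroke-mimic screen
            return passes_eligibility_mimic_screen

        # right-arm weakness with severe aphasia, screen passed -> LVO-likely
        assert act_fast_triage(True, "right", True, False, True)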

  15. Weak Perturbations of Biochemical Oscillators

    NASA Astrophysics Data System (ADS)

    Gailey, Paul

    2001-03-01

    Biochemical oscillators may play important roles in gene regulation, circadian rhythms, physiological signaling, and sensory processes. These oscillations typically occur inside cells where the small numbers of reacting molecules result in fluctuations in the oscillation period. Some oscillation mechanisms have been reported that resist fluctuations and produce more stable oscillations. In this paper, we consider the use of biochemical oscillators as sensors by comparing inherent fluctuations with the effects of weak perturbations to one of the reactants. Such systems could be used to produce graded responses to weak stimuli. For example, a leading hypothesis to explain geomagnetic navigation in migrating birds and other animals is based on magnetochemical reactions. Because the magnitude of magnetochemical effects is small at geomagnetic field strengths, a sensitive, noise resistant detection scheme would be required.

  16. An Algorithm for Interactive Modeling of Space-Transportation Engine Simulations: A Constraint Satisfaction Approach

    NASA Technical Reports Server (NTRS)

    Mitra, Debasis; Thomas, Ajai; Hemminger, Joseph; Sakowski, Barbara

    2001-01-01

    In this research we have developed an algorithm for constraint processing that utilizes relational algebraic operators. Van Beek and others have previously investigated this type of constraint processing within a relational algebraic framework, producing some unique results. Apart from providing new theoretical angles, this approach also gives the opportunity to use existing efficient implementations of relational database management systems as the underlying data structures for any relevant algorithm. Our algorithm enhances that framework. The algorithm is quite general in its current form. Weak heuristics (like forward checking) developed within the constraint-satisfaction problem (CSP) area could also be easily plugged into this algorithm for further gains in efficiency. The algorithm as developed here is targeted toward a component-oriented modeling problem that we are currently working on, namely the problem of interactive modeling for batch-simulation of engineering systems (IMBSES). However, it could be adapted for many other CSP problems as well. The research addresses the algorithm and many aspects of the IMBSES problem that we are currently handling.

  17. Algorithm Engineering: Concepts and Practice

    NASA Astrophysics Data System (ADS)

    Chimani, Markus; Klein, Karsten

    Over the last years, the term algorithm engineering has become a widespread synonym for experimental evaluation in the context of algorithm development. Yet it implies even more. We discuss the major weaknesses of traditional "pen and paper" algorithmics and the ever-growing gap between theory and practice in the context of modern computer hardware and real-world problem instances. We present the key ideas and concepts of the central algorithm engineering cycle that is based on a full feedback loop: it starts with the design of the algorithm, followed by analysis, implementation, and experimental evaluation. The results of the latter can then be reused for modifications to the algorithmic design, stronger or input-specific theoretical performance guarantees, and so on. We describe the individual steps of the cycle, explaining the rationale behind them and giving examples of how to conduct these steps thoughtfully. Thereby we give an introduction to current algorithmic key issues like I/O-efficient or parallel algorithms, succinct data structures, hardware-aware implementations, and others. We conclude with two especially insightful success stories (shortest path problems and text search) where the application of algorithm engineering techniques led to tremendous performance improvements compared with previous state-of-the-art approaches.

  18. Cross-wind profiling based on the scattered wave scintillation in a telescope focus.

    PubMed

    Banakh, V A; Marakasov, D A; Vorontsov, M A

    2007-11-20

    The problem of reconstructing the wind profile from the scintillation of an optical wave scattered off a rough surface, measured in the focal plane of a telescope, is considered. Both the expression for the spatiotemporal correlation function and an algorithm for reconstructing cross-wind velocity and direction profiles, based on the spatiotemporal spectrum of the intensity of an optical wave scattered by a diffuse target in a turbulent atmosphere, are presented. Computer simulations performed under conditions of weak optical turbulence demonstrate wind profile reconstruction by the developed algorithm.

  19. Survey of Quantification and Distance Functions Used for Internet-based Weak-link Sociological Phenomena

    DTIC Science & Technology

    2016-03-01

    The PI studied all the mathematical literature he could find related to the Google search engine, Google matrix, and PageRank, as well as the Yahoo search engine and a classic SearchKing HIST algorithm. The co-PI immersed herself in the sociology literature for the relevant...

  20. Full cycle rapid scan EPR deconvolution algorithm.

    PubMed

    Tseytlin, Mark

    2017-08-01

    Rapid scan electron paramagnetic resonance (RS EPR) is a continuous-wave (CW) method that combines narrowband excitation and broadband detection. Sinusoidal magnetic field scans that span the entire EPR spectrum cause electron spin excitations twice during the scan period. Periodic transient RS signals are digitized and time-averaged. Deconvolution of the absorption spectrum from the measured full-cycle signal is an ill-posed problem that does not have a stable solution, because the magnetic field passes the same EPR line twice per sinusoidal scan, during the up- and down-field passages. As a result, RS signals consist of two contributions that need to be separated and postprocessed individually. Deconvolution of either of the contributions is a well-posed problem that has a stable solution. The current version of the RS EPR algorithm solves the separation problem by cutting the full-scan signal into two half-period pieces. This imposes a constraint on the experiment: the EPR signal must completely decay by the end of each half-scan in order not to be truncated. The constraint limits the maximum scan frequency and, therefore, the RS signal-to-noise gain. Faster scans permit the use of higher excitation powers without saturating the spin system, translating into a higher EPR sensitivity. A stable, full-scan algorithm is described in this paper that does not require truncation of the periodic response. This algorithm utilizes the additive property of linear systems: the response to a sum of two inputs is equal to the sum of the responses to each of the inputs separately. Based on this property, the mathematical model for CW RS EPR can be replaced by that of a sum of two independent full-cycle pulsed field-modulated experiments. In each of these experiments, the excitation power equals zero during either the up- or the down-field scan. The full-cycle algorithm permits approaching the upper theoretical scan frequency limit, at which the transient spin system response must decay within the scan period. Separation of the interfering up- and down-field scan responses remains a challenge for reaching the full potential of this new method; for this reason, only a factor of two increase in the scan rate was achieved in comparison with the standard half-scan RS EPR algorithm. Importantly for practical use, faster scans do not necessarily increase the signal bandwidth, because the acceleration of the Larmor frequency driven by the changing magnetic field changes sign after passing the inflection points of the scan. The half-scan and full-scan algorithms are compared using a LiNC-BuO spin probe of known line-shape, demonstrating that the new method produces stable solutions when RS signals do not completely decay by the end of each half-scan. Copyright © 2017 Elsevier Inc. All rights reserved.
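
    The additive property the full-cycle algorithm relies on can be demonstrated in a few lines: for any linear response (modeled here as convolution with a toy decaying kernel), the response to the summed up- and down-field excitations equals the sum of the individual responses, so the two scans can be treated as independent experiments. All signals below are synthetic stand-ins.

        import numpy as np

        rng = np.random.default_rng(1)
        kernel = np.exp(-np.arange(256) / 40.0)   # toy linear impulse response
        up = rng.standard_normal(1024)            # excitation, up-field scan only
        down = rng.standard_normal(1024)          # excitation, down-field scan only

        respond = lambda drive: np.convolve(drive, kernel)[:1024]
        # superposition: one full-cycle experiment = two single-scan experiments
        assert np.allclose(respond(up + down), respond(up) + respond(down))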

  1. Full cycle rapid scan EPR deconvolution algorithm

    NASA Astrophysics Data System (ADS)

    Tseytlin, Mark

    2017-08-01

    Rapid scan electron paramagnetic resonance (RS EPR) is a continuous-wave (CW) method that combines narrowband excitation and broadband detection. Sinusoidal magnetic field scans that span the entire EPR spectrum cause electron spin excitations twice during the scan period. Periodic transient RS signals are digitized and time-averaged. Deconvolution of the absorption spectrum from the measured full-cycle signal is an ill-posed problem that does not have a stable solution, because the magnetic field passes the same EPR line twice per sinusoidal scan, during the up- and down-field passages. As a result, RS signals consist of two contributions that need to be separated and postprocessed individually. Deconvolution of either of the contributions is a well-posed problem that has a stable solution. The current version of the RS EPR algorithm solves the separation problem by cutting the full-scan signal into two half-period pieces. This imposes a constraint on the experiment: the EPR signal must completely decay by the end of each half-scan in order not to be truncated. The constraint limits the maximum scan frequency and, therefore, the RS signal-to-noise gain. Faster scans permit the use of higher excitation powers without saturating the spin system, translating into a higher EPR sensitivity. A stable, full-scan algorithm is described in this paper that does not require truncation of the periodic response. This algorithm utilizes the additive property of linear systems: the response to a sum of two inputs is equal to the sum of the responses to each of the inputs separately. Based on this property, the mathematical model for CW RS EPR can be replaced by that of a sum of two independent full-cycle pulsed field-modulated experiments. In each of these experiments, the excitation power equals zero during either the up- or the down-field scan. The full-cycle algorithm permits approaching the upper theoretical scan frequency limit, at which the transient spin system response must decay within the scan period. Separation of the interfering up- and down-field scan responses remains a challenge for reaching the full potential of this new method; for this reason, only a factor of two increase in the scan rate was achieved in comparison with the standard half-scan RS EPR algorithm. Importantly for practical use, faster scans do not necessarily increase the signal bandwidth, because the acceleration of the Larmor frequency driven by the changing magnetic field changes sign after passing the inflection points of the scan. The half-scan and full-scan algorithms are compared using a LiNC-BuO spin probe of known line-shape, demonstrating that the new method produces stable solutions when RS signals do not completely decay by the end of each half-scan.

  2. A Multi-Scale Method for Dynamics Simulation in Continuum Solvent Models I: Finite-Difference Algorithm for Navier-Stokes Equation

    PubMed Central

    Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray

    2014-01-01

    A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design. PMID:25404761

  3. A meshless EFG-based algorithm for 3D deformable modeling of soft tissue in real-time.

    PubMed

    Abdi, Elahe; Farahmand, Farzam; Durali, Mohammad

    2012-01-01

    The meshless element-free Galerkin method was generalized and an algorithm was developed for 3D dynamic modeling of deformable bodies in real time. The efficacy of the algorithm was investigated in a 3D linear viscoelastic model of a human spleen subjected to a time-varying compressive force exerted by a surgical grasper. The model remained stable despite the considerably large deformations that occurred. There was good agreement between the results and those of an equivalent finite element model. The computational cost, however, was much lower, enabling the proposed algorithm to be used effectively in real-time applications.

  4. Pulsed polarization spectroscopy with strong fields and an optically thick sample

    NASA Astrophysics Data System (ADS)

    Spano, Frank C.; Lehmann, Kevin K.

    1992-06-01

    The theory of pulsed polarization spectroscopy in the case of a saturating pump pulse and an optically thick sample is presented, both with and without inhomogeneous broadening. It is found that the molecular anisotropy produced by pumping an R- or P-branch transition is maximized by using a pulse whose flip angle is near 2π for the M component with the largest Rabi frequency. Calculations with no or extreme inhomogeneous broadening differ insignificantly. Such a pump pulse produces an anisotropy (and thus a polarization rotation of the probe beam) opposite in sign to that produced by weak-field excitation. Pulse-propagation calculations, obtained by numerically solving the coupled Maxwell-Bloch equations, demonstrate that there exist "stable-area" pulses, much like for a two-level system. The lowest such stable pulse produces a flip angle of 2.21π for the M=0 level and produces close to the maximum polarization anisotropy. This pulse still weakly excites the sample, and thus lengthens as it propagates to conserve area. The effective absorption coefficient, however, is much less than that for weak pulses. It is expected that such pulses should provide an order of magnitude or more sensitivity for polarization spectroscopy than that obtained with nonsaturating pulses.

  5. Weak lasing in one-dimensional polariton superlattices

    PubMed Central

    Zhang, Long; Xie, Wei; Wang, Jian; Poddubny, Alexander; Lu, Jian; Wang, Yinglei; Gu, Jie; Liu, Wenhui; Xu, Dan; Shen, Xuechu; Rubo, Yuri G.; Altshuler, Boris L.; Kavokin, Alexey V.; Chen, Zhanghai

    2015-01-01

    Bosons with finite lifetime exhibit condensation and lasing when their influx exceeds the lasing threshold determined by the dissipative losses. In general, different one-particle states decay differently, and the bosons are usually assumed to condense in the state with the longest lifetime. Interaction between the bosons, partially neglected by such an assumption, can smear the lasing threshold into a threshold domain: a stable lasing many-body state exists within certain intervals of the bosonic influxes. This recently described weak lasing regime is formed by the spontaneous-symmetry-breaking and phase-locking self-organization of bosonic modes, which results in an essentially many-body state with a stable balance between gains and losses. Here we report, to our knowledge, the first observation of the weak lasing phase in a one-dimensional condensate of exciton-polaritons subject to a periodic potential. Real and reciprocal space photoluminescence images demonstrate that the spatial period of the condensate is twice as large as the period of the underlying periodic potential. These experiments are realized at room temperature in a ZnO microwire deposited on a silicon grating. The period doubling takes place at a critical pumping power, whereas at a lower power polariton emission images have the same periodicity as the grating. PMID:25787253

  6. Matrix Isolation and ab initio study of the noncovalent complexes between formamide and acetylene.

    PubMed

    Mardyukov, Artur; Sánchez-García, Elsa; Sander, Wolfram

    2009-02-12

    Matrix isolation spectroscopy in combination with ab initio calculations is a powerful technique for the identification of weakly bound intermolecular complexes. Here, weak complexes between formamide and acetylene are studied, and three 1:1 complexes with binding energies of -2.96, -2.46, and -1.79 kcal/mol have been found at the MP2 level of theory (MP2/cc-pVTZ + ZPE + BSSE). The two most stable dimers A and B are identified in argon and nitrogen matrices by comparison between the experimental and calculated infrared frequencies. Both complexes are stabilized by the formamide C=O...HC acetylene and H...pi interactions. Large shifts have been observed experimentally for the C-H stretching vibrations of the acetylene molecule, in very good agreement with the calculated values. Eight 1:2 FMA-acetylene trimers (T-A to T-H) with binding energies between -5.44 and -2.62 kcal/mol (MP2/aug-cc-pVDZ + ZPE + BSSE) were calculated. The two most stable trimers T-A and T-B are very close in energy and have similar infrared spectra. Several weak bands that are in agreement with the calculated frequencies of the trimers T-A and T-B are observed under matrix isolation conditions. However, the differences are too small for a definitive assignment.

  7. An improved stochastic fractal search algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Sun, Chuan; Wang, Bin; Wang, Xiaojun

    2018-05-03

    Protein structure prediction (PSP) is a significant area for biological information research, disease treatment, drug development, and so on. In this paper, three-dimensional structures of proteins are predicted based on known amino acid sequences, and the structure prediction problem is transformed into a typical NP problem via an AB off-lattice model. This work applies a novel improved Stochastic Fractal Search algorithm (ISFS) to solve the problem. The Stochastic Fractal Search algorithm (SFS) is an effective evolutionary algorithm that performs well in exploring the search space but sometimes falls into local minima. In order to avoid this weakness, Lévy flight and internal feedback information are introduced in ISFS. In the experimental process, simulations are conducted with the ISFS algorithm on Fibonacci sequences and real peptide sequences. Experimental results prove that ISFS performs more efficiently and robustly in terms of finding the global minimum and avoiding getting stuck in local minima.
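
    The Lévy-flight perturbation that ISFS adds to stochastic fractal search is commonly generated with Mantegna's algorithm, sketched below; the step scale and the torsion-angle encoding of a conformation are illustrative assumptions, not the authors' exact settings.

        import numpy as np
        from math import gamma, pi, sin

        def levy_step(dim, beta=1.5, rng=np.random.default_rng()):
            # Mantegna's algorithm for a heavy-tailed Levy-stable step
            sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                     (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = rng.normal(0.0, sigma, dim)
            v = rng.normal(0.0, 1.0, dim)
            return u / np.abs(v) ** (1 / beta)

        # perturb a candidate conformation (here: a vector of torsion angles)
        angles = np.zeros(13)
        candidate = angles + 0.01 * levy_step(angles.size)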

  8. Multigroup Monte Carlo on GPUs: Comparison of history- and event-based algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Steven P.; Slattery, Stuart R.; Evans, Thomas M.

    This article presents an investigation of the performance of different multigroup Monte Carlo transport algorithms on GPUs with a discussion of both history-based and event-based approaches. Several algorithmic improvements are introduced for both approaches. By modifying the history-based algorithm that is traditionally favored in CPU-based MC codes to occasionally filter out dead particles to reduce thread divergence, performance exceeds that of either the pure history-based or event-based approaches. The impacts of several algorithmic choices are discussed, including performance studies on Kepler and Pascal generation NVIDIA GPUs for fixed source and eigenvalue calculations. Single-device performance equivalent to 20–40 CPU cores on the K40 GPU and 60–80 CPU cores on the P100 GPU is achieved. In addition, nearly perfect multi-device parallel weak scaling is demonstrated on more than 16,000 nodes of the Titan supercomputer.

  9. Joint demosaicking and zooming using moderate spectral correlation and consistent edge map

    NASA Astrophysics Data System (ADS)

    Zhou, Dengwen; Dong, Weiming; Chen, Wengang

    2014-07-01

    The recently published joint demosaicking and zooming algorithms for single-sensor digital cameras all overfit the popular Kodak test images, which have been found to have higher spectral correlation than typical color images. Their performance perhaps significantly degrades on other datasets, such as the McMaster test images, which have weak spectral correlation. A new joint demosaicking and zooming algorithm is proposed for the Bayer color filter array (CFA) pattern, in which the edge direction information (edge map) extracted from the raw CFA data is consistently used in demosaicking and zooming. It also moderately utilizes the spectral correlation between color planes. The experimental results confirm that the proposed algorithm produces an excellent performance on both the Kodak and McMaster datasets in terms of both subjective and objective measures. Our algorithm also has high computational efficiency. It provides a better tradeoff among adaptability, performance, and computational cost compared to the existing algorithms.

  10. Fuzzy Sarsa with Focussed Replacing Eligibility Traces for Robust and Accurate Control

    NASA Astrophysics Data System (ADS)

    Kamdem, Sylvain; Ohki, Hidehiro; Sueda, Naomichi

    Several methods of reinforcement learning in continuous state and action spaces that utilize fuzzy logic have been proposed in recent years. This paper introduces Fuzzy Sarsa(λ), an on-policy algorithm for fuzzy learning that relies on a novel way of computing replacing eligibility traces to accelerate the policy evaluation. It is tested against several temporal difference learning algorithms: Sarsa(λ), Fuzzy Q(λ), an earlier fuzzy version of Sarsa and an actor-critic algorithm. We perform detailed evaluations on two benchmark problems: a maze domain and the cart pole. Results of various tests highlight the strengths and weaknesses of these algorithms and show that Fuzzy Sarsa(λ) outperforms all other algorithms tested for a larger granularity of design and under noisy conditions. It is a highly competitive method of learning in realistic noisy domains where a denser fuzzy design over the state space is needed for more precise control.
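
    For readers unfamiliar with replacing traces, the tabular Sarsa(λ) update that Fuzzy Sarsa(λ) generalizes can be sketched as follows; the environment interface (reset/step) is a hypothetical stand-in.

        import numpy as np

        def sarsa_lambda_episode(env, Q, alpha=0.1, gamma=0.99, lam=0.9, eps=0.1):
            def policy(s):   # epsilon-greedy action selection
                if np.random.rand() < eps:
                    return np.random.randint(Q.shape[1])
                return int(np.argmax(Q[s]))

            e = np.zeros_like(Q)            # eligibility traces
            s = env.reset()
            a = policy(s)
            done = False
            while not done:
                s2, r, done = env.step(a)
                a2 = policy(s2)
                delta = r + (0.0 if done else gamma * Q[s2, a2]) - Q[s, a]
                e[s, a] = 1.0               # replacing trace (not e[s, a] += 1)
                Q += alpha * delta * e      # credit all recently visited pairs
                e *= gamma * lam            # decay traces
                s, a = s2, a2
            return Q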

  11. Multigroup Monte Carlo on GPUs: Comparison of history- and event-based algorithms

    DOE PAGES

    Hamilton, Steven P.; Slattery, Stuart R.; Evans, Thomas M.

    2017-12-22

    This article presents an investigation of the performance of different multigroup Monte Carlo transport algorithms on GPUs with a discussion of both history-based and event-based approaches. Several algorithmic improvements are introduced for both approaches. By modifying the history-based algorithm that is traditionally favored in CPU-based MC codes to occasionally filter out dead particles to reduce thread divergence, performance exceeds that of either the pure history-based or event-based approaches. The impacts of several algorithmic choices are discussed, including performance studies on Kepler and Pascal generation NVIDIA GPUs for fixed source and eigenvalue calculations. Single-device performance equivalent to 20–40 CPU cores on the K40 GPU and 60–80 CPU cores on the P100 GPU is achieved. In addition, nearly perfect multi-device parallel weak scaling is demonstrated on more than 16,000 nodes of the Titan supercomputer.

  12. A surface hopping algorithm for nonadiabatic minimum energy path calculations.

    PubMed

    Schapiro, Igor; Roca-Sanjuán, Daniel; Lindh, Roland; Olivucci, Massimo

    2015-02-15

    The article introduces a robust algorithm for the computation of minimum energy paths transiting regions where adiabatic states are degenerate or nearly degenerate. The method facilitates studies of excited-state reactivity involving weakly avoided crossings and conical intersections. Based on an analysis of the change in the multiconfigurational wave function, the algorithm decides whether the optimization should continue following the same electronic state or switch to a different state. This algorithm helps to overcome convergence difficulties near degeneracies. The implementation in the MOLCAS quantum chemistry package is discussed. To demonstrate the utility of the proposed procedure, four examples of application are provided: thymine, asulam, 1,2-dioxetane, and a three-double-bond model of the 11-cis-retinal protonated Schiff base. © 2015 Wiley Periodicals, Inc.

  13. Chaos control of Hastings–Powell model by combining chaotic motions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Danca, Marius-F., E-mail: danca@rist.ro; Chattopadhyay, Joydev, E-mail: joydev@isical.ac.in

    2016-04-15

    In this paper, we propose a Parameter Switching (PS) algorithm as a new chaos control method for the Hastings–Powell (HP) system. The PS algorithm is a convergent scheme that switches the control parameter within a set of values while the controlled system is numerically integrated. The attractor obtained with the PS algorithm matches the attractor obtained by integrating the system with the parameter replaced by the average of the switched parameter values. The switching rule can be applied periodically or randomly over a set of given values. In this way, every stable cycle of the HP system can be approximated if its underlying parameter value equals the average of the switching values. Moreover, the PS algorithm can be viewed as a generalization of Parrondo's game, which is applied for the first time to the HP system, by showing that losing strategies can win: "losing + losing = winning." If "losing" is replaced with "chaos" and "winning" with "order" (as the opposite of "chaos"), then by switching the parameter value in the HP system between two values that generate chaotic motions, the PS algorithm can approximate a stable cycle, so that symbolically one can write "chaos + chaos = regular." Also, by considering a different parameter control, new complex dynamics of the HP model are revealed.
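
    The mechanics of the PS algorithm can be sketched on a toy system: the parameter is switched among given values at a fixed period during numerical integration, and the resulting attractor is compared with the one obtained using the averaged parameter. The damped oscillator below is an illustrative stand-in for the HP food-chain model.

        import numpy as np

        def integrate_with_switching(f, x0, p_values, dt=1e-3,
                                     n_steps=200_000, period=50):
            # integrate dx/dt = f(x, p) while p cycles through p_values
            x = np.asarray(x0, dtype=float)
            traj = np.empty((n_steps, x.size))
            for i in range(n_steps):
                p = p_values[(i // period) % len(p_values)]
                x = x + dt * f(x, p)        # forward Euler for brevity
                traj[i] = x
            return traj

        # toy system: damped oscillator with switched stiffness p
        f = lambda x, p: np.array([x[1], -p * x[0] - 0.1 * x[1]])
        switched = integrate_with_switching(f, [1.0, 0.0], p_values=[0.8, 1.2])
        averaged = integrate_with_switching(f, [1.0, 0.0], p_values=[1.0])
        # the two trajectories settle onto matching attractors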

  14. The weak force and SETH: The search for Extra-Terrestrial Homochirality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDermott, A.J.

    We propose that a search for extra-terrestrial life can be approached as a Search for Extra-Terrestrial Homochirality (SETH). Homochirality is probably a pre-condition for life, so a chiral influence may be required to get life started. We explain how the weak force mediated by the Z0 boson gives rise to a small parity-violating energy difference (PVED) between enantiomers, and discuss how the resulting small excess of the more stable enantiomer may be amplified to homochirality. Titan and comets are good places to test for emerging pre-biotic homochirality, while on Mars there may be traces of homochirality as a relic of extinct life. Our calculations of the PVED show that the natural L-amino acids are indeed more stable than their enantiomers, as are several key D-sugars and right-hand helical DNA. Thiosubstituted DNA analogues show particularly large PVEDs. L-quartz is also more stable than D-quartz, and we believe that further crystal counts should be carried out to establish whether reported excesses of L-quartz are real. Finding extra-terrestrial molecules of the same hand as on Earth would lend support to the universal chiral influence of the weak force. We describe a novel miniaturized space polarimeter, called the SETH Cigar, which we hope to use to detect optical rotation on other planets. Moving parts are avoided by replacing the normal rotating polarizer with multiple fixed polarizers at different angles, as in the eye of the bee. Even if we do not find the same hand as on Earth, finding extra-terrestrial optical rotation would be of enormous importance as it would still be the homochiral signature of life. © 1996 American Institute of Physics.

  15. The weak force and SETH: The search for Extra-Terrestrial Homochirality

    NASA Astrophysics Data System (ADS)

    MacDermott, Alexandra J.

    1996-07-01

    We propose that a search for extra-terrestrial life can be approached as a Search for Extra-Terrestrial Homochirality-SETH. Homochirality is probably a pre-condition for life, so a chiral influence may be required to get life started. We explain how the weak force mediated by the Z0 boson gives rise to a small parity-violating energy difference (PVED) between enantiomers, and discuss how the resulting small excess of the more stable enantiomer may be amplified to homochirality. Titan and comets are good places to test for emerging pre-biotic homochirality, while on Mars there may be traces of homochirality as a relic of extinct life. Our calculations of the PVED show that the natural L-amino acids are indeed more stable than their enantiomers, as are several key D-sugars and right-hand helical DNA. Thiosubstituted DNA analogues show particularly large PVEDs. L-quartz is also more stable than D-quartz, and we believe that further crystal counts should be carried out to establish whether reported excesses of L quartz are real. Finding extra-terrestrial molecules of the same hand as on Earth would lend support to the universal chiral influence of the weak force. We describe a novel miniaturized space polarimeter, called the SETH Cigar, which we hope to use to detect optical rotation on other planets. Moving parts are avoided by replacing the normal rotating polarizer by multiple fixed polarizers at different angles as in the eye of the bee. Even if we do not find the same hand as on Earth, finding extra-terrestrial optical rotation would be of enormous importance as it would still be the homochiral signature of life.

  16. Single bumps in a 2-population homogenized neuronal network model

    NASA Astrophysics Data System (ADS)

    Kolodina, Karina; Oleynik, Anna; Wyller, John

    2018-05-01

    We investigate existence and stability of single bumps in a homogenized 2-population neural field model, when the firing rate functions are given by the Heaviside function. The model is derived by means of the two-scale convergence technique of Nguetseng in the case of periodic microvariation in the connectivity functions. The connectivity functions are periodically modulated in both the synaptic footprint and in the spatial scale. The bump solutions are constructed by using a pinning function technique for the case where the solutions are independent of the local variable. In the weakly modulated case the generic picture consists of two bumps (one narrow and one broad bump) for each admissible set of threshold values for firing. In addition, a new threshold value regime for existence of bumps is detected. Beyond the weakly modulated regime the number of bumps depends sensitively on the degree of heterogeneity. For the latter case we present a configuration consisting of three coexisting bumps. The linear stability of the bumps is studied by means of the spectral properties of a Fredholm integral operator, block diagonalization of this operator and the Fourier decomposition method. In the weakly modulated regime, one of the bumps is unstable for all relative inhibition times, while the other one is stable for small and moderate values of this parameter. The latter bump becomes unstable as the relative inhibition time exceeds a certain threshold. In the case of the three coexisting bumps detected in the regime of finite degree of heterogeneity, we have at least one stable bump (and maximum two stable bumps) for small and moderate values of the relative inhibition time.

  17. Life stage and species identity affect whether habitat subsidies enhance or simply redistribute consumer biomass.

    PubMed

    Keller, Danielle A; Gittman, Rachel K; Bouchillon, Rachel K; Fodrie, F Joel

    2017-10-01

    Quantifying the response of mobile consumers to changes in habitat availability is essential for determining the degree to which population-level productivity is habitat limited rather than regulated by other, potentially density-independent factors. Over landscape scales, this can be explored by monitoring changes in density and foraging as habitat availability varies. As habitat availability increases, densities may: (1) decrease (unit-area production decreases; weak habitat limitation); (2) remain stable (unit-area production remains stable; habitat limitation); or (3) increase (unit-area production increases; strong habitat limitation). We tested the response of mobile estuarine consumers over 5 months to changes in habitat availability in situ by comparing densities and feeding rates on artificial reefs that were or were not adjacent to neighbouring artificial reefs or nearby natural reefs. Using either constructed or natural reefs to manipulate habitat availability, we documented threefold density decreases among juvenile stone crabs as habitat increased (i.e. weak habitat limitation). However, for adult stone crabs, density remained stable across treatments, demonstrating that habitat limitation presents a bottleneck in this species' later life history. Oyster toadfish densities also did not change with increasing habitat availability (i.e. habitat limitation), but densities of other cryptic fishes decreased as habitat availability increased (i.e. weak limitation). Feeding and abundance data suggested that some mobile fishes experience habitat limitation or, potentially in one case, strong limitation across our habitat manipulations. These findings of significant, community-level habitat limitation provide insight into how global declines in structurally complex estuarine habitats may have reduced the fishery production of coastal ecosystems. © 2017 The Authors. Journal of Animal Ecology © 2017 British Ecological Society.

  18. Transfer and distortion of atmospheric information in the satellite temperature retrieval problem

    NASA Technical Reports Server (NTRS)

    Thompson, O. E.

    1981-01-01

    A systematic approach to investigating the transfer of basic ambient temperature information, and its distortion by satellite systems and subsequent analysis algorithms, is discussed. The retrieval analysis cycle is derived, the variance spectrum of information is examined as it takes different forms in that process, and the quality and quantity of information existing at each step is compared with the initial ambient temperature information. Temperature retrieval algorithms can smooth, add, or further distort information, depending on how stable the algorithm is and how heavily it is influenced by a priori data.

  19. Forecasting Nonlinear Chaotic Time Series with Function Expression Method Based on an Improved Genetic-Simulated Annealing Algorithm

    PubMed Central

    Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng

    2015-01-01

    The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of the time series. In order to deal with the weakness associated with the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance the performance of optimization; besides, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision with certain noise is also satisfactory. It can be concluded that the IGSA algorithm is energy-efficient and superior. PMID:26000011
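
    One common way to embed the simulated annealing operation into a genetic loop, sketched below under assumed settings, is to accept a mutated child via the Metropolis criterion while the temperature is annealed across generations; this illustrates the general hybridization, not the authors' exact IGSA operators.

        import numpy as np

        def sa_accept(parent_cost, child_cost, temperature,
                      rng=np.random.default_rng()):
            # Metropolis criterion (minimization): always keep improvements,
            # keep a worse child with probability exp(-increase / T)
            if child_cost <= parent_cost:
                return True
            return rng.random() < np.exp(-(child_cost - parent_cost) / temperature)

        # inside the generation loop: anneal T, then gate each replacement
        T, decay = 1.0, 0.95
        for generation in range(100):
            # ... selection, crossover, mutation produce parent/child costs ...
            keep_child = sa_accept(parent_cost=1.0, child_cost=1.2, temperature=T)
            T *= decay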

  20. Forecasting nonlinear chaotic time series with function expression method based on an improved genetic-simulated annealing algorithm.

    PubMed

    Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng

    2015-01-01

    The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of the time series. In order to deal with the weakness associated with the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance the performance of optimization; besides, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision with certain noise is also satisfactory. It can be concluded that the IGSA algorithm is energy-efficient and superior.

  1. Shading correction assisted iterative cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye

    2017-11-01

    Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and cause deterioration of the piecewise-constant property assumed in reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method, referred to as shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts, while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and is updated in the iterative process whenever necessary. Once the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan 600 phantom and two pelvis patients. Compared with iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and improves the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Differing from existing algorithms, this algorithm incorporates a shading correction scheme into low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for applications of low-dose CBCT imaging in the clinic.
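
    The modified regularization can be sketched as one descent step on ||A x - b||^2 / 2 + lam * TV(x + c), where c is the shading-compensation image. Note that the sketch below uses a smoothed TV penalty and plain gradient descent instead of the paper's FISTA formulation, and A/At are hypothetical projector and backprojector callables.

        import numpy as np

        def smoothed_tv_grad(u, eps=1e-3):
            # gradient of sum(sqrt(|grad u|^2 + eps^2)), up to boundary terms
            gx = np.diff(u, axis=0, append=u[-1:, :])
            gy = np.diff(u, axis=1, append=u[:, -1:])
            mag = np.sqrt(gx**2 + gy**2 + eps**2)
            div = (np.diff(gx / mag, axis=0, prepend=np.zeros((1, u.shape[1]))) +
                   np.diff(gy / mag, axis=1, prepend=np.zeros((u.shape[0], 1))))
            return -div

        def shading_corrected_step(x, c, A, At, b, step=1e-2, lam=0.05):
            # one descent step: the compensation image c enters only the
            # regularizer, leaving the data fidelity term intact
            return x - step * (At(A(x) - b) + lam * smoothed_tv_grad(x + c))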

  2. A 2D eye gaze estimation system with low-resolution webcam images

    NASA Astrophysics Data System (ADS)

    Ince, Ibrahim Furkan; Kim, Jin Woo

    2011-12-01

    In this article, a low-cost system for 2D eye gaze estimation with low-resolution webcam images is presented. Two algorithms are proposed for this purpose: one for eyeball detection with a stable approximate pupil center, and one for detecting the direction of eye movements. The eyeball is detected using the deformable angular integral search by minimum intensity (DAISMI) algorithm. The deformable template-based 2D gaze estimation (DTBGE) algorithm is employed as a noise filter for deciding on stable movement decisions. While DTBGE employs binary images, DAISMI employs gray-scale images. Right- and left-eye estimates are evaluated separately. DAISMI finds the stable approximate pupil-center location by calculating the mass center of the eyeball border vertices, which is used for the initial deformable template alignment. DTBGE starts from this initial alignment and updates the template alignment with the resulting eye movements and eyeball size frame by frame. The horizontal and vertical deviation of eye movements, normalized by eyeball size, is treated as directly proportional to the deviation of cursor movements for a given screen size and resolution. The core advantage of the system is that it does not employ the real pupil center as a reference point for gaze estimation, which makes it more robust against corneal reflection. Visual angle accuracy is used for the evaluation and benchmarking of the system. The effectiveness of the proposed system is presented and experimental results are shown.
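
    The proportional gaze-to-cursor mapping described above amounts to scaling the normalized pupil-center deviation by the screen dimensions; a minimal sketch, with a hypothetical calibration gain, is:

        def gaze_to_cursor(dx_eye, dy_eye, eye_w, eye_h,
                           screen_w=1920, screen_h=1080, gain=1.0):
            # deviation of the pupil center, normalized by eyeball size,
            # scales linearly to cursor deviation from screen center
            x = screen_w / 2 + gain * (dx_eye / eye_w) * screen_w
            y = screen_h / 2 + gain * (dy_eye / eye_h) * screen_h
            # clamp to screen bounds
            return (min(max(x, 0), screen_w - 1), min(max(y, 0), screen_h - 1))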

  3. Lunar Paleomagnetism: The Case for an Ancient Lunar Dynamo. (Invited)

    NASA Astrophysics Data System (ADS)

    Fuller, M.; Weiss, B. P.; Gattacceca, J.

    2010-12-01

    The failure of lunar samples to satisfy minimal criteria for classical paleointensity determinations has led to skepticism of the case for an ancient lunar dynamo. There are, however, practical and fundamental reasons why such experiments are doomed to failure in most lunar samples. In such methods, NRMs in successive blocking temperature ranges are thermally demagnetized and replaced with partial thermoremanent magnetizations (pTRMs) given in a known field (Thellier, 1938). A practical difficulty is that it is hard to heat lunar samples without altering them. A fundamental problem is that whereas pottery, for which these methods were designed, carries a primary thermoremanent magnetization (TRM) from its initial cooling and little secondary magnetization, lunar samples are likely to carry weak-field isothermal remanent magnetization (IRM) and shock remanent magnetization (SRM) as secondary overprints. Thermal demagnetization does not isolate weak-field IRM well. For example, on thermal demagnetization of the Apollo sample 14053.48 carrying a 2000 nT TRM with a superposed 5 mT IRM, the IRM persists to the Curie point, obscuring the TRM. Fortunately, weak-field IRM is removed by AF demagnetization to fields comparable to that in which it is acquired. Furthermore, Gattacceca et al. (2008) demonstrated that experimentally generated SRM from several GPa, like weak-field IRM, is demagnetized by AF fields of between ~20 and 30 mT, leaving the pre-shock remanent magnetization essentially untouched. This agrees with our theoretical understanding of SRM, which at pressures below approximately the Hugoniot elastic limit (several GPa for most rocks) should essentially be a pressure remanent magnetization (e.g., Dunlop and Ozdemir, 1997). Unlike IRM, SRM in the range of a few GPa may carry recoverable lunar field records (Gattacceca et al., 2008). NRM in samples shocked to less than ~5 GPa, which is stable against AF demagnetization beyond the fields necessary to eliminate weak SRM (~20-30 mT), requires some other explanation. Such NRM, carried by the small amount of single-domain iron and iron-nickel present in the samples, can be very stable. The troctolite 76535 is an example of such a sample. It cooled over thousands of years or longer, which is far too long for any possible transient fields associated with impacts, and it must carry a TRM-like NRM. Note that despite predictions that even km-sized craters may generate fields up to 0.1 T at one crater radius, no unambiguous evidence for paleomagnetic recording of such fields over individual craters has materialized. There are numerous other candidate samples having experienced <~5 GPa carrying stable NRM, which have been analyzed or are presently being investigated. The only other obvious source of a field to explain stable TRM in lunar rocks is that of surface lunar fields, but over the mare these are too weak to account for the NRM of mare basalts. In summary, recent advances in our understanding of SRM and reanalysis of lunar paleomagnetism lead us to conclude that lunar paleomagnetism is most easily explained by a lunar dynamo.

  4. Energy stable and high-order-accurate finite difference methods on staggered grids

    NASA Astrophysics Data System (ADS)

    O'Reilly, Ossian; Lundquist, Tomas; Dunham, Eric M.; Nordström, Jan

    2017-10-01

    For wave propagation over distances of many wavelengths, high-order finite difference methods on staggered grids are widely used due to their excellent dispersion properties. However, the enforcement of boundary conditions in a stable manner and treatment of interface problems with discontinuous coefficients usually pose many challenges. In this work, we construct a provably stable and high-order-accurate finite difference method on staggered grids that can be applied to a broad class of boundary and interface problems. The staggered grid difference operators are in summation-by-parts form and when combined with a weak enforcement of the boundary conditions, lead to an energy stable method on multiblock grids. The general applicability of the method is demonstrated by simulating an explosive acoustic source, generating waves reflecting against a free surface and material discontinuity.
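
    The summation-by-parts structure that underlies the energy stability argument can be checked numerically. The sketch below verifies the discrete integration-by-parts identity for a standard second-order collocated SBP operator (the paper constructs staggered-grid analogues, which are more involved).

        import numpy as np

        n, h = 11, 0.1
        H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])    # diagonal norm
        Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))        # central interior stencil
        Q[0, 0], Q[-1, -1] = -0.5, 0.5                      # boundary closure
        D = np.linalg.solve(H, Q)                           # D = H^{-1} Q

        # Q + Q^T = diag(-1, 0, ..., 0, 1), so the discrete identity
        # u^T H (D v) + (D u)^T H v = u_N v_N - u_0 v_0 holds exactly,
        # which is what enables discrete energy estimates
        u, v = np.random.rand(n), np.random.rand(n)
        lhs = u @ H @ (D @ v) + (D @ u) @ H @ v
        assert np.isclose(lhs, u[-1] * v[-1] - u[0] * v[0])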

  5. Exploring the spectrum of planar AdS4 /CFT3 at finite coupling

    NASA Astrophysics Data System (ADS)

    Bombardelli, Diego; Cavaglià, Andrea; Conti, Riccardo; Tateo, Roberto

    2018-04-01

    The Quantum Spectral Curve (QSC) equations for planar N=6 super-conformal Chern-Simons (SCS) are solved numerically at finite values of the coupling constant for states in the sl(2|1) sector. New weak coupling results for conformal dimensions of operators outside the sl(2)-like sector are obtained by adapting a recently proposed algorithm for the QSC perturbative solution. Besides being interesting in their own right, these perturbative results are necessary initial inputs for the numerical algorithm to converge on the correct solution. The non-perturbative numerical outcomes nicely interpolate between the weak coupling and the known semiclassical expansions, and novel strong coupling exact results are deduced from the numerics. Finally, the existence of contour crossing singularities in the TBA equations for the operator 20 is ruled out by our analysis. The results of this paper are an important test of the QSC formalism for this model, open the way to new quantitative studies and provide further evidence in favour of the conjectured weak/strong coupling duality between N=6 SCS and type IIA superstring theory on AdS4 × CP3. Attached to the arXiv submission, a Mathematica implementation of the numerical method and ancillary files containing the numerical results are provided.

  6. Evaluation of the strength of cement-treated aggregate for pavement bases.

    DOT National Transportation Integrated Search

    2006-01-01

    Cement-treated aggregate (CTA) is commonly used to provide a stable base for pavements that are placed over weak soil subgrades. Because CTA reduces the thickness of the aggregate required to provide a durable base by approximately one-half, using it...

  7. Concentrated dispersions of equilibrium protein nanoclusters that reversibly dissociate into active monomers

    NASA Astrophysics Data System (ADS)

    Truskett, Thomas M.; Johnston, Keith; Maynard, Jennifer; Borwankar, Ameya; Miller, Maria; Wilson, Brian; Dinin, Aileen; Khan, Tarik; Kaczorowski, Kevin

    2012-02-01

    Stabilizing concentrated protein solutions is of wide interest in drug delivery. However, a major challenge is how to reliably formulate concentrated, low viscosity (i.e., syringeable) solutions of biologically active proteins. Unfortunately, proteins typically undergo irreversible aggregation at intermediate concentrations of 100-200 mg/ml. In this talk, I describe how proteins can effectively avoid these intermediate concentrations by reversibly assembling into nanoclusters. Nanocluster assembly is achieved by balancing short-ranged, cosolute-induced attractions with weak, longer-ranged electrostatic repulsions near the isoelectric point. Theory predicts that native proteins are stabilized by a self-crowding mechanism within the concentrated environment of the nanoclusters, while weak cluster-cluster interactions can result in colloidally-stable dispersions with moderate viscosities. I present experimental results where this strategy is used to create concentrated antibody dispersions (up to 260 mg/ml) comprising nanoclusters of proteins [monoclonal antibody 1B7, polyclonal sheep Immunoglobulin G and bovine serum albumin], which, upon dilution in vitro or administration in vivo, are conformationally stable and retain activity.

  8. Subsurface Characterization using Geophysical Seismic Refraction Survey for Slope Stabilization Design with Soil Nailing

    NASA Astrophysics Data System (ADS)

    Ashraf Mohamad Ismail, Mohd; Ng, Soon Min; Hazreek Zainal Abidin, Mohd; Madun, Aziman

    2018-04-01

    The application of geophysical seismic refraction to slope stabilization design using the soil nailing method was demonstrated in this study. The potential weak layer of the study area is first identified prior to determining the appropriate length and location of the soil nails. A total of 7 seismic refraction survey lines were conducted at the study area with standard procedures. The refraction data were then analyzed using the Pickwin and Plotrefa computer software package to obtain the seismic velocity profile distributions. These results were correlated with complementary borehole data to interpret the subsurface profile of the study area. Layers 1 to 3 were identified as the potential weak zone susceptible to slope failure. Hence, soil nails should be installed to transfer the tensile load from the less stable layer 3 to the more stable layer 4. The soil-nail interaction provides a reinforcing action to the soil mass, thereby increasing the stability of the slope.

  9. Evaluating the Energetic Driving Force for Cocrystal Formation.

    PubMed

    Taylor, Christopher R; Day, Graeme M

    2018-02-07

    We present a periodic density functional theory study of the stability of 350 organic cocrystals relative to their pure single-component structures, the largest study of cocrystals yet performed with high-level computational methods. Our calculations demonstrate that cocrystals are on average 8 kJ mol⁻¹ more stable than their constituent single-component structures and are very rarely (<5% of cases) less stable; cocrystallization is almost always a thermodynamically favorable process. We consider the variation in stability between different categories of systems (hydrogen-bonded, halogen-bonded, and weakly bound cocrystals), finding that, contrary to chemical intuition, the presence of hydrogen or halogen bond interactions is not necessarily a good predictor of stability. Finally, we investigate the correlation of the relative stability with simple chemical descriptors: changes in packing efficiency and hydrogen bond strength. We find some broad qualitative agreement with chemical intuition (more densely packed cocrystals with stronger hydrogen bonding tend to be more stable), but the relationship is weak, suggesting that such simple descriptors do not capture the complex balance of interactions driving cocrystallization. Our conclusions suggest that while cocrystallization is often a thermodynamically favorable process, it remains difficult to formulate general rules to guide synthesis, highlighting the continued importance of high-level computation in predicting and rationalizing such systems.

  10. Stable Atlas-based Mapped Prior (STAMP) machine-learning segmentation for multicenter large-scale MRI data.

    PubMed

    Kim, Eun Young; Magnotta, Vincent A; Liu, Dawei; Johnson, Hans J

    2014-09-01

    Machine learning (ML)-based segmentation methods are a common technique in the medical image processing field. In spite of numerous research groups that have investigated ML-based segmentation frameworks, there remain unanswered questions about the performance variability introduced by two key components: the ML algorithm and the intensity normalization. This investigation reveals that the choice of those elements plays a major part in determining segmentation accuracy and generalizability. The approach used in this study evaluates the relative benefits of the two elements within a subcortical MRI segmentation framework. Experiments were conducted to contrast eight machine-learning algorithm configurations and 11 normalization strategies for our brain MR segmentation framework. For the intensity normalization, a Stable Atlas-based Mapped Prior (STAMP) was utilized to take better account of contrast along boundaries of structures. Comparing the eight machine learning algorithms on down-sampled segmentation MR data showed that a significant improvement was obtained using ensemble-based ML algorithms (e.g., random forest) or ANN algorithms. Further investigation of these two algorithms revealed that the random forest results provided exceptionally good agreement with manual delineations by experts. Additional experiments showed that STAMP-based intensity normalization also improved the robustness of segmentation for multicenter data sets. The constructed framework obtained good multicenter reliability and was successfully applied to a large multicenter MR data set (n>3000). Less than 10% of automated segmentations were recommended for minimal expert intervention. These results demonstrate the feasibility of using ML-based segmentation tools for processing large amounts of multicenter MR images. We demonstrated dramatically different result profiles in segmentation accuracy according to the choice of ML algorithm and intensity normalization. Copyright © 2014 Elsevier Inc. All rights reserved.
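
    As a toy illustration of the two components the study varies, the following sketch (assuming scikit-learn, with synthetic features standing in for MR intensities; the STAMP prior itself is not reproduced) contrasts a random forest and an ANN with and without a simple z-score normalization:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      scales = rng.uniform(1.0, 50.0, size=10)
      X = rng.normal(size=(500, 10)) * scales        # features on very different scales
      y = (X[:, 0] / scales[0] + X[:, 1] / scales[1] > 0).astype(int)
      Xn = (X - X.mean(axis=0)) / X.std(axis=0)      # z-score stand-in for normalization

      # tree ensembles are scale-invariant; the ANN benefits from normalization
      for name, clf in [("random forest", RandomForestClassifier(n_estimators=100, random_state=0)),
                        ("ANN", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))]:
          for label, data in [("raw", X), ("normalized", Xn)]:
              score = cross_val_score(clf, data, y, cv=5).mean()
              print(f"{name:13s} {label:10s} accuracy ~ {score:.3f}")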

  11. Does the Location of Bruch's Membrane Opening Change Over Time? Longitudinal Analysis Using San Diego Automated Layer Segmentation Algorithm (SALSA).

    PubMed

    Belghith, Akram; Bowd, Christopher; Medeiros, Felipe A; Hammel, Naama; Yang, Zhiyong; Weinreb, Robert N; Zangwill, Linda M

    2016-02-01

    We determined whether the Bruch's membrane opening (BMO) location changes over time in healthy eyes and eyes with progressing glaucoma, and validated an automated segmentation algorithm for identifying the BMO in Cirrus high-definition optical coherence tomography (HD-OCT) images. We followed 95 eyes (35 progressing glaucoma and 60 healthy) for an average of 3.7 ± 1.1 years. A stable group of 50 eyes had repeated tests over a short period. In each B-scan of the stable group, the BMO points were delineated manually and automatically to assess the reproducibility of both segmentation methods. Moreover, the BMO location variation over time was assessed longitudinally on the aligned images in 3D space point by point in x, y, and z directions. Mean visual field mean deviation at baseline of the progressing glaucoma group was -7.7 dB. Mixed-effects models revealed small nonsignificant changes in BMO location over time for all directions in healthy eyes (the smallest P value was 0.39) and in the progressing glaucoma eyes (the smallest P value was 0.30). In the stable group, the overall intervisit-intraclass correlation coefficient (ICC) and coefficient of variation (CV) were 98.4% and 2.1%, respectively, for the manual segmentation and 98.1% and 1.9%, respectively, for the automated algorithm. Bruch's membrane opening location was stable in normal and progressing glaucoma eyes with follow-up between 3 and 4 years, indicating that it can be used as a reference point in monitoring glaucoma progression. The BMO location estimation with Cirrus HD-OCT using manual and automated segmentation showed excellent reproducibility.

  12. Automated segmentation of comet assay images using Gaussian filtering and fuzzy clustering.

    PubMed

    Sansone, Mario; Zeni, Olga; Esposito, Giovanni

    2012-05-01

    Comet assay is one of the most popular tests for the detection of DNA damage at the single cell level. In this study, an algorithm for comet assay analysis has been proposed, aiming to minimize user interaction and provide reproducible measurements. The algorithm comprises two steps: (a) comet identification via Gaussian pre-filtering and morphological operators; (b) comet segmentation via fuzzy clustering. The algorithm has been evaluated using comet images from human leukocytes treated with a commonly used DNA damaging agent. A comparison of the proposed approach with a commercial system has been performed. Results show that fuzzy segmentation can increase overall sensitivity, giving benefits in bio-monitoring studies where weak genotoxic effects are expected.
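
    The two-step pipeline can be sketched in a few lines of Python. The fuzzy c-means below is a plain textbook implementation operating on pixel intensity only, and the synthetic image is a hypothetical stand-in for real comet assay data:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def fuzzy_cmeans_1d(x, c=3, m=2.0, iters=100):
          """Plain fuzzy c-means on a 1-D feature (pixel intensity)."""
          rng = np.random.default_rng(0)
          centers = rng.choice(x, size=c, replace=False)
          for _ in range(iters):
              d = np.abs(x[:, None] - centers[None, :]) + 1e-12    # (N, c) distances
              u = 1.0 / (d ** (2.0 / (m - 1.0)))
              u /= u.sum(axis=1, keepdims=True)                    # memberships in [0, 1]
              centers = (u ** m * x[:, None]).sum(0) / (u ** m).sum(0)
          return u.argmax(axis=1), centers

      # synthetic "comet": bright head on a dim tail over background noise
      img = np.random.default_rng(1).normal(10, 2, (64, 64))
      img[20:44, 10:30] += 15.0        # tail
      img[24:40, 26:42] += 40.0        # head
      smoothed = gaussian_filter(img, sigma=2.0)            # step (a): Gaussian pre-filtering
      labels, centers = fuzzy_cmeans_1d(smoothed.ravel())   # step (b): fuzzy segmentation
      segmentation = labels.reshape(img.shape)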

  13. Wind profiling based on the optical beam intensity statistics in a turbulent atmosphere.

    PubMed

    Banakh, Victor A; Marakasov, Dimitrii A

    2007-10-01

    Reconstruction of the wind profile from the statistics of intensity fluctuations of an optical beam propagating in a turbulent atmosphere is considered. The equations for the spatiotemporal correlation function and the spectrum of weak intensity fluctuations of a Gaussian beam are obtained. The algorithms of wind profile retrieval from the spatiotemporal intensity spectrum are described, and the results of end-to-end computer experiments on wind profiling based on the developed algorithms are presented. It is shown that the developed algorithms allow retrieval of the wind profile from the turbulent optical beam intensity fluctuations with acceptable accuracy in many practically feasible atmospheric laser measurement setups.

  14. On recent advances and future research directions for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Soliman, M. O.; Manhardt, P. D.

    1986-01-01

    This paper highlights some recent accomplishments regarding CFD numerical algorithm constructions for generation of discrete approximate solutions to classes of Reynolds-averaged Navier-Stokes equations. Following an overview of turbulent closure modeling, and development of appropriate conservation law systems, a Taylor weak-statement semi-discrete approximate solution algorithm is developed. Various forms for completion to the final linear algebra statement are cited, as are a range of candidate numerical linear algebra solution procedures. This development sequence emphasizes the key building blocks of a CFD RNS algorithm, including solution trial and test spaces, integration procedure and added numerical stability mechanisms. A range of numerical results are discussed focusing on key topics guiding future research directions.

  15. [Evoked Potential Blind Extraction Based on Fractional Lower Order Spatial Time-Frequency Matrix].

    PubMed

    Long, Junbo; Wang, Haibin; Zha, Daifeng

    2015-04-01

    The impulsive electroencephalogram (EEG) noise in evoked potential (EP) signals is very strong, usually with heavy-tailed, infinite-variance characteristics, as seen under acceleration impact, hypoxia, and other special tests. Such noise can be described by an α-stable distribution model. In this paper, the Wigner-Ville distribution (WVD) and pseudo Wigner-Ville distribution (PWVD) time-frequency distributions are improved using fractional lower order moments, yielding the fractional lower order WVD (FLO-WVD) and fractional lower order PWVD (FLO-PWVD) time-frequency distributions, which are suited to α-stable distribution processes. We also propose the fractional lower order spatial time-frequency distribution matrix (FLO-STFM) concept. Combining this with time-frequency underdetermined blind source separation (TF-UBSS), we propose a new fractional lower order spatial time-frequency underdetermined blind source separation (FLO-TF-UBSS) method that can work in an α-stable distribution environment. We used the FLO-TF-UBSS algorithm to extract EPs. Simulations showed that the proposed method could effectively extract EPs in EEG noise, and that the EPs and EEG signals separated by FLO-TF-UBSS were almost identical to the original signals, whereas blind separation based on TF-UBSS showed a certain deviation. The correlation coefficient of the FLO-TF-UBSS algorithm was higher than that of the TF-UBSS algorithm when the generalized signal-to-noise ratio (GSNR) varied from 10 dB to 30 dB and α varied from 1.06 to 1.94, remaining approximately equal to 1. Hence, the proposed FLO-TF-UBSS method may be better than the second-order TF-UBSS algorithm for extracting EP signals in an EEG noise environment.
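
    The motivation for fractional lower order moments can be shown numerically: for α-stable noise with α < 2, the second-order moment estimate is erratic across realizations, while a fractional moment of order p < α is comparatively stable. A sketch assuming scipy (this illustrates the moment behavior only, not the FLO-WVD itself):

      import numpy as np
      from scipy.stats import levy_stable

      # alpha-stable noise with alpha < 2 has infinite variance, so classical
      # second-order statistics are unreliable; fractional lower order moments
      # of order p < alpha remain well behaved. 50 independent realizations:
      alpha, p = 1.5, 1.0
      x = levy_stable.rvs(alpha, 0.0, size=(50, 2000), random_state=1)

      second = (x ** 2).mean(axis=1)           # classical second-order moment
      flom = (np.abs(x) ** p).mean(axis=1)     # fractional lower order moment

      print("spread of 2nd-order estimates :", second.std() / second.mean())
      print("spread of FLOM (p=1) estimates:", flom.std() / flom.mean())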

  16. Reference gene selection for quantitative real-time PCR in Solanum lycopersicum L. inoculated with the mycorrhizal fungus Rhizophagus irregularis.

    PubMed

    Fuentes, Alejandra; Ortiz, Javier; Saavedra, Nicolás; Salazar, Luis A; Meneses, Claudio; Arriagada, Cesar

    2016-04-01

    The gene expression stability of candidate reference genes in the roots and leaves of Solanum lycopersicum inoculated with arbuscular mycorrhizal fungi was investigated. Eight candidate reference genes including elongation factor 1 α (EF1), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), phosphoglycerate kinase (PGK), protein phosphatase 2A (PP2Acs), ribosomal protein L2 (RPL2), β-tubulin (TUB), ubiquitin (UBI) and actin (ACT) were selected, and their expression stability was assessed to determine the most stable internal reference for quantitative PCR normalization in S. lycopersicum inoculated with the arbuscular mycorrhizal fungus Rhizophagus irregularis. The stability of each gene was analysed in leaves and roots, both together and separately, using the geNorm and NormFinder algorithms. Differences were detected between leaves and roots, varying among the best-ranked genes depending on the algorithm used and the tissue analysed. PGK, TUB and EF1 genes showed higher stability in roots, while EF1 and UBI had higher stability in leaves. Statistical algorithms indicated that the GAPDH gene was the least stable under the experimental conditions assayed. We then analysed the expression levels of the LePT4 gene, a phosphate transporter whose expression is induced by fungal colonization in host plant roots. No differences were observed when the most stable genes were used as reference genes. However, when GAPDH was used as the reference gene, we observed an overestimation of LePT4 expression. In summary, our results revealed that candidate reference genes present variable stability in S. lycopersicum arbuscular mycorrhizal symbiosis depending on the algorithm and tissue analysed. Thus, reference gene selection is an important issue for obtaining reliable results in gene expression quantification. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
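
    For reference, the geNorm stability measure M can be computed in a few lines: for each candidate gene it is the average, over all other candidates, of the standard deviation of the pairwise log2 expression ratios, with lower M indicating a more stable reference gene. A minimal numpy sketch on hypothetical data:

      import numpy as np

      def genorm_m(expr):
          """expr: (samples, genes) matrix of relative expression values.
          Returns the geNorm stability measure M per gene: the mean, over all
          other genes, of the std-dev of pairwise log2 expression ratios."""
          logs = np.log2(expr)
          n = logs.shape[1]
          m = np.empty(n)
          for j in range(n):
              ratios = logs[:, [j]] - logs              # log2(g_j / g_k) for every k
              sds = ratios.std(axis=0, ddof=1)
              m[j] = np.delete(sds, j).mean()           # exclude the gene itself
          return m

      # toy data: gene 0 is stable, gene 2 is noisy (a GAPDH-like worst case)
      rng = np.random.default_rng(0)
      expr = np.exp(rng.normal(0.0, [0.05, 0.08, 0.6], size=(20, 3)))
      print(genorm_m(expr))   # gene 2 should receive the largest M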

  17. Evolution with Reinforcement Learning in Negotiation

    PubMed Central

    Zou, Yi; Zhan, Wenjie; Shao, Yuan

    2014-01-01

    Adaptive behavior depends less on the details of the negotiation process and makes more robust predictions in the long term as compared to in the short term. However, the extant literature on population dynamics for behavior adjustment has only examined the current situation. To offset this limitation, we propose a synergy of evolutionary algorithm and reinforcement learning to investigate long-term collective performance and strategy evolution. The model adopts reinforcement learning with a tradeoff between historical and current information to make decisions when the strategies of agents evolve through repeated interactions. The results demonstrate that the strategies in populations converge to stable states, and the agents gradually form steady negotiation habits. Agents that adopt reinforcement learning perform better in payoff, fairness, and stability than their counterparts using a classic evolutionary algorithm. PMID:25048108

  18. Evolution with reinforcement learning in negotiation.

    PubMed

    Zou, Yi; Zhan, Wenjie; Shao, Yuan

    2014-01-01

    Adaptive behavior depends less on the details of the negotiation process and makes more robust predictions in the long term as compared to in the short term. However, the extant literature on population dynamics for behavior adjustment has only examined the current situation. To offset this limitation, we propose a synergy of evolutionary algorithm and reinforcement learning to investigate long-term collective performance and strategy evolution. The model adopts reinforcement learning with a tradeoff between historical and current information to make decisions when the strategies of agents evolve through repeated interactions. The results demonstrate that the strategies in populations converge to stable states, and the agents gradually form steady negotiation habits. Agents that adopt reinforcement learning perform better in payoff, fairness, and stability than their counterparts using a classic evolutionary algorithm.
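
    A minimal sketch of the history/current tradeoff in such a learner, using a recency-weighted value update inside a toy Nash demand game (the payoffs and parameters are hypothetical, not the paper's model):

      import numpy as np

      rng = np.random.default_rng(0)
      actions = [0.3, 0.5, 0.7]          # hypothetical demand levels in a split game
      q = np.zeros(len(actions))         # learned action values
      w, eps, rounds = 0.8, 0.1, 5000    # w weights history; (1 - w) weights the current payoff

      for _ in range(rounds):
          a = rng.integers(len(actions)) if rng.random() < eps else int(q.argmax())
          b = rng.integers(len(actions))                  # opponent picks a demand too
          payoff = actions[a] if actions[a] + actions[b] <= 1.0 else 0.0
          q[a] = w * q[a] + (1.0 - w) * payoff            # historical/current tradeoff

      print("learned preference over demands:", q)        # the mid-range demand tends to win out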

  19. Non-Gaussian distributions of melodic intervals in music: The Lévy-stable approximation

    NASA Astrophysics Data System (ADS)

    Niklasson, Gunnar A.; Niklasson, Maria H.

    2015-11-01

    The analysis of structural patterns in music is of interest in order to increase our fundamental understanding of music, as well as for devising algorithms for computer-generated music, so called algorithmic composition. Musical melodies can be analyzed in terms of a “music walk” between the pitches of successive tones in a notescript, in analogy with the “random walk” model commonly used in physics. We find that the distribution of melodic intervals between tones can be approximated with a Lévy-stable distribution. Since music also exhibits self-affine scaling, we propose that the “music walk” should be modelled as a Lévy motion. We find that the Lévy motion model captures basic structural patterns in classical as well as in folk music.
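
    A hedged sketch of the analysis idea, with a synthetic heavy-tailed "music walk" standing in for intervals extracted from a real notescript: large melodic leaps occur far more often than a Gaussian fit of the intervals would predict, which is the signature of a Lévy-stable law.

      import numpy as np
      from scipy import stats

      # Melodic intervals are the differences between successive pitches of the
      # "music walk"; here an alpha-stable walk (alpha = 1.7, hypothetical)
      # stands in for pitch data from a real score.
      pitches = np.cumsum(stats.levy_stable.rvs(1.7, 0.0, size=5000, random_state=1))
      intervals = np.diff(pitches)

      # A Gaussian fit badly underpredicts large melodic leaps;
      # a Levy-stable law with alpha < 2 does not.
      mu, sigma = intervals.mean(), intervals.std()
      observed = np.mean(np.abs(intervals - mu) > 4 * sigma)
      print("observed P(|interval| > 4 sigma):", observed)
      print("Gaussian prediction             :", 2 * stats.norm.sf(4.0))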

  20. The formation of stable pH gradients with weak monovalent buffers for isoelectric focusing in free solution

    NASA Technical Reports Server (NTRS)

    Mosher, Richard A.; Thormann, Wolfgang; Graham, Aly; Bier, Milan

    1985-01-01

    Two methods which utilize simple buffers for the generation of stable pH gradients (useful for preparative isoelectric focusing) are compared and contrasted. The first employs preformed gradients comprised of two simple buffers in density-stabilized free solution. The second method utilizes neutral membranes to isolate electrolyte reservoirs of constant composition from the separation column. It is shown by computer simulation that steady-state gradients can be formed at any pH range with any number of components in such a system.

  1. Nonlinear Wavelength Selection in Surface Faceting under Electromigration

    NASA Astrophysics Data System (ADS)

    Barakat, Fatima; Martens, Kirsten; Pierre-Louis, Olivier

    2012-08-01

    We report on the control of the faceting of crystal surfaces by means of surface electromigration. When electromigration reinforces the faceting instability, we find perpetual coarsening with a wavelength increasing as t^(1/2). For strongly stabilizing electromigration, the surface is stable. For weakly stabilizing electromigration, a cellular pattern is obtained, with a nonlinearly selected wavelength. The selection mechanism is not caused by an instability of steady states, as suggested by previous works in the literature. Instead, the dynamics is found to exhibit coarsening before reaching a continuous family of stable nonequilibrium steady states.

  2. Stochastic simulation of reaction-diffusion systems: A fluctuating-hydrodynamics approach

    NASA Astrophysics Data System (ADS)

    Kim, Changho; Nonaka, Andy; Bell, John B.; Garcia, Alejandro L.; Donev, Aleksandar

    2017-03-01

    We develop numerical methods for stochastic reaction-diffusion systems based on approaches used for fluctuating hydrodynamics (FHD). For hydrodynamic systems, the FHD formulation is formally described by stochastic partial differential equations (SPDEs). In the reaction-diffusion systems we consider, our model becomes similar to the reaction-diffusion master equation (RDME) description when our SPDEs are spatially discretized and reactions are modeled as a source term having Poisson fluctuations. However, unlike the RDME, which becomes prohibitively expensive for an increasing number of molecules, our FHD-based description naturally extends from the regime where fluctuations are strong, i.e., each mesoscopic cell has few (reactive) molecules, to regimes with moderate or weak fluctuations, and ultimately to the deterministic limit. By treating diffusion implicitly, we avoid the severe restriction on time step size that limits all methods based on explicit treatments of diffusion and construct numerical methods that are more efficient than RDME methods, without compromising accuracy. Guided by an analysis of the accuracy of the distribution of steady-state fluctuations for the linearized reaction-diffusion model, we construct several two-stage (predictor-corrector) schemes, where diffusion is treated using a stochastic Crank-Nicolson method, and reactions are handled by the stochastic simulation algorithm of Gillespie or a weakly second-order tau leaping method. We find that an implicit midpoint tau leaping scheme attains second-order weak accuracy in the linearized setting and gives an accurate and stable structure factor for a time step size of an order of magnitude larger than the hopping time scale of diffusing molecules. We study the numerical accuracy of our methods for the Schlögl reaction-diffusion model both in and out of thermodynamic equilibrium. We demonstrate and quantify the importance of thermodynamic fluctuations to the formation of a two-dimensional Turing-like pattern and examine the effect of fluctuations on three-dimensional chemical front propagation. By comparing stochastic simulations to deterministic reaction-diffusion simulations, we show that fluctuations accelerate pattern formation in spatially homogeneous systems and lead to a qualitatively different disordered pattern behind a traveling wave.
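
    The flavor of the Poisson-based reaction treatment can be conveyed by bare explicit tau-leaping for a single birth-death reaction (the paper's stochastic Crank-Nicolson diffusion and implicit-midpoint variant are not reproduced in this sketch; all rates are illustrative):

      import numpy as np

      # Explicit tau-leaping for a single-cell birth-death process
      #   0 -> X at rate k1,   X -> 0 at rate k2 * n,
      # with reaction counts drawn as Poisson variates, as in the
      # source-term treatment described above.
      rng = np.random.default_rng(0)
      k1, k2, dt = 50.0, 0.5, 0.01
      n = 0
      traj = []
      for _ in range(10000):
          births = rng.poisson(k1 * dt)
          deaths = rng.poisson(k2 * n * dt)
          n = max(n + births - deaths, 0)     # guard against negative populations
          traj.append(n)

      print("mean ~", np.mean(traj[2000:]), " vs theory k1/k2 =", k1 / k2)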

  3. Stochastic simulation of reaction-diffusion systems: A fluctuating-hydrodynamics approach

    DOE PAGES

    Kim, Changho; Nonaka, Andy; Bell, John B.; ...

    2017-03-24

    Here, we develop numerical methods for stochastic reaction-diffusion systems based on approaches used for fluctuating hydrodynamics (FHD). For hydrodynamic systems, the FHD formulation is formally described by stochastic partial differential equations (SPDEs). In the reaction-diffusion systems we consider, our model becomes similar to the reaction-diffusion master equation (RDME) description when our SPDEs are spatially discretized and reactions are modeled as a source term having Poisson fluctuations. However, unlike the RDME, which becomes prohibitively expensive for an increasing number of molecules, our FHD-based description naturally extends from the regime where fluctuations are strong, i.e., each mesoscopic cell has few (reactive) molecules, to regimes with moderate or weak fluctuations, and ultimately to the deterministic limit. By treating diffusion implicitly, we avoid the severe restriction on time step size that limits all methods based on explicit treatments of diffusion and construct numerical methods that are more efficient than RDME methods, without compromising accuracy. Guided by an analysis of the accuracy of the distribution of steady-state fluctuations for the linearized reaction-diffusion model, we construct several two-stage (predictor-corrector) schemes, where diffusion is treated using a stochastic Crank-Nicolson method, and reactions are handled by the stochastic simulation algorithm of Gillespie or a weakly second-order tau leaping method. We find that an implicit midpoint tau leaping scheme attains second-order weak accuracy in the linearized setting and gives an accurate and stable structure factor for a time step size of an order of magnitude larger than the hopping time scale of diffusing molecules. We study the numerical accuracy of our methods for the Schlögl reaction-diffusion model both in and out of thermodynamic equilibrium. We demonstrate and quantify the importance of thermodynamic fluctuations to the formation of a two-dimensional Turing-like pattern and examine the effect of fluctuations on three-dimensional chemical front propagation. Furthermore, by comparing stochastic simulations to deterministic reaction-diffusion simulations, we show that fluctuations accelerate pattern formation in spatially homogeneous systems and lead to a qualitatively different disordered pattern behind a traveling wave.

  4. A Novel Control Strategy for Autonomous Operation of Isolated Microgrid with Prioritized Loads

    NASA Astrophysics Data System (ADS)

    Kumar, R. Hari; Ushakumari, S.

    2018-05-01

    Maintenance of power balance between generation and demand is one of the most critical requirements for the stable operation of a power system network. To mitigate the power imbalance during the occurrence of any disturbance in the system, fast-acting algorithms are indispensable. This paper proposes a novel algorithm for load shedding and network reconfiguration in an isolated microgrid with prioritized loads and multiple islands, which will help to quickly restore the system in the event of a fault. The performance of the proposed algorithm is enhanced using a genetic algorithm, and its effectiveness is illustrated with simulation results on the modified Consortium for Electric Reliability Technology Solutions (CERTS) microgrid.

  5. Firefly Algorithm for Structural Search.

    PubMed

    Avendaño-Franco, Guillermo; Romero, Aldo H

    2016-07-12

    The problem of computational structure prediction of materials is approached using the firefly (FF) algorithm. Starting from the chemical composition and optionally using prior knowledge of similar structures, the FF method is able to predict not only known stable structures but also a variety of novel competitive metastable structures. This article focuses on the strengths and limitations of the algorithm as a multimodal global searcher. The algorithm has been implemented in software package PyChemia ( https://github.com/MaterialsDiscovery/PyChemia ), an open source python library for materials analysis. We present applications of the method to van der Waals clusters and crystal structures. The FF method is shown to be competitive when compared to other population-based global searchers.
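
    For orientation, a bare-bones firefly algorithm for continuous minimization fits in a short function; this is a generic textbook variant, not the PyChemia implementation, and all parameters are illustrative:

      import numpy as np

      def firefly_minimize(f, dim, n=25, iters=200, alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
          """Bare-bones firefly algorithm: dimmer flies move toward brighter
          (lower-f) ones with distance-attenuated attraction plus a random kick."""
          rng = np.random.default_rng(seed)
          x = rng.uniform(-5, 5, (n, dim))
          for _ in range(iters):
              fx = np.array([f(xi) for xi in x])
              for i in range(n):
                  for j in range(n):
                      if fx[j] < fx[i]:                  # j is brighter than i
                          r2 = np.sum((x[i] - x[j]) ** 2)
                          beta = beta0 * np.exp(-gamma * r2)
                          x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
              alpha *= 0.97                              # anneal the random walk
          fx = np.array([f(xi) for xi in x])
          return x[fx.argmin()], fx.min()

      best, val = firefly_minimize(lambda v: np.sum(v ** 2), dim=3)
      print(best, val)   # converges near the origin for this convex test function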

  6. Lossy compression of weak lensing data

    DOE PAGES

    Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; ...

    2011-07-12

    Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ -4 × 10^-4. This bias is small when compared to the many other issues that precision weak lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.
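
    The essence of square-root compression is variance stabilization: Poisson counts with mean N carry shot noise of about sqrt(N), so quantizing 2*sqrt(counts) to integers keeps the quantization error below the photon noise by construction. A miniature sketch of this idea (not the exact codec of Bernstein et al. 2010; the image and bit depths are illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      img = rng.poisson(lam=1000.0, size=(256, 256))        # 16-bit-like sky image

      # quantize in the variance-stabilized (square-root) domain
      codes = np.rint(2.0 * np.sqrt(img)).astype(np.uint8)  # valid while counts < ~16000
      decoded = (codes.astype(float) / 2.0) ** 2

      err = decoded - img
      print("rms quantization error:", err.std())           # ~ 9 counts here
      print("Poisson sigma         :", np.sqrt(img.mean())) # ~ 31.6 counts, well above it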

  7. DOA Estimation for Underwater Wideband Weak Targets Based on Coherent Signal Subspace and Compressed Sensing

    PubMed Central

    2018-01-01

    Direction of arrival (DOA) estimation is the basis for underwater target localization and tracking using towed line array sonar devices. A method of DOA estimation for underwater wideband weak targets based on coherent signal subspace (CSS) processing and compressed sensing (CS) theory is proposed. Under the CSS processing framework, wideband frequency focusing is accomplished by a two-sided correlation transformation, allowing the DOA of underwater wideband targets to be estimated based on the spatial sparsity of the targets and the compressed sensing reconstruction algorithm. Through analysis and processing of simulation data and marine trial data, it is shown that this method can accomplish the DOA estimation of underwater wideband weak targets. Results also show that this method can considerably improve the spatial spectrum of weak target signals, enhancing the ability to detect them. It can solve the problems of low directional resolution and unreliable weak-target detection in traditional beamforming technology. Compared with conventional minimum variance distortionless response (MVDR) beamformers, this method has many advantages, such as higher directional resolution, wider detection range, fewer required snapshots and more accurate detection of weak targets. PMID:29562642

  8. Improving the Numerical Stability of Fast Matrix Multiplication

    DOE PAGES

    Ballard, Grey; Benson, Austin R.; Druinsky, Alex; ...

    2016-10-04

    Fast algorithms for matrix multiplication, namely those that perform asymptotically fewer scalar operations than the classical algorithm, have been considered primarily of theoretical interest. Apart from Strassen's original algorithm, few fast algorithms have been efficiently implemented or used in practical applications. However, there exist many practical alternatives to Strassen's algorithm with varying performance and numerical properties. Fast algorithms are known to be numerically stable, but because their error bounds are slightly weaker than the classical algorithm, they are not used even in cases where they provide a performance benefit. We argue in this study that the numerical sacrifice of fast algorithms, particularly for the typical use cases of practical algorithms, is not prohibitive, and we explore ways to improve the accuracy both theoretically and empirically. The numerical accuracy of fast matrix multiplication depends on properties of the algorithm and of the input matrices, and we consider both contributions independently. We generalize and tighten previous error analyses of fast algorithms and compare their properties. We discuss algorithmic techniques for improving the error guarantees from two perspectives: manipulating the algorithms, and reducing input anomalies by various forms of diagonal scaling. In conclusion, we benchmark performance and demonstrate our improved numerical accuracy.
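
    For concreteness, one recursion level of Strassen's algorithm, the prototypical fast method discussed above, can be written and checked against the classical product in a few lines (a sketch, not the paper's optimized implementations):

      import numpy as np

      def strassen_once(A, B):
          """One recursion level of Strassen's algorithm (even-sized inputs):
          7 block multiplies instead of 8, at the cost of slightly weaker
          (norm-wise rather than element-wise) error bounds."""
          n = A.shape[0] // 2
          A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
          B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
          M1 = (A11 + A22) @ (B11 + B22)
          M2 = (A21 + A22) @ B11
          M3 = A11 @ (B12 - B22)
          M4 = A22 @ (B21 - B11)
          M5 = (A11 + A12) @ B22
          M6 = (A21 - A11) @ (B11 + B12)
          M7 = (A12 - A22) @ (B21 + B22)
          C = np.empty_like(A)
          C[:n, :n] = M1 + M4 - M5 + M7
          C[:n, n:] = M3 + M5
          C[n:, :n] = M2 + M4
          C[n:, n:] = M1 - M2 + M3 + M6
          return C

      rng = np.random.default_rng(0)
      A, B = rng.normal(size=(512, 512)), rng.normal(size=(512, 512))
      err = np.abs(strassen_once(A, B) - A @ B).max()
      print("max |Strassen - classical| =", err)   # tiny, but above the classical rounding error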

  9. Filtered-x generalized mixed norm (FXGMN) algorithm for active noise control

    NASA Astrophysics Data System (ADS)

    Song, Pucha; Zhao, Haiquan

    2018-07-01

    The standard adaptive filtering algorithm with a single error norm exhibits a slow convergence rate and poor noise reduction performance in certain environments. To overcome this drawback, a filtered-x generalized mixed norm (FXGMN) algorithm for active noise control (ANC) systems is proposed. The FXGMN algorithm is developed by using a convex mixture of lp and lq norms as the cost function, so that it can be viewed as a generalized version of most existing adaptive filtering algorithms, reducing to a specific algorithm for particular parameter choices. In particular, it can be used to solve ANC under Gaussian and non-Gaussian noise environments (including impulsive noise with a symmetric α-stable (SαS) distribution). To further enhance the algorithm performance, namely convergence speed and noise reduction, a convex combination of the FXGMN algorithm (C-FXGMN) is presented. Moreover, the computational complexity of the proposed algorithms is analyzed, and a stability condition for the proposed algorithms is provided. Simulation results show that the proposed FXGMN and C-FXGMN algorithms achieve faster convergence and higher noise reduction than other existing algorithms under various noise input conditions, and that the C-FXGMN algorithm outperforms the FXGMN.
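
    The core of a mixed-norm update is the stochastic gradient of J = lam*|e|^p + (1-lam)*|e|^q. The sketch below applies it to plain system identification under moderately impulsive noise; the secondary-path ("filtered-x") stage of a real ANC system is omitted, and all parameters are illustrative:

      import numpy as np

      rng = np.random.default_rng(0)
      h_true = np.array([0.8, -0.4, 0.2])              # unknown plant
      x = rng.normal(size=4000)                        # reference signal
      d = np.convolve(x, h_true)[:x.size]              # desired signal
      d += 0.3 * rng.standard_t(df=2.5, size=x.size)   # moderately impulsive noise

      p, q, lam, mu = 1.0, 2.0, 0.7, 0.01
      w = np.zeros(3)
      for n in range(3, x.size):
          xv = x[n - 2:n + 1][::-1]                    # current input vector
          e = d[n] - w @ xv
          # gradient magnitude of the mixed-norm cost lam*|e|^p + (1-lam)*|e|^q
          g = lam * p * np.abs(e) ** (p - 1) + (1 - lam) * q * np.abs(e) ** (q - 1)
          w += mu * g * np.sign(e) * xv

      print("estimated plant:", w)                     # close to h_true despite impulses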

  10. Formation Flying Design and Applications in Weak Stability Boundary Regions

    NASA Technical Reports Server (NTRS)

    Folta, David

    2003-01-01

    Weak stability boundary (WSB) regions serve as superior locations for interferometric scientific investigations. These regions are often selected to minimize environmental disturbances and maximize observing efficiency. The design of formations in these regions is becoming ever more challenging as more complex missions are envisioned. Algorithms for formation design must be further developed to incorporate a better understanding of the WSB solution space; this development will improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple formation missions in WSB regions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes both algorithm and software development. The Constellation-X, Maxim, and Stellar Imager missions are examples of the use of improved numerical methods for attaining constrained formation geometries and controlling their dynamical evolution. This paper presents a survey of formation missions in the WSB regions and a brief description of formation design using numerical and dynamical techniques.

  11. Synchronous parallel spatially resolved stochastic cluster dynamics

    DOE PAGES

    Dunn, Aaron; Dingreville, Rémi; Martínez, Enrique; ...

    2016-04-23

    In this work, a spatially resolved stochastic cluster dynamics (SRSCD) model for radiation damage accumulation in metals is implemented using a synchronous parallel kinetic Monte Carlo algorithm. The parallel algorithm is shown to significantly increase the size of representative volumes achievable in SRSCD simulations of radiation damage accumulation. Additionally, weak scaling performance of the method is tested in two cases: (1) an idealized case of Frenkel pair diffusion and annihilation, and (2) a characteristic example problem including defect cluster formation and growth in α-Fe. For the latter case, weak scaling is tested using both Frenkel pair and displacement cascade damage. To improve scaling of simulations with cascade damage, an explicit cascade implantation scheme is developed for cases in which fast-moving defects are created in displacement cascades. For the first time, simulation of radiation damage accumulation in nanopolycrystals can be achieved with a three dimensional rendition of the microstructure, allowing demonstration of the effect of grain size on defect accumulation in Frenkel pair-irradiated α-Fe.

  12. Discovering loose group movement patterns from animal trajectories

    USGS Publications Warehouse

    Wang, Yuwei; Luo, Ze; Xiong, Yan; Prosser, Diann J.; Newman, Scott H.; Takekawa, John Y.; Yan, Baoping

    2015-01-01

    Technical advances in positioning technologies enable us to track animal movements at finer spatial and temporal scales, and further help to discover a variety of complex interactive relationships. In this paper, considering the loose gathering characteristics of real-life groups' members during movement, we propose two kinds of loose group movement patterns and corresponding discovery algorithms. First, we propose the weakly consistent group movement pattern, which allows the gathering of only part of the members and individual temporary leaves from the whole during movement. To tolerate high dispersion of the group at some moments (i.e., to accommodate the discontinuity of the group's gatherings), we further devise the weakly consistent and continuous group movement pattern. Extensive experimental analysis and comparison on real and synthetic data show that the group pattern discovery algorithms proposed in this paper accommodate the frequent real-life divergences of members during movement, can discover more complete memberships, and deliver considerable performance.

  13. Low dose reconstruction algorithm for differential phase contrast imaging.

    PubMed

    Wang, Zhentian; Huang, Zhifeng; Zhang, Li; Chen, Zhiqiang; Kang, Kejun; Yin, Hongxia; Wang, Zhenchang; Marco, Stampanoni

    2011-01-01

    Differential phase contrast imaging computed tomography (DPCI-CT) is a novel x-ray inspection method that reconstructs the distribution of the refraction index rather than the attenuation coefficient in weakly absorbing samples. In this paper, we propose an iterative reconstruction algorithm for DPCI-CT which benefits from compressed sensing theory. We first realize a differential algebraic reconstruction technique (DART) by discretizing the projection process of differential phase contrast imaging into a linear partial derivative matrix. In this way the compressed sensing reconstruction problem of DPCI is transformed into an already-solved problem in transmission CT imaging. Our algorithm has the potential to reconstruct the refraction index distribution of the sample from highly undersampled projection data, and thus can significantly reduce the dose and inspection time. The proposed algorithm has been validated by numerical simulations and actual experiments.

  14. Explicit Filtering Based Low-Dose Differential Phase Reconstruction Algorithm with the Grating Interferometry.

    PubMed

    Jiang, Xiaolei; Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang

    2015-01-01

    X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, that is, the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scanning, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refraction indexes rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scanning without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with the explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm.

  15. Explicit Filtering Based Low-Dose Differential Phase Reconstruction Algorithm with the Grating Interferometry

    PubMed Central

    Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang

    2015-01-01

    X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, that is, the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scanning, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refraction indexes rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scanning without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with the explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm. PMID:26089971

  16. Robust breathing signal extraction from cone beam CT projections based on adaptive and global optimization techniques

    PubMed Central

    Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E; Lo, Yeh-Chi

    2017-01-01

    We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images with which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals can be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to those with the AS based signals. The average errors for the enrolled patients between the estimated breath per minute (bpm) and the reference waveform bpm can be as low as −0.07 with the standard deviation 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work will provide a practical solution to rendering markerless breathing signal using the CBCT projections for thoracic and abdominal patients. PMID:27008349

  17. The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection.

    PubMed

    Tang, Zaixiang; Shen, Yueping; Zhang, Xinyan; Yi, Nengjun

    2017-01-01

    Large-scale "omics" data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, there are considerable challenges in analyzing high-dimensional molecular data, including the large number of potential molecular predictors, limited number of samples, and small effect of each predictor. We propose new Bayesian hierarchical generalized linear models, called spike-and-slab lasso GLMs, for prognostic prediction and detection of associated genes using large-scale molecular data. The proposed model employs a spike-and-slab mixture double-exponential prior for coefficients that can induce weak shrinkage on large coefficients, and strong shrinkage on irrelevant coefficients. We have developed a fast and stable algorithm to fit large-scale hierarchal GLMs by incorporating expectation-maximization (EM) steps into the fast cyclic coordinate descent algorithm. The proposed approach integrates nice features of two popular methods, i.e., penalized lasso and Bayesian spike-and-slab variable selection. The performance of the proposed method is assessed via extensive simulation studies. The results show that the proposed approach can provide not only more accurate estimates of the parameters, but also better prediction. We demonstrate the proposed procedure on two cancer data sets: a well-known breast cancer data set consisting of 295 tumors, and expression data of 4919 genes; and the ovarian cancer data set from TCGA with 362 tumors, and expression data of 5336 genes. Our analyses show that the proposed procedure can generate powerful models for predicting outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). Copyright © 2017 by the Genetics Society of America.

  18. Communication-avoiding symmetric-indefinite factorization

    DOE PAGES

    Ballard, Grey Malone; Becker, Dulcenia; Demmel, James; ...

    2014-11-13

    We describe and analyze a novel symmetric triangular factorization algorithm. The algorithm is essentially a block version of Aasen's triangular tridiagonalization. It factors a dense symmetric matrix A as the product A = PLTL^T P^T, where P is a permutation matrix, L is lower triangular, and T is block tridiagonal and banded. The algorithm is the first symmetric-indefinite communication-avoiding factorization: it performs an asymptotically optimal amount of communication in a two-level memory hierarchy for almost any cache-line size. Adaptations of the algorithm to parallel computers are likely to be communication efficient as well; one such adaptation has been recently published. As a result, the current paper describes the algorithm, proves that it is numerically stable, and proves that it is communication optimal.

  19. Communication-avoiding symmetric-indefinite factorization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballard, Grey Malone; Becker, Dulcenia; Demmel, James

    We describe and analyze a novel symmetric triangular factorization algorithm. The algorithm is essentially a block version of Aasen's triangular tridiagonalization. It factors a dense symmetric matrix A as the product A = PLTL^T P^T, where P is a permutation matrix, L is lower triangular, and T is block tridiagonal and banded. The algorithm is the first symmetric-indefinite communication-avoiding factorization: it performs an asymptotically optimal amount of communication in a two-level memory hierarchy for almost any cache-line size. Adaptations of the algorithm to parallel computers are likely to be communication efficient as well; one such adaptation has been recently published. As a result, the current paper describes the algorithm, proves that it is numerically stable, and proves that it is communication optimal.

  20. Preliminary evaluation of the Environmental Research Institute of Michigan crop calendar shift algorithm for estimation of spring wheat development stage. [North Dakota, South Dakota, Montana, and Minnesota

    NASA Technical Reports Server (NTRS)

    Phinney, D. E. (Principal Investigator)

    1980-01-01

    An algorithm for estimating spectral crop calendar shifts of spring small grains was applied to 1978 spring wheat fields. The algorithm provides estimates of the date of peak spectral response by maximizing the cross correlation between a reference profile and the observed multitemporal pattern of Kauth-Thomas greenness for a field. A methodology was developed for estimation of crop development stage from the date of peak spectral response. Evaluation studies showed that the algorithm provided stable estimates with no geographical bias. Crop development stage estimates had a root mean square error near 10 days. The algorithm was recommended for comparative testing against other models which are candidates for use in AgRISTARS experiments.
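
    The shift-estimation step reduces to maximizing a cross-correlation, which can be sketched directly (the profiles below are synthetic stand-ins for Kauth-Thomas greenness, not AgRISTARS data):

      import numpy as np

      # Estimate a crop calendar shift by maximizing the cross-correlation
      # between a reference greenness profile and a field's observed profile.
      days = np.arange(0, 160)

      def greenness(peak_day):                    # idealized seasonal profile
          return np.exp(-0.5 * ((days - peak_day) / 20.0) ** 2)

      reference = greenness(90.0)
      observed = greenness(103.0) + np.random.default_rng(0).normal(0, 0.05, days.size)

      corr = np.correlate(observed - observed.mean(),
                          reference - reference.mean(), mode="full")
      shift = corr.argmax() - (days.size - 1)
      print("estimated shift (days):", shift)     # ~ +13, i.e. a later peak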

  1. Finite-element time-domain algorithms for modeling linear Debye and Lorentz dielectric dispersions at low frequencies.

    PubMed

    Stoykov, Nikolay S; Kuiken, Todd A; Lowery, Madeleine M; Taflove, Allen

    2003-09-01

    We present what we believe to be the first algorithms that use a simple scalar-potential formulation to model linear Debye and Lorentz dielectric dispersions at low frequencies in the context of finite-element time-domain (FETD) numerical solutions of electric potential. The new algorithms, which permit treatment of multiple-pole dielectric relaxations, are based on the auxiliary differential equation method and are unconditionally stable. We validate the algorithms by comparison with the results of a previously reported method based on the Fourier transform. The new algorithms should be useful in calculating the transient response of biological materials subject to impulsive excitation. Potential applications include FETD modeling of electromyography, functional electrical stimulation, defibrillation, and effects of lightning and impulsive electric shock.
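
    The auxiliary differential equation idea for a single Debye pole can be sketched with a trapezoidal update, which is unconditionally stable for any time step; here a prescribed E(t) stands in for the FETD potential solve, and the material constants are illustrative:

      import numpy as np

      # Auxiliary differential equation for one Debye pole,
      #   tau * dP/dt + P = eps0 * d_eps * E(t),
      # discretized with the trapezoidal rule. The step dt = 5*tau would make
      # forward Euler blow up; the trapezoidal update stays stable.
      eps0, d_eps, tau = 8.854e-12, 50.0, 1e-8    # illustrative tissue-like values
      dt = 5e-8
      t = np.arange(0.0, 2e-6, dt)
      E = np.where(t > 1e-7, 1.0, 0.0)            # step (impulsive) excitation

      P = np.zeros_like(t)
      a = dt / (2.0 * tau)
      for n in range(len(t) - 1):
          P[n + 1] = ((1 - a) * P[n] + a * eps0 * d_eps * (E[n + 1] + E[n])) / (1 + a)

      # P relaxes toward eps0 * d_eps * E with time constant tau
      print(P[-1], eps0 * d_eps)                  # steady state matches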

  2. The role of RhD agglutination for the detection of weak D red cells by anti-D flow cytometry.

    PubMed

    Grey, D E; Davies, J I; Connolly, M; Fong, E A; Erber, W N

    2005-04-01

    Anti-D flow cytometry is an accurate method for quantifying feto-maternal haemorrhage (FMH). However, weak D red cells with <1000 RhD sites are not detectable using this methodology but are immunogenic. As quantitation of RhD sites is not practical, an alternative approach is required to identify those weak D fetal red cells where anti-D flow cytometry is inappropriate. We describe a simple algorithm based on RhD agglutination and flow cytometry peak separation. All weak D (n = 34) gave weak agglutination with RUM-1 on immediate spin (grading

  3. Limitations and potentials of current motif discovery algorithms

    PubMed Central

    Hu, Jianjun; Li, Bin; Kihara, Daisuke

    2005-01-01

    Computational methods for de novo identification of gene regulation elements, such as transcription factor binding sites, have proved to be useful for deciphering genetic regulatory networks. However, despite the availability of a large number of algorithms, their strengths and weaknesses are not sufficiently understood. Here, we designed a comprehensive set of performance measures and benchmarked five modern sequence-based motif discovery algorithms using large datasets generated from Escherichia coli RegulonDB. Factors that affect the prediction accuracy, scalability and reliability are characterized. It is revealed that the nucleotide and the binding site level accuracy are very low, while the motif level accuracy is relatively high, which indicates that the algorithms can usually capture at least one correct motif in an input sequence. To exploit diverse predictions from multiple runs of one or more algorithms, a consensus ensemble algorithm has been developed, which achieved 6–45% improvement over the base algorithms by increasing both the sensitivity and specificity. Our study illustrates limitations and potentials of existing sequence-based motif discovery algorithms. Taking advantage of the revealed potentials, several promising directions for further improvements are discussed. Since the sequence-based algorithms are the baseline of most of the modern motif discovery algorithms, this paper suggests substantial improvements would be possible for them. PMID:16284194

  4. A robust return-map algorithm for general multisurface plasticity

    DOE PAGES

    Adhikary, Deepak P.; Jayasundara, Chandana T.; Podgorney, Robert K.; ...

    2016-06-16

    Three new contributions to the field of multisurface plasticity are presented for general situations with an arbitrary number of nonlinear yield surfaces with hardening or softening. A method for handling linearly dependent flow directions is described. A residual that can be used in a line search is defined. An algorithm that has been implemented and comprehensively tested is discussed in detail. Examples are presented to illustrate the computational cost of various components of the algorithm. The overall result is that a single Newton-Raphson iteration of the algorithm costs between 1.5 and 2 times that of an elastic calculation. Examples also illustrate the successful convergence of the algorithm in complicated situations. For example, without the new contributions presented here, the algorithm fails to converge for approximately 50% of the trial stresses for a common geomechanical model of sedimentary rocks, while the current algorithm succeeds completely. Since it involves no approximations, the algorithm is used to quantify the accuracy of an efficient, pragmatic, but approximate, algorithm used for sedimentary-rock plasticity in a commercial software package. Furthermore, the main weakness of the algorithm is identified as the difficulty of correctly choosing the set of initially active constraints in the general setting.

  5. Lump Solitons in Surface Tension Dominated Flows

    NASA Astrophysics Data System (ADS)

    Milewski, Paul; Berger, Kurt

    1999-11-01

    The Kadomtsev-Petviashvili I equation (KPI), which models small-amplitude, weakly three-dimensional, surface-tension dominated long waves, is integrable and allows for algebraically decaying lump solitary waves. It is not known (theoretically or numerically) whether the full free-surface Euler equations support such solutions. We consider an intermediate model, the generalised Benney-Luke equation (gBL), which is isotropic (not weakly three-dimensional) and contains KPI as a limit. We show numerically that: 1. gBL supports lump solitary waves; 2. These waves collide elastically and are stable; 3. They are generated by resonant flow over an obstacle.

  6. Theoretical investigation of the weak interaction between graphene and alcohol solvents

    NASA Astrophysics Data System (ADS)

    Wang, Haining; Chen, Sian; Lu, Shanfu; Xiang, Yan

    2017-05-01

    The dispersion of graphene in five different alcohol solvents was investigated by evaluating the binding energy between graphene and alcohol molecules using the DFT-D method. The calculation showed that the most stable binding energy appeared at a distance of ∼3.5 Å between graphene and the alcohol molecules and increased linearly as the alcohol changed from methanol to 1-pentanol. The weak interaction was further graphically illustrated using the reduced density gradient method. The theoretical study revealed that alcohols with more carbon atoms could be a good starting point for screening suitable solvents for graphene dispersion.

  7. On the Wigner law in dilute random matrices

    NASA Astrophysics Data System (ADS)

    Khorunzhy, A.; Rodgers, G. J.

    1998-12-01

    We consider ensembles of N × N symmetric matrices whose entries are weakly dependent random variables. We show that random dilution can change the limiting eigenvalue distribution of such matrices. We prove that under general and natural conditions the normalised eigenvalue counting function coincides with the semicircle (Wigner) distribution in the limit N → ∞. This can be explained by the observation that dilution (or more generally, random modulation) eliminates the weak dependence (or correlations) between random matrix entries. It also supports our earlier conjecture that the Wigner distribution is stable to random dilution and modulation.
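
    The claimed stability of the semicircle law under dilution is easy to check numerically: dilute a symmetric Gaussian matrix, rescale, and compare the first even eigenvalue moments with Wigner's values. A sketch using i.i.d. entries for simplicity, rather than the weakly dependent ensembles the paper treats:

      import numpy as np

      rng = np.random.default_rng(0)
      N, prob = 2000, 0.1                      # keep each entry with probability prob
      M = rng.normal(size=(N, N))
      mask = rng.random((N, N)) < prob
      A = np.triu(M * mask, 1)
      A = A + A.T
      A /= np.sqrt(N * prob)                   # normalize so the semicircle has radius 2

      eig = np.linalg.eigvalsh(A)
      # semicircle on [-2, 2]: second moment 1, fourth moment 2
      print("2nd moment:", np.mean(eig ** 2))  # ~ 1
      print("4th moment:", np.mean(eig ** 4))  # ~ 2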

  8. Realization of Multi-Stable Ground States in a Nematic Liquid Crystal by Surface and Electric Field Modification

    NASA Astrophysics Data System (ADS)

    Gwag, Jin Seog; Kim, Young-Ki; Lee, Chang Hoon; Kim, Jae-Hoon

    2015-06-01

    Owing to the significant price drop of liquid crystal displays (LCDs) and efforts to save natural resources, LCDs are even replacing paper for displaying static images such as price tags and advertising boards. Because of growing market demand for such devices, LCDs that can adopt numerous surface alignments of the director as their ground state, so-called multi-stable LCDs, have come into the limelight owing to their great potential for low power consumption. However, an industrially feasible multi-stable LCD has not yet been demonstrated. In this paper, we propose a simple and novel configuration for a multi-stable LCD. We demonstrate experimentally and theoretically that a variety of stable surface alignments can be achieved by the field-induced surface dragging effect on an aligning layer with weak surface anchoring. The simplicity and stability of the proposed system make it suitable for multi-stable LCDs displaying static images with low power consumption, and thus open applications in various fields.

  9. X-band Electron Paramagnetic Resonance Investigation of Stable Organic Radicals Present under Cold Stratification in 'Fuji' Apple Seeds.

    PubMed

    Nakagawa, Kouichi; Matsumoto, Kazuhiro; Chaiserm, Nattakan; Priprem, Aroonsri

    2017-01-01

    We investigated stable organic radicals formed in response to cold stratification in 'Fuji' apple seeds using the X-band (9 GHz) electron paramagnetic resonance (EPR) technique. The technique primarily detected two paramagnetic species in each seed, which were assigned as a stable organic radical and Mn2+ species based on their g values and hyperfine components. The signal from the stable radicals, at a g value of about 2.00, was strong and relatively stable. Significant changes in radical intensity were observed in apple seeds on refrigeration along with water supplementation. The strongest radical intensity and a very weak Mn2+ signal were observed for seeds kept in moisture-containing sand in a refrigerator. Noninvasive EPR of the radicals in each seed revealed that the stable radicals were located primarily in the seed coat. These results indicate that the significant radical intensity changes in apple seeds under refrigeration for at least 90 days, followed by water supplementation for one week, can be related to cold stratification of the seeds.

  10. MetExtract: a new software tool for the automated comprehensive extraction of metabolite-derived LC/MS signals in metabolomics research.

    PubMed

    Bueschl, Christoph; Kluger, Bernhard; Berthiller, Franz; Lirk, Gerald; Winkler, Stephan; Krska, Rudolf; Schuhmacher, Rainer

    2012-03-01

    Liquid chromatography-mass spectrometry (LC/MS) is a key technique in metabolomics. Since the efficient assignment of MS signals to true biological metabolites becomes feasible in combination with in vivo stable isotopic labelling, our aim was to provide a new software tool for this purpose. An algorithm and a program (MetExtract) have been developed to search for metabolites in in vivo labelled biological samples. The algorithm makes use of the chromatographic characteristics of the LC/MS data and detects MS peaks fulfilling the criteria of stable isotopic labelling. As a result of all calculations, the algorithm specifies a list of m/z values, the corresponding number of atoms of the labelling element (e.g. carbon) together with retention time and extracted adduct-, fragment- and polymer ions. Its function was evaluated using native 12C- and uniformly 13C-labelled standard substances. MetExtract is available free of charge and warranty at http://code.google.com/p/metextract/. Precompiled executables are available for Windows operating systems. Supplementary data are available at Bioinformatics online.
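
    The core filtering criterion, pairing a native peak with its uniformly 13C-labelled partner shifted by n carbon-mass differences, can be sketched as follows; the function name, tolerance and toy spectrum are hypothetical, not MetExtract's actual interface.

```python
# A metabolite with n carbons measured in native (12C) and uniformly
# 13C-labelled form yields two peaks separated by n * (m13C - m12C).
DELTA_MC = 1.00336  # mass difference 13C - 12C in Da

def find_labelled_pairs(mz_values, n_carbons=range(1, 60), tol=0.005, charge=1):
    pairs = []
    peaks = sorted(mz_values)
    for mz in peaks:
        for n in n_carbons:
            partner = mz + n * DELTA_MC / charge
            if any(abs(p - partner) < tol for p in peaks):
                pairs.append((mz, partner, n))
    return pairs

# Toy spectrum: a 6-carbon metabolite pair plus two unrelated noise peaks.
spectrum = [181.071, 187.091, 150.000, 233.300]
print(find_labelled_pairs(spectrum))   # detects the (181.071, ~187.091, 6) pair
```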

  11. Ternary alloy material prediction using genetic algorithm and cluster expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Chong

    2015-12-01

    This thesis summarizes our study of crystal structure prediction for the Fe-V-Si system using a genetic algorithm and cluster expansion. Our goal is to explore and look for new stable compounds. We started from the ten known experimental phases and calculated the formation energies of those compounds using a density functional theory (DFT) package, namely VASP. The convex hull was generated from the DFT calculations of the experimentally known phases. We then performed random searches on some metal-rich (Fe and V) compositions and found that the lowest-energy structures had a body-centred cubic (bcc) underlying lattice, on which we performed our systematic computational searches using the genetic algorithm and cluster expansion. Among the hundreds of searched compositions, thirteen were selected and their DFT formation energies were obtained with VASP. The stability of those thirteen compounds was checked against the experimental convex hull. We found that the composition 24-8-16, i.e., Fe3VSi2, is a new stable phase, which can be very inspiring for future experiments.
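
    A minimal sketch of the formation-energy bookkeeping behind such a stability check is given below; the energies are made-up placeholders, not DFT values, and a real check would additionally measure the distance of each candidate to the ternary convex hull.

```python
# Formation energy per atom relative to elemental reference energies.
# All numbers below are hypothetical stand-ins for DFT totals.
def formation_energy_per_atom(e_total, counts, e_elem):
    """counts: {element: n_atoms}; e_elem: {element: reference energy/atom}."""
    n_atoms = sum(counts.values())
    e_ref = sum(n * e_elem[el] for el, n in counts.items())
    return (e_total - e_ref) / n_atoms

e_elem = {"Fe": -8.3, "V": -9.0, "Si": -5.4}                # hypothetical eV/atom
ef = formation_energy_per_atom(-372.0, {"Fe": 24, "V": 8, "Si": 16}, e_elem)
print(f"E_f = {ef:.3f} eV/atom")  # a phase is stable only if it lies on the hull
```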

  12. Identification of stable areas in unreferenced laser scans for automated geomorphometric monitoring

    NASA Astrophysics Data System (ADS)

    Wujanz, Daniel; Avian, Michael; Krueger, Daniel; Neitzel, Frank

    2018-04-01

    Current research questions in the field of geomorphology focus on the impact of climate change on several processes that subsequently cause natural hazards. Geodetic deformation measurements are a suitable tool to document such geomorphic mechanisms, e.g. by capturing a region of interest with terrestrial laser scanners, which results in a so-called 3-D point cloud. The main problem in deformation monitoring is the transformation of 3-D point clouds captured at different points in time (epochs) into a stable reference coordinate system. In this contribution, a surface-based registration methodology termed the iterative closest proximity algorithm (ICProx) is applied, which solely uses point cloud data as input, similar to the iterative closest point (ICP) algorithm. The aim of this study is to automatically classify deformations that occurred at a rock glacier and an ice glacier, as well as in a rockfall area. For every case study, two epochs were processed, while the datasets differ notably in geometric characteristics, distribution and magnitude of deformation. In summary, the ICProx algorithm's classification accuracy is 70% on average in comparison to reference data.

  13. Adaptive optics retinal imaging with automatic detection of the pupil and its boundary in real time using Shack-Hartmann images.

    PubMed

    de Castro, Alberto; Sawides, Lucie; Qi, Xiaofeng; Burns, Stephen A

    2017-08-20

    Retinal imaging with an adaptive optics (AO) system usually requires that the eye be centered and stable relative to the exit pupil of the system. Aberrations are then typically corrected inside a fixed circular pupil. This approach can be restrictive when imaging some subjects, since the pupil may not be round and maintaining a stable head position can be difficult. In this paper, we present an automatic algorithm that relaxes these constraints. An image quality metric is computed for each spot of the Shack-Hartmann image to detect the pupil and its boundary, and the control algorithm is applied only to regions within the subject's pupil. Images on a model eye as well as for five subjects were obtained to show that a system exit pupil larger than the subject's eye pupil could be used for AO retinal imaging without a reduction in image quality. This algorithm automates the task of selecting pupil size. It also may relax constraints on centering the subject's pupil and on the shape of the pupil.

  14. A lattice relaxation algorithm for three-dimensional Poisson-Nernst-Planck theory with application to ion transport through the gramicidin A channel.

    PubMed Central

    Kurnikova, M G; Coalson, R D; Graf, P; Nitzan, A

    1999-01-01

    A lattice relaxation algorithm is developed to solve the Poisson-Nernst-Planck (PNP) equations for ion transport through arbitrary three-dimensional volumes. Calculations of systems characterized by simple parallel plate and cylindrical pore geometries are presented in order to calibrate the accuracy of the method. A study of ion transport through gramicidin A dimer is carried out within this PNP framework. Good agreement with experimental measurements is obtained. Strengths and weaknesses of the PNP approach are discussed. PMID:9929470
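
    For illustration, a minimal lattice-relaxation loop for the Poisson part of such a solver is sketched below (the full PNP iteration also updates the Nernst-Planck drift-diffusion equations); the grid size, source term and sweep count are toy choices.

```python
import numpy as np

# Jacobi relaxation of the discrete Poisson equation -lap(phi) = rho on a
# 2-D grid with phi = 0 on the boundary. Parameters are illustrative only.
n, h = 64, 1.0
phi = np.zeros((n, n))
rho = np.zeros((n, n))
rho[n // 2, n // 2] = 1.0                 # toy point-charge source

for _ in range(5000):
    # Each interior point relaxes toward the mean of its four neighbours
    # plus the local source contribution.
    phi[1:-1, 1:-1] = 0.25 * (
        phi[:-2, 1:-1] + phi[2:, 1:-1] + phi[1:-1, :-2] + phi[1:-1, 2:]
        + h**2 * rho[1:-1, 1:-1]
    )
print("potential at the source:", phi[n // 2, n // 2])
```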

  15. An Extended Deterministic Dendritic Cell Algorithm for Dynamic Job Shop Scheduling

    NASA Astrophysics Data System (ADS)

    Qiu, X. N.; Lau, H. Y. K.

    The problem of job shop scheduling in a dynamic environment, where random perturbations exist in the system, is studied. In this paper, an extended deterministic Dendritic Cell Algorithm (dDCA) is proposed to solve such a dynamic Job Shop Scheduling Problem (JSSP), in which unexpected events occur randomly. The algorithm is designed on the basis of the dDCA and improves on it by considering all types of signals and the magnitude of the output values. To evaluate the algorithm, ten benchmark problems are chosen and different kinds of disturbances are injected randomly. The results show that the algorithm performs competitively: it is capable of triggering the rescheduling process optimally, with much less run time needed to decide the rescheduling action. As such, the proposed algorithm is able to minimize the number of reschedulings under the defined objective and to keep the scheduling process stable and efficient.

  16. Information-based management mode based on value network analysis for livestock enterprises

    NASA Astrophysics Data System (ADS)

    Liu, Haoqi; Lee, Changhoon; Han, Mingming; Su, Zhongbin; Padigala, Varshinee Anu; Shen, Weizheng

    2018-01-01

    With the development of computer and IT technologies, enterprise management has gradually become information-based. Moreover, owing to poor technical competence and non-uniform management, most breeding enterprises lack organisation in data collection and management, and low efficiency results in increasing production costs. This paper adopts Struts2 to construct an information-based management system for standardised and normalised management of the production process in beef cattle breeding enterprises. We present a radio-frequency identification system built on a study of multiple-tag anti-collision using a dynamic grouping ALOHA algorithm. The algorithm builds on the existing ALOHA algorithm with an improved dynamic grouping scheme characterised by a high throughput rate, reaching a throughput 42% higher than that of the general ALOHA algorithm. As the number of tags changes, the system throughput remains relatively stable.
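
    The trade-off that such dynamic grouping exploits can be illustrated with a toy simulation of framed slotted ALOHA (a sketch under simplified assumptions, not the paper's algorithm):

```python
import numpy as np

# Framed slotted ALOHA: each tag picks a random slot in the frame; a slot
# with exactly one tag is a successful read. Counts are illustrative.
rng = np.random.default_rng(1)

def throughput(n_tags, frame_size, trials=2000):
    ok = 0
    for _ in range(trials):
        slots = rng.integers(0, frame_size, size=n_tags)
        counts = np.bincount(slots, minlength=frame_size)
        ok += np.count_nonzero(counts == 1)   # singleton slots succeed
    return ok / (trials * frame_size)

for frame in (64, 128, 256):
    print(frame, f"{throughput(128, frame):.3f}")
```

    Throughput peaks near 1/e ≈ 0.37 when the frame size matches the tag population, which is why adapting the grouping to the estimated number of tags keeps throughput stable.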

  17. A direct method for nonlinear ill-posed problems

    NASA Astrophysics Data System (ADS)

    Lakhal, A.

    2018-02-01

    We propose a direct method for solving nonlinear ill-posed problems in Banach spaces. The method is based on a stable inversion formula that we compute explicitly by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.

  18. An atlas of Rapp's 180-th order geopotential.

    NASA Astrophysics Data System (ADS)

    Melvin, P. J.

    1986-08-01

    Deprit's 1979 approach to the summation of the spherical harmonic expansion of the geopotential has been modified to spherical components and normalized Legendre polynomials. An algorithm has been developed which produces ten fields at the user's option: the undulations of the geoid, three anomalous components of the gravity vector, or six components of the Hessian of the geopotential (gravity gradient). The algorithm is stable to high orders in single precision and does not treat the polar regions as a special case. Eleven contour maps of components of the anomalous geopotential on the surface of the ellipsoid are presented to validate the algorithm.

  19. Numerical Algorithms Based on Biorthogonal Wavelets

    NASA Technical Reports Server (NTRS)

    Ponenti, Pj.; Liandrat, J.

    1996-01-01

    Wavelet bases are used to generate spaces of approximation for the resolution of bidimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is then stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multiresolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.

  20. Algorithm For Hypersonic Flow In Chemical Equilibrium

    NASA Technical Reports Server (NTRS)

    Palmer, Grant

    1989-01-01

    Implicit, finite-difference, shock-capturing algorithm calculates inviscid, hypersonic flows in chemical equilibrium. Implicit formulation chosen because overcomes limitation on mathematical stability encountered in explicit formulations. For dynamical portion of problem, Euler equations written in conservation-law form in Cartesian coordinate system for two-dimensional or axisymmetric flow. For chemical portion of problem, equilibrium state of gas at each point in computational grid determined by minimizing local Gibbs free energy, subject to local conservation of molecules, atoms, ions, and total enthalpy. Major advantage: resulting algorithm naturally stable and captures strong shocks without help of artificial-dissipation terms to damp out spurious numerical oscillations.

  1. Loss of echogenicity and onset of cavitation from echogenic liposomes: pulse repetition frequency independence

    PubMed Central

    Radhakrishnan, Kirthi; Haworth, Kevin J; Peng, Tao; McPherson, David D.; Holland, Christy K.

    2014-01-01

    Echogenic liposomes (ELIP) are being developed for the early detection and treatment of atherosclerotic lesions. An 80% loss of echogenicity of ELIP (Radhakrishnan et al. 2013) has been shown to be concomitant with the onset of stable and inertial cavitation. The ultrasound pressure amplitude at which this occurs is weakly dependent on pulse duration. Smith et al. (2007) have reported that the rapid fragmentation threshold of ELIP (based on changes in echogenicity) is dependent on the insonation pulse repetition frequency (PRF). The current study evaluates the relationship between loss of echogenicity and cavitation emissions from ELIP insonified by duplex Doppler pulses at four PRFs (1.25 kHz, 2.5 kHz, 5 kHz, and 8.33 kHz). Loss of echogenicity was evaluated on B-mode images of ELIP. Cavitation emissions from ELIP were recorded passively on a focused single-element transducer and a linear array. Emissions recorded by the linear array were beamformed and the spatial widths of stable and inertial cavitation emissions were compared to the calibrated azimuthal beamwidth of the Doppler pulse exceeding the stable and inertial cavitation thresholds. The inertial cavitation thresholds had a very weak dependence on PRF and stable cavitation thresholds were independent of PRF. The spatial widths of the cavitation emissions recorded by the passive cavitation imaging system agreed with the calibrated Doppler beamwidths. The results also show that 64%–79% loss of echogenicity can be used to classify the presence or absence of cavitation emissions with greater than 80% accuracy. PMID:25438849

  2. Loss of echogenicity and onset of cavitation from echogenic liposomes: pulse repetition frequency independence.

    PubMed

    Radhakrishnan, Kirthi; Haworth, Kevin J; Peng, Tao; McPherson, David D; Holland, Christy K

    2015-01-01

    Echogenic liposomes (ELIP) are being developed for the early detection and treatment of atherosclerotic lesions. An 80% loss of echogenicity of ELIP has been found to be concomitant with the onset of stable and inertial cavitation. The ultrasound pressure amplitude at which this occurs is weakly dependent on pulse duration. It has been reported that the rapid fragmentation threshold of ELIP (based on changes in echogenicity) is dependent on the insonation pulse repetition frequency (PRF). The study described here evaluates the relationship between loss of echogenicity and cavitation emissions from ELIP insonified by duplex Doppler pulses at four PRFs (1.25, 2.5, 5 and 8.33 kHz). Loss of echogenicity was evaluated on B-mode images of ELIP. Cavitation emissions from ELIP were recorded passively on a focused single-element transducer and a linear array. Emissions recorded by the linear array were beamformed, and the spatial widths of stable and inertial cavitation emissions were compared with the calibrated azimuthal beamwidth of the Doppler pulse exceeding the stable and inertial cavitation thresholds. The inertial cavitation thresholds had a very weak dependence on PRF, and stable cavitation thresholds were independent of PRF. The spatial widths of the cavitation emissions recorded by the passive cavitation imaging system agreed with the calibrated Doppler beamwidths. The results also indicate that 64%-79% loss of echogenicity can be used to classify the presence or absence of cavitation emissions with greater than 80% accuracy. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  3. Study of breakup and transfer of weakly bound nucleus 6Li to explore the low energy reaction dynamics

    NASA Astrophysics Data System (ADS)

    Zhang, G. L.; Zhang, G. X.; Hu, S. P.; Zhang, H. Q.; Gomes, P. R. S.; Lubian, J.; Guo, C. L.; Wu, X. G.; Yang, J. C.; Zheng, Y.; Li, C. B.; He, C. Y.; Zhong, J.; Li, G. S.; Yao, Y. J.; Guo, M. F.; Sun, H. B.; Valiente-Dobòn, J. J.; Goasduff, A.; Siciliano, M.; Galtarosa, F.; Francesco, R.; Testov, D.; Mengoni, D.; Bazzacco, D.; John, P. R.; Qu, W. W.; Wang, F.; Zheng, L.; Yu, L.; Chen, Q. M.; Luo, P. W.; Li, H. W.; Wu, Y. H.; Zhou, W. K.; Zhu, B. J.; Li, E. T.; Hao, X.

    2017-11-01

    Investigation of the effects of breakup and transfer of weakly bound nuclei on the fusion process has been an interesting research topic over the past several years. However, owing to the low intensities of presently available radioactive ion beams (RIBs), it is difficult to clearly explore the reaction mechanisms of nuclear systems with unstable nuclei. In comparison with RIBs, the beam intensities of stable weakly bound nuclei such as 6,7Li and 9Be, which have significant breakup probability, are orders of magnitude higher. Precise fusion measurements have already been performed with these stable weakly bound nuclei, and the effect of their breakup on the fusion process has been extensively studied. These nuclei show large production cross sections for particles beyond those from α + x breakup; such particles originate from non-capture breakup (NCBU), incomplete fusion (ICF) and transfer processes. However, the conclusions regarding the reaction dynamics remain unclear and contradictory. In our previous experiments we performed 6Li+96Zr and 6Li+154Sm measurements at the HI-13 tandem accelerator of the China Institute of Atomic Energy (CIAE) using an HPGe array. They showed a small complete fusion (CF) suppression for the medium-mass target nucleus 96Zr, in contrast to the roughly 35% suppression for the heavier target nucleus 154Sm at near-barrier energies; it seems that the CF suppression factor depends on the charge of the target nucleus. We also observed a one-neutron transfer process. However, experimental data are scarce for medium-mass target nuclei. In order to properly understand the influence of breakup and transfer of weakly bound projectiles on the fusion process, we performed a 6Li+89Y experiment at incident energies of 22 MeV and 34 MeV with the GALILEO array coupled to the EUCLIDES Si-ball at the Legnaro National Laboratory (LNL) in Italy. Using particle-particle and particle-γ coincidences, the different reaction mechanisms can be clearly explored.

  4. Career Commitment in Nursing.

    ERIC Educational Resources Information Center

    Gardner, Diane L.

    1992-01-01

    A longitudinal, repeated-measures descriptive survey used to measure career commitment and its relationship to turnover and work performance in 320 newly employed registered nurses at one hospital found that career commitment is not a stable phenomenon. The direct association between career commitment and turnover and with job performance is weak.…

  5. Managing Distributed Systems with Smart Subscriptions

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Lee, Diana D.; Swanson, Keith (Technical Monitor)

    2000-01-01

    We describe an event-based, publish-and-subscribe mechanism based on using 'smart subscriptions' to recognize weakly-structured events. We present a hierarchy of subscription languages (propositional, predicate, temporal and agent) and algorithms for efficiently recognizing event matches. This mechanism has been applied to the management of distributed applications.

  6. Apparatus and method for measurement of weak optical absorptions by thermally induced laser pulsing

    DOEpatents

    Cremers, D.A.; Keller, R.A.

    1982-06-08

    The thermal lensing phenomenon is used as the basis for measurement of weak optical absorptions when a cell containing the sample to be investigated is inserted into a normally continuous-wave operation laser-pumped dye laser cavity for which the output coupler is deliberately tilted relative to intracavity circulating laser light, and pulsed laser output ensues, the pulsewidth of which can be related to the sample absorptivity by a simple algorithm or calibration curve. A minimum detection limit of less than 10^-5 cm^-1 has been demonstrated using this technique.

  7. Apparatus and method for measurement of weak optical absorptions by thermally induced laser pulsing

    DOEpatents

    Cremers, D.A.; Keller, R.A.

    1985-10-01

    The thermal lensing phenomenon is used as the basis for measurement of weak optical absorptions when a cell containing the sample to be investigated is inserted into a normally continuous-wave operation laser-pumped dye laser cavity for which the output coupler is deliberately tilted relative to intracavity circulating laser light, and pulsed laser output ensues, the pulsewidth of which can be related to the sample absorptivity by a simple algorithm or calibration curve. A minimum detection limit of less than 10^-5 cm^-1 has been demonstrated using this technique. 6 figs.

  8. Finite element formulation with embedded weak discontinuities for strain localization under dynamic conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Tao; Mourad, Hashem M.; Bronkhorst, Curt A.

    Here, we present an explicit finite element formulation designed for the treatment of strain localization under highly dynamic conditions. We also used a material stability analysis to detect the onset of localization behavior. Finite elements with embedded weak discontinuities are employed with the aim of representing subsequent localized deformation accurately. The formulation and its algorithmic implementation are described in detail. Numerical results are presented to illustrate the usefulness of this computational framework in the treatment of strain localization under highly dynamic conditions, and to examine its performance characteristics in the context of two-dimensional plane-strain problems.

  9. Finite element formulation with embedded weak discontinuities for strain localization under dynamic conditions

    DOE PAGES

    Jin, Tao; Mourad, Hashem M.; Bronkhorst, Curt A.; ...

    2017-09-13

    Here, we present an explicit finite element formulation designed for the treatment of strain localization under highly dynamic conditions. We also used a material stability analysis to detect the onset of localization behavior. Finite elements with embedded weak discontinuities are employed with the aim of representing subsequent localized deformation accurately. The formulation and its algorithmic implementation are described in detail. Numerical results are presented to illustrate the usefulness of this computational framework in the treatment of strain localization under highly dynamic conditions, and to examine its performance characteristics in the context of two-dimensional plane-strain problems.

  10. Apparatus and method for measurement of weak optical absorptions by thermally induced laser pulsing

    DOEpatents

    Cremers, David A.; Keller, Richard A.

    1985-01-01

    The thermal lensing phenomenon is used as the basis for measurement of weak optical absorptions when a cell containing the sample to be investigated is inserted into a normally continuous-wave operation laser-pumped dye laser cavity for which the output coupler is deliberately tilted relative to intracavity circulating laser light, and pulsed laser output ensues, the pulsewidth of which can be related to the sample absorptivity by a simple algorithm or calibration curve. A minimum detection limit of less than 10^-5 cm^-1 has been demonstrated using this technique.

  11. An experimental adaptive array to suppress weak interfering signals

    NASA Technical Reports Server (NTRS)

    Walton, Eric K.; Gupta, Inder J.; Ksienski, Aharon A.; Ward, James

    1988-01-01

    An experimental adaptive antenna system to suppress weak interfering signals is described. It is a sidelobe canceller with two auxiliary elements. Modified feedback loops are used to control the array weights. The received signals are simulated in hardware for parameter control. Digital processing is used for algorithm implementation and performance evaluation. The experimental results are presented. They show that interfering signals as much as 10 dB below the thermal noise level in the main channel are suppressed by 20-30 dB. Such a system has potential application in suppressing the interference encountered in direct broadcast satellite communication systems.

  12. An ensemble pulsar time

    NASA Technical Reports Server (NTRS)

    Petit, Gerard; Thomas, Claudine; Tavella, Patrizia

    1993-01-01

    Millisecond pulsars are galactic objects that exhibit a very stable spinning period. Several tens of these celestial clocks have now been discovered, which opens the possibility that an average time scale may be deduced through a long-term stability algorithm. Such an ensemble average makes it possible to reduce the level of the instabilities originating from the pulsars or from other sources of noise, which are unknown but independent. The basis for such an algorithm is presented and applied to real pulsar data. It is shown that pulsar time could shortly become more stable than the present atomic time, for averaging times of a few years. Pulsar time can also be used as a flywheel to maintain the accuracy of atomic time in case of temporary failure of the primary standards, or to transfer the improved accuracy of future standards back to the present.

  13. A stable partitioned FSI algorithm for rigid bodies and incompressible flow. Part II: General formulation

    NASA Astrophysics Data System (ADS)

    Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.; Tang, Qi

    2017-08-01

    A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this second part of a two-part series, the general formulation of the AMP scheme is presented including the form of the AMP interface conditions and added-damping tensors for general geometries. A fully second-order accurate implementation of the AMP scheme is developed in two dimensions based on a fractional-step method for the incompressible Navier-Stokes equations using finite difference methods and overlapping grids to handle the moving geometry. The numerical scheme is verified on a number of difficult benchmark problems.

  14. Optimization of self-interstitial clusters in 3C-SiC with genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ko, Hyunseok; Kaczmarowski, Amy; Szlufarska, Izabela; Morgan, Dane

    2017-08-01

    Under irradiation, SiC develops damage commonly referred to as black spot defects, which are speculated to be self-interstitial atom clusters. To understand the evolution of these defect clusters and their impacts (e.g., through radiation-induced swelling) on the performance of SiC in nuclear applications, it is important to identify the cluster composition, structure, and shape. In this work the genetic algorithm code StructOpt was utilized to identify ground-state cluster structures in 3C-SiC. The genetic algorithm was used to explore clusters of up to ∼30 interstitials of C-only, Si-only, and Si-C mixtures embedded in the SiC lattice. We performed the structure search using Hamiltonians from both density functional theory and empirical potentials. The thermodynamic stability of clusters was investigated in terms of their composition (with a focus on Si-only, C-only, and stoichiometric) and shape (spherical vs. planar), as a function of the cluster size (n). Our results suggest that large Si-only clusters are likely unstable, and clusters are predominantly C-only for n ≤ 10 and stoichiometric for n > 10. The results imply that there is an evolution of the shape of the most stable clusters, where small clusters are stable in more spherical geometries while larger clusters are stable in more planar configurations. We also provide an estimated energy vs. size relationship, E(n), for use in future analysis.

  15. Deterministic annealing for density estimation by multivariate normal mixtures

    NASA Astrophysics Data System (ADS)

    Kloppenburg, Martin; Tavan, Paul

    1997-03-01

    An approach to maximum-likelihood density estimation by mixtures of multivariate normal distributions for large high-dimensional data sets is presented. Conventionally that problem is tackled by notoriously unstable expectation-maximization (EM) algorithms. We remove these instabilities by the introduction of soft constraints, enabling deterministic annealing. Our developments are motivated by the proof that algorithmically stable fuzzy clustering methods that are derived from statistical physics analogs are special cases of EM procedures.
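
    A minimal 1-D illustration of deterministic annealing applied to EM for a two-component normal mixture is sketched below; the data, annealing schedule and iteration counts are illustrative choices, not the authors' formulation.

```python
import numpy as np

# Deterministic annealing EM: responsibilities are computed at inverse
# temperature beta and beta is ramped toward 1, smoothing the likelihood
# surface early on and avoiding poor local optima.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])

mu, sigma, pi = np.array([-0.1, 0.1]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for beta in np.linspace(0.1, 1.0, 30):            # annealing schedule
    for _ in range(20):                           # EM steps at fixed beta
        logp = (np.log(pi) - np.log(sigma)
                - 0.5 * ((x[:, None] - mu) / sigma) ** 2)
        r = np.exp(beta * (logp - logp.max(axis=1, keepdims=True)))
        r /= r.sum(axis=1, keepdims=True)         # annealed responsibilities
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
print(mu.round(2), sigma.round(2), pi.round(2))   # expect means near -2 and 3
```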

  16. Application of adaptive filters in denoising magnetocardiogram signals

    NASA Astrophysics Data System (ADS)

    Khan, Pathan Fayaz; Patel, Rajesh; Sengottuvel, S.; Saipriya, S.; Swain, Pragyna Parimita; Gireesan, K.

    2017-05-01

    Magnetocardiography (MCG) is the measurement of weak magnetic fields from the heart using Superconducting QUantum Interference Devices (SQUIDs). Though the measurements are performed inside magnetically shielded rooms (MSR) to reduce external electromagnetic disturbances, interferences caused by sources inside the shielded room cannot be attenuated this way. The work presented here reports the application of adaptive filters to denoise MCG signals. Two adaptive noise cancellation approaches, namely the least mean squares (LMS) algorithm and the recursive least squares (RLS) algorithm, are applied to denoise MCG signals and the results are compared. It is found that both algorithms effectively remove noisy wiggles from MCG traces, significantly improving the quality of the cardiac features. The calculated signal-to-noise ratio (SNR) for the denoised MCG traces is found to be slightly higher for the LMS algorithm than for the RLS algorithm. The results encourage the use of adaptive techniques to suppress noise due to the power line frequency and its harmonics, which occur frequently in biomedical measurements.
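
    As a rough illustration of the LMS branch of this comparison, the sketch below cancels a synthetic interference from a primary channel using a reference noise input; the signal and channel models are stand-ins, not MCG data.

```python
import numpy as np

# LMS adaptive noise canceller: filter the reference noise to predict the
# interference in the primary channel; the error output is the denoised signal.
rng = np.random.default_rng(0)
n, taps, mu = 5000, 8, 0.01

noise = rng.normal(size=n)                                # reference input
interference = np.convolve(noise, [0.8, -0.3, 0.2])[:n]   # causal FIR channel
clean = np.sin(2 * np.pi * 0.01 * np.arange(n))           # "cardiac" stand-in
primary = clean + interference

w = np.zeros(taps)
out = np.zeros(n)
for i in range(taps - 1, n):
    xv = noise[i - taps + 1:i + 1][::-1]   # most recent samples first
    e = primary[i] - w @ xv                # error = primary - noise estimate
    w += mu * e * xv                       # LMS weight update
    out[i] = e
print("residual interference power:", np.var(out[taps:] - clean[taps:]))
```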

  17. Evolving biomarkers improve prediction of long-term mortality in patients with stable coronary artery disease: the BIO-VILCAD score.

    PubMed

    Kleber, M E; Goliasch, G; Grammer, T B; Pilz, S; Tomaschitz, A; Silbernagel, G; Maurer, G; März, W; Niessner, A

    2014-08-01

    Algorithms to predict the future long-term risk of patients with stable coronary artery disease (CAD) are rare. The VIenna and Ludwigshafen CAD (VILCAD) risk score was one of the first scores specifically tailored for this clinically important patient population. The aim of this study was to refine risk prediction in stable CAD by creating a new prediction model encompassing various pathophysiological pathways. Therefore, we assessed the predictive power of 135 novel biomarkers for long-term mortality in patients with stable CAD. We included 1275 patients with stable CAD from the LUdwigshafen RIsk and Cardiovascular health study with a median follow-up of 9.8 years to investigate whether the predictive power of the VILCAD score could be improved by the addition of novel biomarkers. Additional biomarkers were selected in a bootstrapping procedure based on Cox regression to determine the most informative predictors of mortality. The final multivariable model encompassed nine clinical and biochemical markers: age, sex, left ventricular ejection fraction (LVEF), heart rate, N-terminal pro-brain natriuretic peptide, cystatin C, renin, 25OH-vitamin D3 and haemoglobin A1c. The extended VILCAD biomarker score achieved a significantly improved C-statistic (0.78 vs. 0.73; P = 0.035) and net reclassification index (14.9%; P < 0.001) compared to the original VILCAD score. Omitting LVEF, which might not be readily measurable in clinical practice, slightly reduced the accuracy of the new BIO-VILCAD score but still significantly improved risk classification (net reclassification improvement 12.5%; P < 0.001). The VILCAD biomarker score based on routine parameters complemented by novel biomarkers outperforms previous risk algorithms and allows more accurate classification of patients with stable CAD, enabling physicians to choose more personalized treatment regimens for their patients.

  18. Layer detection and snowpack stratigraphy characterisation from digital penetrometer signals

    NASA Astrophysics Data System (ADS)

    Floyer, James Antony

    Forecasting for slab avalanches benefits from precise measurements of snow stratigraphy. Snow penetrometers offer the possibility of providing detailed information about snowpack structure; however, their use has yet to be adopted by avalanche forecasting operations in Canada. A manually driven, variable-rate force-resistance penetrometer is tested for its ability to measure snowpack information suitable for avalanche forecasting and for spatial variability studies of snowpack properties. Following modifications, weak layers 5 mm thick are reliably detected from the penetrometer signals. Rate effects are investigated and found to be insignificant for push velocities between 0.5 and 100 cm s-1 in dry snow. An analysis of snow deformation below the penetrometer tip is presented using particle image velocimetry, and two zones associated with particle deflection are identified. The compacted zone is a region of densified snow that is pushed ahead of the penetrometer tip; the deformation zone is a broader zone surrounding the compacted zone, where deformation is in compression and in shear. Initial formation of the compacted zone is responsible for pronounced force spikes in the penetrometer signal. A layer-tracing algorithm for tracing weak layers, crusts and interfaces across transects or grids of penetrometer profiles is presented. This algorithm uses Wiener spiking deconvolution to track a portion of the signal, manually identified as a layer in one profile, across to an adjacent profile. Layer tracing is found to be most effective for crusts and prominent weak layers, although weak layers close to crusts were not well traced. A framework for extending this method to detect weak layers with no prior knowledge of their existence is also presented. A study relating penetrometer signals to the fracture character of layers identified in compression tests is presented, including a multivariate model that distinguishes between sudden and other fracture characters 80% of the time. Transects of penetrometer profiles are presented over several alpine terrain features commonly associated with spatial variability of snowpack properties. Physical processes relating to the variability of certain snowpack properties revealed in the transects are discussed. The importance of characteristic signatures for training avalanche practitioners to recognise potentially unstable terrain is also discussed.

  19. Global muscle dysfunction as a risk factor of readmission to hospital due to COPD exacerbations.

    PubMed

    Vilaró, Jordi; Ramirez-Sarmiento, Alba; Martínez-Llorens, Juana M A; Mendoza, Teresa; Alvarez, Miguel; Sánchez-Cayado, Natalia; Vega, Angeles; Gimeno, Elena; Coronell, Carlos; Gea, Joaquim; Roca, Josep; Orozco-Levi, Mauricio

    2010-12-01

    Exacerbations of chronic obstructive pulmonary disease (COPD) are associated with several modifiable (sedentary lifestyle, smoking, malnutrition, hypoxemia) and non-modifiable (age, co-morbidities, severity of pulmonary impairment, respiratory infections) risk factors. We hypothesise that most of these risk factors may have converging and deleterious effects on both respiratory and peripheral muscle function in COPD patients. A multicentre study was carried out in 121 COPD patients (92% males, 63 ± 11 yr, FEV1 49 ± 17% pred). Assessments included anthropometrics, lung function, body composition using bioelectrical impedance analysis (BIA), and global muscle function (peripheral muscle strength (dominant and non-dominant hand grip strength, HGS), and inspiratory (PImax) and expiratory (PEmax) muscle strength). GOLD stage, clinical status (stable vs. non-stable) and both current and past hospital admissions due to COPD exacerbations were included as covariates in the analyses. Respiratory and peripheral muscle weakness was observed in all subsets of patients and was significantly associated with both current and past hospitalisations. Patients with a history of multiple admissions showed increased global muscle weakness after adjusting for FEV1 (PEmax, OR = 6.8, p < 0.01; PImax, OR = 2.9, p < 0.05; HGSd, OR = 2.4, and HGSnd, OR = 2.6, p = 0.05). Moreover, a significant increase in both respiratory and peripheral muscle weakness, after adjusting for FEV1, was associated with current acute exacerbations. Muscle dysfunction, adjusted for GOLD stage, is associated with an increased risk of hospital admission due to acute episodes of exacerbation of the disease, and current exacerbations further deteriorate muscle function. Copyright © 2010 Elsevier Ltd. All rights reserved.

  20. Magnetic localization and orientation of the capsule endoscope based on a random complex algorithm.

    PubMed

    He, Xiaoqi; Zheng, Zizhao; Hu, Chao

    2015-01-01

    The development of the capsule endoscope has made it possible to examine the whole gastrointestinal tract without much pain. However, some important problems remain to be solved, one of which is the localization of the capsule. Currently, magnetic positioning technology is a suitable method for capsule localization, and it depends on a reliable system and algorithm. In this paper, based on the magnetic dipole model and a magnetic sensor array, we propose a nonlinear optimization approach using a random complex algorithm, applied to the nonlinear function of the dipole, to determine the three-dimensional position parameters and two-dimensional direction parameters. The stability and noise robustness of the algorithm are compared with those of the Levenberg-Marquardt algorithm. The simulation and experiment results show that, with respect to the error level of the initial guess of the magnet location, the random complex algorithm is more accurate, more stable, and has a higher "denoising" capacity, with a larger acceptable range of initial guess values.

  1. Accelerated Dimension-Independent Adaptive Metropolis

    DOE PAGES

    Chen, Yuxin; Keyes, David E.; Law, Kody J.; ...

    2016-10-27

    This work describes improvements from algorithmic and architectural means to black-box Bayesian inference over high-dimensional parameter spaces. The well-known adaptive Metropolis (AM) algorithm [33] is extended herein to scale asymptotically uniformly with respect to the underlying parameter dimension for Gaussian targets, by respecting the variance of the target. The resulting algorithm, referred to as the dimension-independent adaptive Metropolis (DIAM) algorithm, also shows improved performance with respect to adaptive Metropolis on non-Gaussian targets. This algorithm is further improved, and the possibility of probing high-dimensional (with dimension d ≥ 1000) targets is enabled, via GPU-accelerated numerical libraries and periodically synchronized concurrent chains (justified a posteriori). Asymptotically in dimension, this GPU implementation exhibits a factor of four improvement versus a competitive CPU-based Intel MKL parallel version alone. Strong scaling to concurrent chains is exhibited, through a combination of longer time per sample batch (weak scaling) and yet fewer necessary samples to convergence. The algorithm performance is illustrated on several Gaussian and non-Gaussian target examples, in which the dimension may be in excess of one thousand.
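
    For orientation, the baseline adaptive Metropolis idea that DIAM extends can be sketched as follows (a classic Haario-style covariance adaptation on a toy Gaussian target; the tuning constants are illustrative and this is not the DIAM algorithm itself):

```python
import numpy as np

# Adaptive Metropolis: the Gaussian proposal covariance is periodically
# re-estimated from the chain history, scaled by the classic 2.38^2/d factor.
rng = np.random.default_rng(0)
d, n_steps = 10, 20000
log_target = lambda x: -0.5 * np.sum(x**2)        # standard normal target

x = np.zeros(d)
chain = np.zeros((n_steps, d))
cov = 0.1 * np.eye(d)
for t in range(n_steps):
    if t > 1000 and t % 100 == 0:                 # periodic covariance update
        cov = np.cov(chain[:t].T) * (2.38**2 / d) + 1e-8 * np.eye(d)
    prop = rng.multivariate_normal(x, cov)
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        x = prop                                   # Metropolis accept
    chain[t] = x
print("sample variance per coordinate:", chain[5000:].var(axis=0).round(2))
```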

  2. A Direction of Arrival Estimation Algorithm Based on Orthogonal Matching Pursuit

    NASA Astrophysics Data System (ADS)

    Tang, Junyao; Cao, Fei; Liu, Lipeng

    2018-02-01

    In order to address the weak ability of anti-radiation missiles against active decoys in modern electronic warfare, a direction of arrival estimation algorithm based on orthogonal matching pursuit is proposed in this paper. The algorithm adopts compressed sensing: array antennas receive the signals, a sparse representation of the signals is obtained, and a corresponding sensing matrix is designed. The signal is then reconstructed with the orthogonal matching pursuit algorithm to estimate the optimal solution. At the same time, the error of the whole measurement system is analyzed and simulated, and the validity of the algorithm is verified. The algorithm greatly reduces the measurement time, the amount of equipment and the total computation, and accurately estimates the angle and strength of the incoming signal. This technology can effectively improve the angular resolution of the missile, which is of reference significance for research on countering active decoys.
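
    A minimal version of the orthogonal matching pursuit step at the heart of such a scheme is sketched below; the dictionary, sparsity level and test signal are illustrative, not the paper's array model.

```python
import numpy as np

# Orthogonal matching pursuit: greedily pick the dictionary column most
# correlated with the residual, then re-fit by least squares on the support.
def omp(A, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 256))
A /= np.linalg.norm(A, axis=0)                       # unit-norm atoms
x_true = np.zeros(256)
x_true[[10, 50, 200]] = [1.0, -2.0, 0.5]             # 3-sparse signal
x_hat = omp(A, A @ x_true, k=3)
print("recovered support:", np.nonzero(x_hat)[0])
```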

  3. A Circuit-Based Neural Network with Hybrid Learning of Backpropagation and Random Weight Change Algorithms

    PubMed Central

    Yang, Changju; Kim, Hyongsuk; Adhikari, Shyam Prasad; Chua, Leon O.

    2016-01-01

    A hybrid learning method combining software-based backpropagation (BP) learning and hardware-based random weight change (RWC) learning is proposed for the development of circuit-based neural networks. Backpropagation is known as one of the most efficient learning algorithms, but its weak point is that hardware implementation is extremely difficult. The RWC algorithm, which is very easy to implement in hardware, takes too many iterations to learn. The proposed learning algorithm is a hybrid of the two: the main learning is first performed with a software version of the BP algorithm, and the learned weights are then transplanted onto a hardware version of the neural circuit. At the time of weight transplantation, a significant output error can occur due to the characteristic differences between software and hardware. In the proposed method, this error is reduced via complementary learning with the RWC algorithm, implemented in simple hardware. The usefulness of the proposed hybrid learning system is verified via simulations on several classical learning problems. PMID:28025566
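
    A simplified accept/reject variant of the RWC rule can be sketched as follows; the tiny regression task is an illustrative stand-in, and the original RWC rule keeps re-applying a perturbation while the error decreases rather than redrawing it every step.

```python
import numpy as np

# Random weight change (simplified): perturb all weights by +/- delta at
# random, keep the change if the error drops, otherwise undo it.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.7])            # target weights to recover

w, delta = np.zeros(3), 0.01
err = np.mean((X @ w - y) ** 2)
for _ in range(20000):
    step = delta * rng.choice([-1.0, 1.0], size=w.shape)
    new_err = np.mean((X @ (w + step) - y) ** 2)
    if new_err < err:                         # keep only improving changes
        w, err = w + step, new_err
print(w.round(2), f"mse={err:.5f}")
```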

  4. Accelerated Dimension-Independent Adaptive Metropolis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yuxin; Keyes, David E.; Law, Kody J.

    This work describes improvements from algorithmic and architectural means to black-box Bayesian inference over high-dimensional parameter spaces. The well-known adaptive Metropolis (AM) algorithm [33] is extended herein to scale asymptotically uniformly with respect to the underlying parameter dimension for Gaussian targets, by respecting the variance of the target. The resulting algorithm, referred to as the dimension-independent adaptive Metropolis (DIAM) algorithm, also shows improved performance with respect to adaptive Metropolis on non-Gaussian targets. This algorithm is further improved, and the possibility of probing high-dimensional (with dimension d ≥ 1000) targets is enabled, via GPU-accelerated numerical libraries and periodically synchronized concurrent chains (justified a posteriori). Asymptotically in dimension, this GPU implementation exhibits a factor of four improvement versus a competitive CPU-based Intel MKL parallel version alone. Strong scaling to concurrent chains is exhibited, through a combination of longer time per sample batch (weak scaling) and yet fewer necessary samples to convergence. The algorithm performance is illustrated on several Gaussian and non-Gaussian target examples, in which the dimension may be in excess of one thousand.

  5. Active Learning Using Hint Information.

    PubMed

    Li, Chun-Liang; Ferng, Chun-Sung; Lin, Hsuan-Tien

    2015-08-01

    The abundance of real-world data and limited labeling budget calls for active learning, an important learning paradigm for reducing human labeling efforts. Many recently developed active learning algorithms consider both uncertainty and representativeness when making querying decisions. However, exploiting representativeness with uncertainty concurrently usually requires tackling sophisticated and challenging learning tasks, such as clustering. In this letter, we propose a new active learning framework, called hinted sampling, which takes both uncertainty and representativeness into account in a simpler way. We design a novel active learning algorithm within the hinted sampling framework with an extended support vector machine. Experimental results validate that the novel active learning algorithm can result in a better and more stable performance than that achieved by state-of-the-art algorithms. We also show that the hinted sampling framework allows improving another active learning algorithm designed from the transductive support vector machine.

  6. Algorithms for differentiating between images of heterogeneous tissue across fluorescence microscopes.

    PubMed

    Chitalia, Rhea; Mueller, Jenna; Fu, Henry L; Whitley, Melodi Javid; Kirsch, David G; Brown, J Quincy; Willett, Rebecca; Ramanujam, Nimmi

    2016-09-01

    Fluorescence microscopy can be used to acquire real-time images of tissue morphology and with appropriate algorithms can rapidly quantify features associated with disease. The objective of this study was to assess the ability of various segmentation algorithms to isolate fluorescent positive features (FPFs) in heterogeneous images and identify an approach that can be used across multiple fluorescence microscopes with minimal tuning between systems. Specifically, we show a variety of image segmentation algorithms applied to images of stained tumor and muscle tissue acquired with 3 different fluorescence microscopes. Results indicate that a technique called maximally stable extremal regions followed by thresholding (MSER + Binary) yielded the greatest contrast in FPF density between tumor and muscle images across multiple microscopy systems.

  7. A Random Forest-based ensemble method for activity recognition.

    PubMed

    Feng, Zengtao; Mo, Lingfei; Li, Meng

    2015-01-01

    This paper presents a multi-sensor ensemble approach to human physical activity (PA) recognition using random forests. We designed an ensemble learning algorithm that integrates several independent Random Forest classifiers, each based on a different sensor feature set, to build a more stable, more accurate and faster classifier for human activity recognition. To evaluate the algorithm, PA data collected from PAMAP (Physical Activity Monitoring for Aging People), a standard, publicly available database, were used for training and testing. The experimental results show that the algorithm correctly recognizes 19 PA types with an accuracy of 93.44%, while training is faster than with comparable methods. The ensemble classifier system based on the Random Forest (RF) algorithm achieves high recognition accuracy and fast computation.
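
    A toy version of this per-sensor ensemble with majority voting is sketched below; the synthetic data and the three "sensor" feature slices stand in for the PAMAP streams.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# One random forest per sensor feature set, combined by majority vote.
X, y = make_classification(n_samples=600, n_features=12, n_informative=8,
                           n_classes=3, random_state=0)
sensor_slices = [slice(0, 4), slice(4, 8), slice(8, 12)]   # three "sensors"

forests = [RandomForestClassifier(n_estimators=100, random_state=0)
           .fit(X[:400, s], y[:400]) for s in sensor_slices]
votes = np.stack([f.predict(X[400:, s]) for f, s in zip(forests, sensor_slices)])
y_hat = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("ensemble accuracy:", (y_hat == y[400:]).mean())
```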

  8. Numerical investigation of field enhancement by metal nano-particles using a hybrid FDTD-PSTD algorithm.

    PubMed

    Pernice, W H; Payne, F P; Gallagher, D F

    2007-09-03

    We present a novel numerical scheme for the simulation of the field enhancement by metal nano-particles in the time domain. The algorithm is based on a combination of the finite-difference time-domain method and the pseudo-spectral time-domain method for dispersive materials. The hybrid solver leads to an efficient subgridding algorithm that does not suffer from spurious field spikes as do FDTD schemes. Simulation of the field enhancement by gold particles shows the expected exponential field profile. The enhancement factors are computed for single particles and particle arrays. Due to the geometry conforming mesh the algorithm is stable for long integration times and thus suitable for the simulation of resonance phenomena in coupled nano-particle structures.

  9. Conformational stability as a design target to control protein aggregation.

    PubMed

    Costanzo, Joseph A; O'Brien, Christopher J; Tiller, Kathryn; Tamargo, Erin; Robinson, Anne Skaja; Roberts, Christopher J; Fernandez, Erik J

    2014-05-01

    Non-native protein aggregation is a prevalent problem occurring in many biotechnological manufacturing processes and can compromise the biological activity of the target molecule or induce an undesired immune response. Additionally, some non-native aggregation mechanisms lead to amyloid fibril formation, which can be associated with debilitating diseases. For natively folded proteins, partial or complete unfolding is often required to populate aggregation-prone conformational states, and therefore one proposed strategy to mitigate aggregation is to increase the free energy for unfolding (ΔGunf) prior to aggregation. A computational design approach was tested using human γD crystallin (γD-crys) as a model multi-domain protein. Two mutational strategies were tested for their ability to reduce/increase aggregation rates by increasing/decreasing ΔGunf: stabilizing the less stable domain and stabilizing the domain-domain interface. The computational protein design algorithm, RosettaDesign, was implemented to identify point variants. The results showed that although the predicted free energies were only weakly correlated with the experimental ΔGunf values, increased/decreased aggregation rates for γD-crys correlated reasonably well with decreases/increases in experimental ΔGunf, illustrating improved conformational stability as a possible design target to mitigate aggregation. However, the results also illustrate that conformational stability is not the sole design factor controlling aggregation rates of natively folded proteins.

  10. Maximal switchability of centralized networks

    NASA Astrophysics Data System (ADS)

    Vakulenko, Sergei; Morozov, Ivan; Radulescu, Ovidiu

    2016-08-01

    We consider continuous-time Hopfield-like recurrent networks as dynamical models for gene regulation and neural networks. We are interested in networks that contain n high-degree nodes preferentially connected to a large number of N_s weakly connected satellites, a property that we call n/N_s-centrality. If the hub dynamics is slow, we obtain that the large-time network dynamics is completely defined by the hub dynamics. Moreover, such networks are maximally flexible and switchable, in the sense that they can switch from a globally attractive rest state to any structurally stable dynamics when the response time of a special controller hub is changed. In particular, we show that a decrease of the controller hub response time can lead to a sharp variation in the network attractor structure: we can obtain a set of new local attractors, whose number can increase exponentially with N, the total number of nodes of the network. These new attractors can be periodic or even chaotic. We provide an algorithm which allows us to design networks with the desired switching properties, or to learn them from time series, by adjusting the interactions between hubs and satellites. Such switchable networks could be used as models for context-dependent adaptation in functional genetics or as models for cognitive functions in neuroscience.

  11. Design of handwriting drawing board based on common copper clad laminate

    NASA Astrophysics Data System (ADS)

    Wang, Hongyuan; Gao, Wenzhi; Wang, Yuan

    2015-02-01

    A handwriting drawing board is not only a device for writing and drawing, but also a means of measuring and processing weak signals. This design adopts an 8051 single-chip microprocessor as the main controller. It applies a constant-current source[1][2] to the copper plate and collects the voltage value according to the resistance-divider effect. The signal is amplified with the low-noise, high-precision amplifier[3] AD620, placed in a low-impedance, anti-interference pen, and converted from analog to digital by an 11-channel, 12-bit A/D converter, the TLC2543. An averaging filter algorithm effectively improves the measurement accuracy, reduces the error and makes the collected voltage signal more stable. The accurate position is detected by scanning the horizontal and vertical coordinates with the analog switch via the internal bridge of an L298 module, which can change the direction of the X-Y axis signal scan. A DM12864 display is used as the man-machine interface, and this user-oriented design is convenient for man-machine communication. The collecting system has high accuracy, high stability and strong anti-interference capability; it is easy to control and leaves ample room for future development.
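
    The averaging filter described above amounts to a simple moving average over recent ADC samples; a toy sketch (with made-up 12-bit counts) follows.

```python
from collections import deque

# Moving-average filter: keep the last n ADC readings and report their mean,
# which suppresses random noise in the measured coordinate voltage.
class AverageFilter:
    def __init__(self, n=8):
        self.buf = deque(maxlen=n)

    def update(self, sample):
        self.buf.append(sample)
        return sum(self.buf) / len(self.buf)

filt = AverageFilter(n=8)
for raw in [2048, 2051, 2046, 2050, 2049, 2047, 2052, 2048]:
    smoothed = filt.update(raw)
print("filtered X-axis value:", smoothed)
```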

  12. A Method to Retrieve Rainfall Rate Over Land from TRMM Microwave Imager Observations

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Over tropical land regions, rain rate maxima in mesoscale convective systems revealed by the Precipitation Radar (PR) flown on the Tropical Rainfall Measuring Mission (TRMM) satellite are found to correspond to thunderstorms, i.e., Cbs. These Cbs appear as minima in the 85 GHz brightness temperature, T85, observed by the TRMM Microwave Imager (TMI) radiometer. Because the magnitudes of TMI observations do not satisfactorily discriminate between convective and stratiform rain, we developed a different TMI discrimination method. In this method, two types of Cbs, strong and weak, are inferred from the Laplacian of T85 at its minima. To retrieve rain rate where T85 is less than 270 K, a weak (background) rain rate is deduced from the T85 observations. Furthermore, over a circular area of 10 km radius centered at the location of each T85 minimum, an additional Cb component of rain rate is added to the background rain rate. This Cb component is estimated with the help of (T19-T37) and T85 observations. Our algorithm is first calibrated with PR rain rate measurements from 20 MCS rain events. After calibration, the method is applied to TMI data from several tropical land regions. With the help of the PR observations, we show that the spatial distribution and intensity of rain rate over land estimated by our algorithm are better than those given by the current TMI Version-5 algorithm. For this reason, our algorithm may be used to improve the current state of rain retrievals over land.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Xiao; Blazek, Jonathan A.; McEwen, Joseph E.

    Cosmological perturbation theory is a powerful tool to predict the statistics of large-scale structure in the weakly non-linear regime, but even at 1-loop order it results in computationally expensive mode-coupling integrals. Here we present a fast algorithm for computing 1-loop power spectra of quantities that depend on the observer's orientation, thereby generalizing the FAST-PT framework (McEwen et al., 2016) that was originally developed for scalars such as the matter density. This algorithm works for an arbitrary input power spectrum and substantially reduces the time required for numerical evaluation. We apply the algorithm to four examples: intrinsic alignments of galaxies in the tidal torque model; the Ostriker-Vishniac effect; the secondary CMB polarization due to baryon flows; and the 1-loop matter power spectrum in redshift space. Code implementing this algorithm and these applications is publicly available at https://github.com/JoeMcEwen/FAST-PT.

  14. An Uncertainty Quantification Framework for Remote Sensing Retrievals

    NASA Astrophysics Data System (ADS)

    Braverman, A. J.; Hobbs, J.

    2017-12-01

    Remote sensing data sets produced by NASA and other space agencies are the result of complex retrieval algorithms that infer geophysical state from observed radiances. The processing must keep up with the downlinked data flow, which necessitates computational compromises that affect the accuracy of the retrieved estimates. The algorithms are also limited by imperfect knowledge of physics and of the ancillary inputs they require. All of this contributes to uncertainties that are generally not rigorously quantified by stepping outside the assumptions that underlie the retrieval methodology. In this talk we discuss a practical framework for uncertainty quantification that can be applied to a variety of remote sensing retrieval algorithms. Ours is a statistical approach that uses Monte Carlo simulation to approximate the sampling distribution of the retrieved estimates. We will discuss the strengths and weaknesses of this approach, and provide a case-study example from the Orbiting Carbon Observatory 2 mission.
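
    The statistical idea can be sketched in a few lines: perturb synthetic radiances with instrument noise many times, re-run the retrieval on each draw, and summarize the resulting sampling distribution. The forward and retrieve callables below are hypothetical stand-ins for a mission's forward model and retrieval algorithm.

        import numpy as np

        def mc_retrieval_uncertainty(true_state, forward, retrieve,
                                     noise_cov, n=1000, seed=0):
            """Monte Carlo approximation of a retrieval's sampling
            distribution: returns (bias, spread) of the estimates.
            `forward` and `retrieve` are hypothetical callables."""
            rng = np.random.default_rng(seed)
            y0 = np.asarray(forward(true_state))
            chol = np.linalg.cholesky(noise_cov)
            draws = np.array([retrieve(y0 + chol @ rng.standard_normal(len(y0)))
                              for _ in range(n)])
            return draws.mean(axis=0) - true_state, draws.std(axis=0)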

  15. A numerical algorithm for MHD of free surface flows at low magnetic Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Samulyak, Roman; Du, Jian; Glimm, James; Xu, Zhiliang

    2007-10-01

    We have developed a numerical algorithm and computational software for the study of magnetohydrodynamics (MHD) of free surface flows at low magnetic Reynolds numbers. The governing system of equations is a coupled hyperbolic-elliptic system in moving and geometrically complex domains. The numerical algorithm employs the method of front tracking and the Riemann problem for material interfaces, second-order Godunov-type hyperbolic solvers, and the embedded boundary method for the elliptic problem in complex domains. The numerical algorithm has been implemented as an MHD extension of FronTier, a hydrodynamic code with free interface support. The code is applicable to numerical simulations of free surface flows of conductive liquids or weakly ionized plasmas. It has been validated by comparing numerical simulations of a liquid metal jet in a non-uniform magnetic field with experiments and theory. Simulations of the Muon Collider/Neutrino Factory target are also discussed.

  16. Radiation-MHD Simulations of Pillars and Globules in HII Regions

    NASA Astrophysics Data System (ADS)

    Mackey, J.

    2012-07-01

    Implicit and explicit raytracing-photoionisation algorithms have been implemented in the author's radiation-magnetohydrodynamics code. The algorithms are described briefly and their efficiency and parallel scaling are investigated. The implicit algorithm is more efficient for calculations where ionisation fronts have very supersonic velocities, and the explicit algorithm is favoured in the opposite limit because of its better parallel scaling. The implicit method is used to investigate the effects of initially uniform magnetic fields on the formation and evolution of dense pillars and cometary globules at the boundaries of HII regions. It is shown that for weak and medium field strengths an initially perpendicular field is swept into alignment with the pillar during its dynamical evolution, matching magnetic field observations of the ‘Pillars of Creation’ in M16. A strong perpendicular magnetic field remains in its initial configuration and also confines the photoevaporation flow into a bar-shaped, dense, ionised ribbon which partially shields the ionisation front.

  17. A Graph Based Backtracking Algorithm for Solving General CSPs

    NASA Technical Reports Server (NTRS)

    Pang, Wanlin; Goodwin, Scott D.

    2003-01-01

    Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to the development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph-based backtracking algorithm called omega-CDBT, which shares the merits of both decomposition and search approaches while overcoming their weaknesses.
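
    For contrast with the structure-exploiting omega-CDBT, a plain chronological backtracking solver fits in a dozen lines; this sketch is generic and assumes binary constraints supplied as ordered pairs.

        def backtrack(assignment, variables, domains, constraints):
            """Chronological backtracking for a binary CSP.  `constraints`
            maps an ordered pair (u, v) to a compatibility test; pairs are
            keyed by assignment order.  Not omega-CDBT, which additionally
            exploits the constraint graph's structure."""
            if len(assignment) == len(variables):
                return assignment
            var = next(v for v in variables if v not in assignment)
            for value in domains[var]:
                ok = all(constraints.get((u, var), lambda a, b: True)(assignment[u], value)
                         for u in assignment)
                if ok:
                    result = backtrack({**assignment, var: value},
                                       variables, domains, constraints)
                    if result is not None:
                        return result
            return None

        # 3-colouring of a triangle as a toy instance
        variables = ["a", "b", "c"]
        domains = {v: [0, 1, 2] for v in variables}
        neq = lambda x, y: x != y
        constraints = {("a", "b"): neq, ("a", "c"): neq, ("b", "c"): neq}
        print(backtrack({}, variables, domains, constraints))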

  18. On Some Separated Algorithms for Separable Nonlinear Least Squares Problems.

    PubMed

    Gan, Min; Chen, C L Philip; Chen, Guang-Yong; Chen, Long

    2017-10-03

    For a class of nonlinear least squares problems, it is usually very beneficial to separate the variables into a linear and a nonlinear part and take full advantage of reliable linear least squares techniques. Consequently, the original problem is turned into a reduced problem that involves only the nonlinear parameters. We consider in this paper four separated algorithms for such problems. The first is the variable projection (VP) algorithm with the full Jacobian matrix of Golub and Pereyra. The second and third are VP algorithms with the simplified Jacobian matrices proposed by Kaufman and by Ruano et al., respectively. The fourth uses only the gradient of the reduced problem. Monte Carlo experiments are conducted to compare the performance of these four algorithms. From the results, we find that: 1) the simplified Jacobian proposed by Ruano et al. is not a good choice for the VP algorithm and may even make it hard to converge; 2) the fourth algorithm performs moderately among the four; 3) the VP algorithm with the full Jacobian matrix performs more stably than the one with Kaufman's simplified Jacobian; and 4) the combination of the VP algorithm with the Levenberg-Marquardt method is more effective than its combination with the Gauss-Newton method.
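
    The variable projection idea itself is compact: for a separable model y ≈ Φ(θ)c, solve the linear coefficients c by linear least squares inside the residual and optimize only over the nonlinear parameters θ. The sketch below uses SciPy with finite-difference derivatives (closest in spirit to the gradient-only variant, not Golub and Pereyra's analytic Jacobian); the two-exponential model is an illustrative assumption.

        import numpy as np
        from scipy.optimize import least_squares

        def vp_residual(theta, t, y):
            """Reduced residual of the separable model
            y ~ sum_j c_j * exp(-theta_j * t): the linear c is eliminated
            by least squares, leaving a problem in theta alone."""
            A = np.exp(-np.outer(t, theta))          # basis Phi(theta)
            c, *_ = np.linalg.lstsq(A, y, rcond=None)
            return A @ c - y

        t = np.linspace(0.0, 4.0, 200)
        y = 2.0 * np.exp(-1.3 * t) + 0.5 * np.exp(-0.2 * t)
        fit = least_squares(vp_residual, x0=[1.0, 0.1], args=(t, y))
        print(fit.x)    # recovered nonlinear rates, ~[1.3, 0.2]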

  19. Comparison of Co-Temporal Modeling Algorithms on Sparse Experimental Time Series Data Sets.

    PubMed

    Allen, Edward E; Norris, James L; John, David J; Thomas, Stan J; Turkett, William H; Fetrow, Jacquelyn S

    2010-01-01

    Multiple approaches for reverse-engineering biological networks from time-series data have been proposed in the computational biology literature. These approaches can be classified by their underlying mathematical algorithms, such as Bayesian or algebraic techniques, as well as by their time paradigm, which includes next-state and co-temporal modeling. The types of biological relationships, such as parent-child or siblings, discovered by these algorithms are quite varied. It is important to understand the strengths and weaknesses of the various algorithms and time paradigms on actual experimental data. We assess how well the co-temporal implementations of three algorithms, continuous Bayesian, discrete Bayesian, and computational algebraic, can 1) identify two types of entity relationships, parent and sibling, between biological entities, 2) deal with sparse experimental time course data, and 3) handle the experimental noise seen in replicate data sets. These algorithms are evaluated, using the shuffle index metric, for how well the resulting models match literature models in terms of sibling and parent relationships. Results indicate that all three co-temporal algorithms perform well, at a statistically significant level, at finding sibling relationships, but perform relatively poorly in finding parent relationships.

  20. [MicroRNA Target Prediction Based on Support Vector Machine Ensemble Classification Algorithm of Under-sampling Technique].

    PubMed

    Chen, Zhiru; Hong, Wenxue

    2016-02-01

    Considering the low prediction accuracy on positive samples and the poor overall classification caused by the unbalanced sample data of microRNA (miRNA) targets, we propose in this paper a support vector machine (SVM)-integration of under-sampling and weight (IUSM) algorithm, an under-sampling method based on ensemble learning. The algorithm adopts SVM as the learning algorithm and AdaBoost as the integration framework, and embeds clustering-based under-sampling into the iterative process, aiming to reduce the imbalance between positive and negative samples. Meanwhile, in the process of adaptive sample-weight adjustment, the SVM-IUSM algorithm eliminates abnormal negative samples with a robust sample-weight smoothing mechanism so as to avoid over-learning. Finally, the integrated miRNA target classifier is obtained by combining multiple weak classifiers through a voting mechanism. Experiments revealed that the SVM-IUSM algorithm, compared with other algorithms on unbalanced data sets, could not only improve the accuracy on positive targets and the overall classification effect, but also enhance the generalization ability of the miRNA target classifier.
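
    A simplified sketch of the ingredients (clustering-based under-sampling plus an SVM ensemble with voting), using scikit-learn; the cluster counts, kernel choice, and plain majority vote are assumptions and do not reproduce the paper's AdaBoost weighting or weight-smoothing mechanism. It assumes the negative class outnumbers the positive one.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import SVC

        def undersampled_svm_ensemble(X, y, n_rounds=10, seed=0):
            """Train SVMs on balanced subsets (all positives plus one
            cluster representative per negative cluster); predict by
            majority vote over the ensemble."""
            pos, neg = X[y == 1], X[y == 0]
            models = []
            for r in range(n_rounds):
                km = KMeans(n_clusters=len(pos), n_init=5,
                            random_state=seed + r).fit(neg)
                neg_sub = km.cluster_centers_
                Xb = np.vstack([pos, neg_sub])
                yb = np.hstack([np.ones(len(pos)), np.zeros(len(neg_sub))])
                models.append(SVC(kernel="rbf").fit(Xb, yb))

            def predict(Xq):
                votes = np.mean([m.predict(Xq) for m in models], axis=0)
                return (votes >= 0.5).astype(int)
            return predict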

  1. A weak lensing analysis of the PLCK G100.2-30.4 cluster

    NASA Astrophysics Data System (ADS)

    Radovich, M.; Formicola, I.; Meneghetti, M.; Bartalucci, I.; Bourdin, H.; Mazzotta, P.; Moscardini, L.; Ettori, S.; Arnaud, M.; Pratt, G. W.; Aghanim, N.; Dahle, H.; Douspis, M.; Pointecouteau, E.; Grado, A.

    2015-07-01

    We present a mass estimate of the Planck-discovered cluster PLCK G100.2-30.4, derived from a weak lensing analysis of deep Subaru griz images. We perform a careful selection of the background galaxies using the multi-band imaging data, and undertake the weak lensing analysis on the deep (1 h) r-band image. The shape measurement is based on the Kaiser-Squires-Broadhurst algorithm; we adopt the PSFex software to model the point spread function (PSF) across the field and correct for this in the shape measurement. The weak lensing analysis is validated through extensive image simulations. We compare the resulting weak lensing mass profile and total mass estimate to those obtained from our re-analysis of XMM-Newton observations, derived under the hypothesis of hydrostatic equilibrium. The total integrated mass profiles agree remarkably well, within 1σ across their common radial range. A mass M500 ~ 7 × 10^14 M⊙ is derived for the cluster from our weak lensing analysis. Comparing this value to that obtained from our reanalysis of XMM-Newton data, we obtain a bias factor of (1-b) = 0.8 ± 0.1. This is compatible within 1σ with the value of (1-b) obtained in Planck 2015 from the calibration of the bias factor using newly available weak lensing reconstructed masses. Based on data collected at Subaru Telescope (University of Tokyo).

  2. A Formal Theory for Modular ERDF Ontologies

    NASA Astrophysics Data System (ADS)

    Analyti, Anastasia; Antoniou, Grigoris; Damásio, Carlos Viegas

    The success of the Semantic Web is impossible without any form of modularity, encapsulation, and access control. In an earlier paper, we extended RDF graphs with weak and strong negation, as well as derivation rules. The ERDF #n-stable model semantics of the extended RDF framework (ERDF) is defined, extending RDF(S) semantics. In this paper, we propose a framework for modular ERDF ontologies, called modular ERDF framework, which enables collaborative reasoning over a set of ERDF ontologies, while support for hidden knowledge is also provided. In particular, the modular ERDF stable model semantics of modular ERDF ontologies is defined, extending the ERDF #n-stable model semantics. Our proposed framework supports local semantics and different points of view, local closed-world and open-world assumptions, and scoped negation-as-failure. Several complexity results are provided.

  3. Sharper Graph-Theoretical Conditions for the Stabilization of Complex Reaction Networks

    PubMed Central

    Knight, Daniel; Shinar, Guy; Feinberg, Martin

    2015-01-01

    Across the landscape of all possible chemical reaction networks there is a surprising degree of stable behavior, despite what might be substantial complexity and nonlinearity in the governing differential equations. At the same time there are reaction networks, in particular those that arise in biology, for which richer behavior is exhibited. Thus, it is of interest to understand network-structural features whose presence enforces dull, stable behavior and whose absence permits the dynamical richness that might be necessary for life. We present conditions on a network’s Species-Reaction Graph that ensure a high degree of stable behavior, so long as the kinetic rate functions satisfy certain weak and natural constraints. These graph-theoretical conditions are considerably more incisive than those reported earlier. PMID:25600138

  4. Stable multi-domain spectral penalty methods for fractional partial differential equations

    NASA Astrophysics Data System (ADS)

    Xu, Qinwu; Hesthaven, Jan S.

    2014-01-01

    We propose stable multi-domain spectral penalty methods suitable for solving fractional partial differential equations with fractional derivatives of any order. First, a high order discretization is proposed to approximate fractional derivatives of any order on any given grids based on orthogonal polynomials. The approximation order is analyzed and verified through numerical examples. Based on the discrete fractional derivative, we introduce stable multi-domain spectral penalty methods for solving fractional advection and diffusion equations. The equations are discretized in each sub-domain separately and the global schemes are obtained by weakly imposing boundary and interface conditions through a penalty term. Stability of the schemes is analyzed, and numerical examples based on both uniform and nonuniform grids are considered to highlight the flexibility and high accuracy of the proposed schemes.

  5. The conditional resampling model STARS: weaknesses of the modeling concept and development

    NASA Astrophysics Data System (ADS)

    Menz, Christoph

    2016-04-01

    The Statistical Analogue Resampling Scheme (STARS) is based on a modeling concept of Werner and Gerstengarbe (1997). The model uses a conditional resampling technique to create a simulation time series from daily observations. Unlike other time series generators (such as stochastic weather generators), STARS needs only a linear regression specification of a single variable as the target condition for the resampling. Since its first implementation, the algorithm has been extended to allow for a spatially distributed trend signal and to preserve the seasonal cycle and the autocorrelation of the observation time series (Orlovsky, 2007; Orlovsky et al., 2008). This evolved version was successfully used in several climate impact studies. However, a detailed evaluation of the simulations revealed two fundamental weaknesses of the resampling technique. 1. Restricting the resampling condition to a single variable can lead to misinterpretation of the change signal of other variables when the model is applied to a multivariate time series (F. Wechsung and M. Wechsung, 2014). For example, the short-term correlation between precipitation and temperature (cooling of the near-surface air layer after a rainfall event) can be misinterpreted as a climatic change signal in the simulation series. 2. The model restricts the linear regression specification to the annual mean time series, precluding the specification of seasonally varying trends. To overcome these fundamental weaknesses, the whole algorithm was redeveloped. The poster discusses the main weaknesses of the earlier model implementation and the methods applied to overcome them in the new version. Idealized simulations based on the new model were conducted to illustrate the enhancement.

  6. Reliable and efficient a posteriori error estimation for adaptive IGA boundary element methods for weakly-singular integral equations

    PubMed Central

    Feischl, Michael; Gantner, Gregor; Praetorius, Dirk

    2015-01-01

    We consider the Galerkin boundary element method (BEM) for weakly-singular integral equations of the first-kind in 2D. We analyze some residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the frame of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence. PMID:26085698

  7. The DES Science Verification Weak Lensing Shear Catalogs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarvis, M.

    We present weak lensing shear catalogs for 139 square degrees of data taken during the Science Verification (SV) time for the new Dark Energy Camera (DECam) being used for the Dark Energy Survey (DES). We describe our object selection, point spread function estimation and shear measurement procedures using two independent shear pipelines, IM3SHAPE and NGMIX, which produce catalogs of 2.12 million and 3.44 million galaxies respectively. We also detail a set of null tests for the shear measurements and find that they pass the requirements for systematic errors at the level necessary for weak lensing science applications using the SV data. Furthermore, we discuss some of the planned algorithmic improvements that will be necessary to produce sufficiently accurate shear catalogs for the full 5-year DES, which is expected to cover 5000 square degrees.

  8. The DES Science Verification Weak Lensing Shear Catalogs

    DOE PAGES

    Jarvis, M.

    2016-05-01

    We present weak lensing shear catalogs for 139 square degrees of data taken during the Science Verification (SV) time for the new Dark Energy Camera (DECam) being used for the Dark Energy Survey (DES). We describe our object selection, point spread function estimation and shear measurement procedures using two independent shear pipelines, IM3SHAPE and NGMIX, which produce catalogs of 2.12 million and 3.44 million galaxies respectively. We also detail a set of null tests for the shear measurements and find that they pass the requirements for systematic errors at the level necessary for weak lensing science applications using the SV data. Furthermore, we discuss some of the planned algorithmic improvements that will be necessary to produce sufficiently accurate shear catalogs for the full 5-year DES, which is expected to cover 5000 square degrees.

  9. Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brabec, Jiri; Lin, Lin; Shao, Meiyue

    We present a special symmetric Lanczos algorithm and a kernel polynomial method (KPM) for approximating the absorption spectrum of molecules within the linear response time-dependent density functional theory (TDDFT) framework in the product form. In contrast to existing algorithms, the new algorithms are based on reformulating the original non-Hermitian eigenvalue problem as a product eigenvalue problem and the observation that the product eigenvalue problem is self-adjoint with respect to an appropriately chosen inner product. This allows a simple symmetric Lanczos algorithm to be used to compute the desired absorption spectrum. The use of a symmetric Lanczos algorithm only requires half of the memory compared with the nonsymmetric variant of the Lanczos algorithm. The symmetric Lanczos algorithm is also numerically more stable than the nonsymmetric version. The KPM algorithm is also presented as a low-memory alternative to the Lanczos approach, but the algorithm may require more matrix-vector multiplications in practice. We discuss the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost. Applications to a set of small and medium-sized molecules are also presented.

  10. Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT

    DOE PAGES

    Brabec, Jiri; Lin, Lin; Shao, Meiyue; ...

    2015-10-06

    We present a special symmetric Lanczos algorithm and a kernel polynomial method (KPM) for approximating the absorption spectrum of molecules within the linear response time-dependent density functional theory (TDDFT) framework in the product form. In contrast to existing algorithms, the new algorithms are based on reformulating the original non-Hermitian eigenvalue problem as a product eigenvalue problem and the observation that the product eigenvalue problem is self-adjoint with respect to an appropriately chosen inner product. This allows a simple symmetric Lanczos algorithm to be used to compute the desired absorption spectrum. The use of a symmetric Lanczos algorithm only requires half of the memory compared with the nonsymmetric variant of the Lanczos algorithm. The symmetric Lanczos algorithm is also numerically more stable than the nonsymmetric version. The KPM algorithm is also presented as a low-memory alternative to the Lanczos approach, but the algorithm may require more matrix-vector multiplications in practice. We discuss the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost. Applications to a set of small and medium-sized molecules are also presented.
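
    The symmetric Lanczos step at the heart of the records above can be sketched generically: tridiagonalize a symmetric operator and read off Ritz values and weights, from which a broadened spectrum can be assembled. This sketch uses the standard Euclidean inner product and omits reorthogonalization; the paper's method works with a product eigenproblem that is self-adjoint only under a problem-specific inner product.

        import numpy as np

        def lanczos_spectrum(apply_A, v0, k=100):
            """k-step symmetric Lanczos on a matrix-free operator apply_A.
            Returns Ritz values and their spectral weights w.r.t. v0
            (no reorthogonalization, for brevity)."""
            n = len(v0)
            V = np.zeros((n, k + 1))
            alpha, beta = np.zeros(k), np.zeros(k)
            V[:, 0] = v0 / np.linalg.norm(v0)
            for j in range(k):
                w = apply_A(V[:, j])
                if j > 0:
                    w = w - beta[j - 1] * V[:, j - 1]
                alpha[j] = V[:, j] @ w
                w = w - alpha[j] * V[:, j]
                beta[j] = np.linalg.norm(w)
                if beta[j] < 1e-12:
                    k = j + 1
                    break
                V[:, j + 1] = w / beta[j]
            T = (np.diag(alpha[:k]) + np.diag(beta[:k - 1], 1)
                 + np.diag(beta[:k - 1], -1))
            theta, S = np.linalg.eigh(T)     # Ritz values / vectors
            return theta, S[0, :] ** 2       # positions and weights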

  11. Argumentation and Self: The Enactment of Identity in "Dances with Wolves."

    ERIC Educational Resources Information Center

    Lake, Randall A.

    1997-01-01

    States several postmodernist currents of thought have rejected the modernist faith in a stable, autonomous self (and the natural culture it inhabits) as a pernicious fiction. Argues for a dialectical view of self and culture, exploring the weaknesses of both modernist and postmodernist models through an analysis of "Dances with Wolves,"…

  12. When Can Information from Ordinal Scale Variables Be Integrated?

    ERIC Educational Resources Information Center

    Kemp, Simon; Grace, Randolph C.

    2010-01-01

    Many theoretical constructs of interest to psychologists are multidimensional and derive from the integration of several input variables. We show that input variables that are measured on ordinal scales cannot be combined to produce a stable weakly ordered output variable that allows trading off the input variables. Instead a partial order is…

  13. Plume meander and dispersion in a stable boundary layer

    NASA Astrophysics Data System (ADS)

    Hiscox, April L.; Miller, David R.; Nappo, Carmen J.

    2010-11-01

    Continuous lidar measurements of elevated plume dispersion and corresponding micrometeorology data are analyzed to establish the relationship between plume behavior and nocturnal boundary layer dynamics. Contrasting nights of data from the JORNADA field campaign in the New Mexico desert are analyzed. The aerosol lidar measurements were used to separate the plume diffusion (plume spread) from plume meander (displacement). Multiresolution decomposition was used to separate the turbulence scale (<90 s) from the submesoscale (>90 s). Durations of turbulent kinetic energy stationarity and the wind steadiness were used to characterize the local scale and submesoscale turbulence. Plume meander, driven by submesoscale wind motions, was responsible for most of the total horizontal plume dispersion in weak and variable winds and strong stability. This proportion was reduced in high winds (i.e., >4 m s⁻¹) and weakly stable conditions but remained the dominant dispersion mechanism. The remainder of the plume dispersion in all cases was accounted for by internal spread of the plume, which is a small-eddy diffusion process driven by turbulence. Turbulence stationarity and the wind steadiness are demonstrated to be closely related to plume diffusion and plume meander, respectively.

  14. [Research on magnetic coupling centrifugal blood pump control based on a self-tuning fuzzy PI algorithm].

    PubMed

    Yang, Lei; Yang, Ming; Xu, Zihao; Zhuang, Xiaoqi; Wang, Wei; Zhang, Haibo; Han, Lu; Xu, Liang

    2014-10-01

    This paper reports the research and design of the control system for the magnetic coupling centrifugal blood pump developed in our laboratory, and briefly describes the structure of the pump and the principles of the body circulation model. The performance of a blood pump depends not only on materials and structure but also on the control algorithm. We studied a double-loop current control algorithm for the brushless DC motor. To make the algorithm adjust its parameters under changing conditions, we used a self-tuning fuzzy PI control algorithm and detail how the fuzzy rules were designed. We mainly used Matlab Simulink to simulate the motor control system and test the performance of the algorithm, and we briefly introduce how these algorithms are implemented in the hardware system. Finally, by building the platform and conducting experiments, we show that the self-tuning fuzzy PI control algorithm greatly improves both the dynamic and static performance of the blood pump and makes the motor speed and pump flow stable and adjustable.
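
    A minimal sketch of the self-tuning idea (not the paper's rule base): a PI controller whose gains are nudged by coarse fuzzy grades of the error and its rate of change. The membership functions, rule table, and adaptation steps below are illustrative assumptions.

        import numpy as np

        class FuzzyPI:
            """PI controller with fuzzy gain self-tuning (illustrative)."""
            def __init__(self, kp=1.0, ki=0.1, dt=0.01):
                self.kp, self.ki, self.dt = kp, ki, dt
                self.integral, self.prev_e = 0.0, 0.0

            @staticmethod
            def _grade(x):
                # crude 3-level fuzzification on x in [-1, 1]:
                # (negative, zero, positive) membership grades
                return np.clip([-x, 1.0 - abs(x), x], 0.0, 1.0)

            def update(self, e):
                de = (e - self.prev_e) / self.dt
                self.prev_e = e
                n_e, z_e, p_e = self._grade(np.tanh(e))
                n_d, z_d, p_d = self._grade(np.tanh(de))
                # sample rules: large |e| raises Kp; settled error raises Ki
                self.kp += 0.01 * ((n_e + p_e) - z_e)
                self.ki += 0.001 * (z_d - (n_d + p_d))
                self.integral += e * self.dt
                return self.kp * e + self.ki * self.integral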

  15. Development of autonomous multirotor platform for exploration missions

    NASA Astrophysics Data System (ADS)

    Czyba, Roman; Janik, Marcin; Kurgan, Oliver; Niezabitowski, Michał; Nocoń, Marek

    2016-06-01

    This paper outlines the development process of the unmanned multirotor aerial vehicle HF-4X, covering the design and manufacture of a semi-autonomous UAV dedicated to indoor flight and capable of stable, controllable mission flight. The micro air vehicle was designed to participate in the International Micro Air Vehicle Conference and Flight Competition. Particular attention is paid to the structure of the flight control system, the stabilization algorithms, the analysis of IMU sensors, and the sensor fusion algorithms.

  16. Development of autonomous multirotor platform for exploration missions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czyba, Roman; Janik, Marcin; Kurgan, Oliver

    This paper outlines the development process of the unmanned multirotor aerial vehicle HF-4X, covering the design and manufacture of a semi-autonomous UAV dedicated to indoor flight and capable of stable, controllable mission flight. The micro air vehicle was designed to participate in the International Micro Air Vehicle Conference and Flight Competition. Particular attention is paid to the structure of the flight control system, the stabilization algorithms, the analysis of IMU sensors, and the sensor fusion algorithms.

  17. Stable computations with flat radial basis functions using vector-valued rational approximations

    NASA Astrophysics Data System (ADS)

    Wright, Grady B.; Fornberg, Bengt

    2017-02-01

    One commonly finds in applications of smooth radial basis functions (RBFs) that scaling the kernels so they are 'flat' leads to smaller discretization errors. However, the direct numerical approach for computing with flat RBFs (RBF-Direct) is severely ill-conditioned. We present an algorithm for bypassing this ill-conditioning that is based on a new method for rational approximation (RA) of vector-valued analytic functions with the property that all components of the vector share the same singularities. This new algorithm (RBF-RA) is more accurate, robust, and easier to implement than the Contour-Padé method, which is similarly based on vector-valued rational approximation. In contrast to the stable RBF-QR and RBF-GA algorithms, which are based on finding a better conditioned base in the same RBF-space, the new algorithm can be used with any type of smooth radial kernel, and it is also applicable to a wider range of tasks (including calculating Hermite type implicit RBF-FD stencils). We present a series of numerical experiments demonstrating the effectiveness of this new method for computing RBF interpolants in the flat regime. We also demonstrate the flexibility of the method by using it to compute implicit RBF-FD formulas in the flat regime and then using these for solving Poisson's equation in a 3-D spherical shell.

  18. Dynamically induced many-body localization

    NASA Astrophysics Data System (ADS)

    Choi, Soonwon; Abanin, Dmitry A.; Lukin, Mikhail D.

    2018-03-01

    We show that a quantum phase transition from ergodic to many-body localized (MBL) phases can be induced via periodic pulsed manipulation of spin systems. Such a transition is enabled by the interplay between weak disorder and slow heating rates. Specifically, we demonstrate that the Hamiltonian of a weakly disordered ergodic spin system can be effectively engineered, by using sufficiently fast coherent controls, to yield a stable MBL phase, which in turn completely suppresses the energy absorption from external control field. Our results imply that a broad class of existing many-body systems can be used to probe nonequilibrium phases of matter for a long time, limited only by coupling to external environment.

  19. Multi-Optimisation Consensus Clustering

    NASA Astrophysics Data System (ADS)

    Li, Jian; Swift, Stephen; Liu, Xiaohui

    Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.

  20. The PX-EM algorithm for fast stable fitting of Henderson's mixed model

    PubMed Central

    Foulley, Jean-Louis; Van Dyk, David A

    2000-01-01

    This paper presents procedures for implementing the PX-EM algorithm of Liu, Rubin and Wu to compute REML estimates of variance-covariance components in Henderson's linear mixed models. The class of models considered encompasses several correlated random factors having the same vector length, e.g., as in random regression models for longitudinal data analysis and in sire-maternal grandsire models for genetic evaluation. Numerical examples are presented to illustrate the procedures. Much better results in terms of convergence characteristics (number of iterations and time required for convergence) are obtained with PX-EM relative to the basic EM algorithm in the random regression case. PMID:14736399

  1. Longitudinal Control for Mengshi Autonomous Vehicle via Cloud Model

    NASA Astrophysics Data System (ADS)

    Gao, H. B.; Zhang, X. Y.; Li, D. Y.; Liu, Y. C.

    2018-03-01

    Dynamic robustness and stability control are requirements for the self-driving of autonomous vehicles. Longitudinal control of an autonomous vehicle is a key technique that has drawn the attention of industry and academia. In this paper, we present a longitudinal control algorithm based on a cloud model for the Mengshi autonomous vehicle, to ensure its dynamic stability and tracking performance. Experiments were carried out to test the implementation of the longitudinal control algorithm. Empirical results show that when the Gauss-cloud-model-based longitudinal control algorithm is applied to calculate the acceleration and the vehicle drives at different speeds, a stable longitudinal control effect is achieved.

  2. A stable compound of helium and sodium at high pressure

    DOE PAGES

    Dong, Xiao; Oganov, Artem R.; Goncharov, Alexander F.; ...

    2017-02-06

    Helium is generally understood to be chemically inert and this is due to its extremely stable closed-shell electronic configuration, zero electron affinity and an unsurpassed ionization potential. It is not known to form thermodynamically stable compounds, except a few inclusion compounds. Here, using the ab initio evolutionary algorithm USPEX and subsequent high-pressure synthesis in a diamond anvil cell, we report the discovery of a thermodynamically stable compound of helium and sodium, Na2He, which has a fluorite-type structure and is stable at pressures >113 GPa. We show that the presence of He atoms causes strong electron localization and makes this material insulating. This phase is an electride, with electron pairs localized in interstices, forming eight-centre two-electron bonds within empty Na8 cubes. As a result, we also predict the existence of Na2HeO with a similar structure at pressures above 15 GPa.

  3. A stable compound of helium and sodium at high pressure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Xiao; Oganov, Artem R.; Goncharov, Alexander F.

    Helium is generally understood to be chemically inert and this is due to its extremely stable closed-shell electronic configuration, zero electron affinity and an unsurpassed ionization potential. It is not known to form thermodynamically stable compounds, except a few inclusion compounds. Here, using the ab initio evolutionary algorithm USPEX and subsequent high-pressure synthesis in a diamond anvil cell, we report the discovery of a thermodynamically stable compound of helium and sodium, Na2He, which has a fluorite-type structure and is stable at pressures >113 GPa. We show that the presence of He atoms causes strong electron localization and makes this material insulating. This phase is an electride, with electron pairs localized in interstices, forming eight-centre two-electron bonds within empty Na8 cubes. We also predict the existence of Na2HeO with a similar structure at pressures above 15 GPa.

  4. A stable compound of helium and sodium at high pressure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Xiao; Oganov, Artem R.; Goncharov, Alexander F.

    Helium is generally understood to be chemically inert and this is due to its extremely stable closed-shell electronic configuration, zero electron affinity and an unsurpassed ionization potential. It is not known to form thermodynamically stable compounds, except a few inclusion compounds. Here, using the ab initio evolutionary algorithm USPEX and subsequent high-pressure synthesis in a diamond anvil cell, we report the discovery of a thermodynamically stable compound of helium and sodium, Na2He, which has a fluorite-type structure and is stable at pressures >113 GPa. We show that the presence of He atoms causes strong electron localization and makes this material insulating. This phase is an electride, with electron pairs localized in interstices, forming eight-centre two-electron bonds within empty Na8 cubes. As a result, we also predict the existence of Na2HeO with a similar structure at pressures above 15 GPa.

  5. Development and evaluation of an empirical diurnal sea surface temperature model

    NASA Astrophysics Data System (ADS)

    Weihs, R. R.; Bourassa, M. A.

    2013-12-01

    An innovative method is developed to determine the diurnal heating amplitude of sea surface temperatures (SSTs) using high-quality satellite SST measurements and NWP atmospheric meteorological data. The diurnal cycle results from heating that develops at the surface of the ocean under low mechanical (shear-produced) turbulence and large solar radiation absorption. During these typically calm weather conditions, the absorption of solar radiation heats the upper few meters of the ocean, which become buoyantly stable; this heating causes a temperature differential between the surface and the mixed (or bulk) layer on the order of a few degrees. Capturing the diurnal cycle has been shown to be important for a variety of applications, including surface heat flux estimates, which are underestimated when diurnal warming is neglected, and satellite and buoy calibrations, which can be complicated by the heating differential. An empirical algorithm using a pre-dawn sea surface temperature, peak solar radiation, and accumulated wind stress is used to estimate the cycle. The empirical algorithm is derived from a multistep process in which SSTs from MSG's experimental hourly SEVIRI SST data set are combined with hourly wind stress fields derived from a bulk flux algorithm. Inputs for the flux model are taken from NASA's MERRA reanalysis product. NWP inputs are necessary because they must incorporate diurnal and air-sea interactive processes, which are vital to ocean surface dynamics, at a sufficiently high temporal resolution. The MERRA winds are adjusted with CCMP winds to obtain more realistic spatial and variance characteristics, and the other atmospheric inputs (air temperature, specific humidity) are further corrected on the basis of in situ comparisons. The SSTs are fitted to a Gaussian curve (using one or two peaks), yielding a set of fit coefficients. The coefficient data are combined with accumulated wind stress and peak solar radiation to create an empirical relationship that approximates physical processes such as turbulence and the heating memory (capacity) of the ocean. Weaknesses and strengths of the model, including potential spatial biases, will be discussed.
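
    The Gaussian-fit step can be sketched with SciPy: fit the hourly SST anomaly above the pre-dawn foundation temperature to a single-peak Gaussian and keep the coefficients. The model form, synthetic data, and initial guesses are illustrative assumptions, not the paper's calibrated fit.

        import numpy as np
        from scipy.optimize import curve_fit

        def diurnal_gauss(t, amp, t_peak, width):
            """Single-peak Gaussian model of daytime warming (hours)."""
            return amp * np.exp(-0.5 * ((t - t_peak) / width) ** 2)

        hours = np.arange(24.0)
        # synthetic hourly anomalies relative to the pre-dawn SST
        anom = diurnal_gauss(hours, 1.2, 14.0, 3.0) + 0.05 * np.random.randn(24)
        (amp, t_peak, width), _ = curve_fit(diurnal_gauss, hours, anom,
                                            p0=(1.0, 13.0, 3.0))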

  6. Image contrast enhancement using adjacent-blocks-based modification for local histogram equalization

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Pan, Zhibin

    2017-11-01

    Infrared images usually have non-ideal characteristics such as weak target-to-background contrast and strong noise. Because of these characteristics, a contrast enhancement algorithm is needed to improve the visual quality of infrared images. The histogram equalization (HE) algorithm is widely used for contrast enhancement because of its effectiveness and simple implementation, but it cannot equally enhance the local contrast of an image. Local histogram equalization algorithms have proved to be effective techniques for local contrast enhancement; however, over-enhancement of noise and artifacts is easily found in images enhanced by them. In this paper, a new contrast enhancement technique based on local histogram equalization is proposed to overcome these drawbacks. The input image is segmented into three kinds of overlapped sub-blocks according to their gradients. To suppress the over-enhancement effect, the histograms of these sub-blocks are then modified using adjacent sub-blocks. We pay particular attention to improving the contrast of detail information while preserving the brightness of flat regions within the sub-blocks. It is shown that the proposed algorithm outperforms related algorithms by enhancing local contrast without introducing over-enhancement effects or additional noise.
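
    For reference, the plain block-wise local HE baseline that the paper improves on can be sketched as follows (non-overlapping blocks, no adjacent-block modification); the block size and the uint8 assumption are illustrative.

        import numpy as np

        def block_histogram_equalization(img, block=64):
            """Equalize each block of a uint8 image independently.  This is
            the baseline local HE; the paper additionally modifies each
            sub-block's histogram using its neighbours to curb
            over-enhancement."""
            out = np.empty_like(img)
            H, W = img.shape
            for i in range(0, H, block):
                for j in range(0, W, block):
                    tile = img[i:i + block, j:j + block]
                    hist = np.bincount(tile.ravel(), minlength=256)
                    cdf = hist.cumsum().astype(float)
                    lo = cdf[cdf > 0].min()
                    scale = 255.0 / max(cdf[-1] - lo, 1.0)
                    lut = ((cdf - lo) * scale).clip(0, 255).astype(np.uint8)
                    out[i:i + block, j:j + block] = lut[tile]
            return out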

  7. Halo mass and weak galaxy-galaxy lensing profiles in rescaled cosmological N-body simulations

    NASA Astrophysics Data System (ADS)

    Renneby, Malin; Hilbert, Stefan; Angulo, Raúl E.

    2018-05-01

    We investigate 3D density and weak lensing profiles of dark matter haloes predicted by a cosmology-rescaling algorithm for N-body simulations. We extend the rescaling method of Angulo & White (2010) and Angulo & Hilbert (2015) to improve its performance on intra-halo scales by using models for the concentration-mass-redshift relation based on excursion set theory. The accuracy of the method is tested with numerical simulations carried out with different cosmological parameters. We find that predictions for median density profiles are accurate to within ~5% for haloes with masses of 10^12.0-10^14.5 h^-1 M⊙ at radii 0.05 < r/r200m < 0.5, and for cosmologies with Ωm ∈ [0.15, 0.40] and σ8 ∈ [0.6, 1.0]. For larger radii, 0.5 < r/r200m < 5, the accuracy degrades to ~20%, due to inaccurate modelling of the cosmological and redshift dependence of the splashback radius. For changes in cosmology allowed by current data, the residuals decrease to ≲ 2% up to scales twice the virial radius. We illustrate the usefulness of the method by estimating the mean halo mass of a mock galaxy group sample. We find that the algorithm's accuracy is sufficient for current data. Improvements in the algorithm, particularly in the modelling of baryons, are likely required for interpreting future (dark energy task force stage IV) experiments.

  8. Parallelized seeded region growing using CUDA.

    PubMed

    Park, Seongjin; Lee, Jeongjin; Lee, Hyunna; Shin, Juneseuk; Seo, Jinwook; Lee, Kyoung Ho; Shin, Yeong-Gil; Kim, Bohyoung

    2014-01-01

    This paper presents a novel method for parallelizing the seeded region growing (SRG) algorithm using Compute Unified Device Architecture (CUDA) technology, with the intention of overcoming a theoretical weakness of the SRG algorithm: its computation time is directly proportional to the size of the segmented region. The segmentation performance of the proposed CUDA-based SRG is compared with SRG implementations on single-core CPUs, quad-core CPUs, and shader-language programming, using synthetic datasets and 20 body CT scans. Based on the experimental results, the CUDA-based SRG outperforms the other three implementations, demonstrating that it can substantially assist segmentation during massive CT screening tests.
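
    The serial algorithm whose cost the CUDA version attacks can be sketched in a few lines; each accepted pixel costs one queue operation, so runtime grows with region size. The tolerance test against the running region mean is a common SRG variant, assumed here for illustration.

        import numpy as np
        from collections import deque

        def seeded_region_grow(img, seed, tol=10.0):
            """Serial SRG: breadth-first growth from `seed`, accepting
            4-connected neighbours within `tol` of the running region mean."""
            H, W = img.shape
            mask = np.zeros((H, W), bool)
            mask[seed] = True
            q = deque([seed])
            total, count = float(img[seed]), 1
            while q:
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < H and 0 <= nx < W and not mask[ny, nx]
                            and abs(float(img[ny, nx]) - total / count) <= tol):
                        mask[ny, nx] = True
                        total += float(img[ny, nx])
                        count += 1
                        q.append((ny, nx))
            return mask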

  9. The String Stability of a Trajectory-Based Interval Management Algorithm in the Midterm Airspace

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.

    2015-01-01

    NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature ATM technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides terminal controllers with decision support tools enabling precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain a precise spacing interval behind a target aircraft. As the percentage of IM equipped aircraft increases, controllers may provide IM clearances to sequences, or strings, of IM-equipped aircraft. It is important for these strings to maintain stable performance. This paper describes an analytic analysis of the string stability of the latest version of NASA's IM algorithm and a fast-time simulation designed to characterize the string performance of the IM algorithm. The analytic analysis showed that the spacing algorithm has stable poles, indicating that a spacing error perturbation will be reduced as a function of string position. The fast-time simulation investigated IM operations at two airports using constraints associated with the midterm airspace, including limited information of the target aircraft's intended speed profile and limited information of the wind forecast on the target aircraft's route. The results of the fast-time simulation demonstrated that the performance of the spacing algorithm is acceptable for strings of moderate length; however, there is some degradation in IM performance as a function of string position.

  10. Direct Adaptive Rejection of Vortex-Induced Disturbances for a Powered SPAR Platform

    NASA Technical Reports Server (NTRS)

    VanZwieten, Tannen S.; Balas, Mark J.; VanZwieten, James H.; Driscoll, Frederick R.

    2009-01-01

    The Rapidly Deployable Stable Platform (RDSP) is a novel vessel designed to be a reconfigurable, stable at-sea platform. It consists of a detachable catamaran and spar, performing missions with the spar extending vertically below the catamaran and hoisting it completely out of the water. Multiple thrusters located along the spar allow it to be actively controlled in this configuration. A controller is presented in this work that uses an adaptive feedback algorithm in conjunction with Direct Adaptive Disturbance Rejection (DADR) to mitigate persistent, vortex-induced disturbances. Given the frequency of a disturbance, the nominal DADR scheme adaptively compensates for its unknown amplitude and phase. This algorithm is extended to adapt to a disturbance frequency that is only coarsely known by including a Phase Locked Loop (PLL). The PLL improves the frequency estimate on-line, allowing the modified controller to reduce vortex-induced motions by more than 95% using achievable thrust inputs.

  11. Design of Energy Storage Management System Based on FPGA in Micro-Grid

    NASA Astrophysics Data System (ADS)

    Liang, Yafeng; Wang, Yanping; Han, Dexiao

    2018-01-01

    An energy storage system is the core that maintains the stable operation of a smart micro-grid. To address problems of existing energy storage management systems in the micro-grid, such as low fault tolerance and a tendency to cause fluctuations in the micro-grid, a new intelligent battery management system based on a field programmable gate array (FPGA) is proposed, taking advantage of the FPGA to combine the battery management system with the intelligent micro-grid control strategy. Finally, to address the problem that inaccurate initialization of weights and thresholds leads to large errors when a neural network estimates the battery state of charge, a genetic algorithm is proposed to optimize the neural network, and experimental simulations are carried out. The experimental results show that the algorithm has high precision and helps guarantee the stable operation of the micro-grid.
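
    The GA-over-initial-weights idea can be sketched generically: evolve candidate initial weight vectors and keep the one whose trained network has the lowest validation error. The selection, crossover, and mutation details below are illustrative, and `fitness` is a hypothetical callable wrapping the state-of-charge estimator's training and validation.

        import numpy as np

        def ga_init_weights(fitness, dim, pop=30, gens=50, sigma=0.5, seed=0):
            """Toy GA over initial weight vectors (lower fitness is better):
            truncation selection, blend crossover, Gaussian mutation."""
            rng = np.random.default_rng(seed)
            P = rng.normal(0.0, sigma, (pop, dim))
            for _ in range(gens):
                scores = np.array([fitness(w) for w in P])
                elite = P[np.argsort(scores)[: pop // 2]]   # best half
                a = elite[rng.integers(0, len(elite), pop // 2)]
                b = elite[rng.integers(0, len(elite), pop // 2)]
                children = 0.5 * (a + b) + rng.normal(0.0, 0.1 * sigma, a.shape)
                P = np.vstack([elite, children])
            return P[np.argmin([fitness(w) for w in P])]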

  12. Combined effect of boundary layer recirculation factor and stable energy on local air quality in the Pearl River Delta over southern China.

    PubMed

    Li, Haowen; Wang, Baomin; Fang, Xingqin; Zhu, Wei; Fan, Qi; Liao, Zhiheng; Liu, Jian; Zhang, Asi; Fan, Shaojia

    2018-03-01

    The atmospheric boundary layer (ABL) has a significant impact on the spatial and temporal distribution of air pollutants. To gain a better understanding of how the ABL affects the variation of air pollutants, atmospheric boundary layer observations were performed at Sanshui in the Pearl River Delta (PRD) region of southern China during the winter of 2013. Two typical ABL regimes that can lead to air pollution were analyzed comparatively: the weak vertical diffusion ability type (WVDAT) and the weak horizontal transportation ability type (WHTAT). Results show that (1) the WVDAT was characterized by moderate wind speed, consistent wind direction, and a thick inversion layer at 600~1000 m above ground level (AGL), with air pollutants restricted to low altitudes by the stable atmospheric structure; (2) the WHTAT was characterized by calm wind, varied wind direction, and a shallow intense ground inversion layer, with air pollutants accumulating locally because of strong recirculation in the low ABL; and (3) the recirculation factor (RF) and stable energy (SE) proved to be good indicators of the horizontal transportation ability and the vertical diffusion ability of the atmosphere, respectively. Combined use of RF and SE can be very helpful in evaluating the air pollution potential of the ABL. Ground-based air quality data and radiosonde meteorological data from Sanshui in the Pearl River Delta showed that local air quality was poor when wind reversal was pronounced or the temperature stratification was stable. Both the horizontal and the vertical transportation ability of the local atmosphere should be taken into consideration when evaluating the local environmental bearing capacity for air pollution.

  13. Patterns of range-wide genetic variation in six North American bumble bee (Apidae: Bombus) species.

    PubMed

    Lozier, Jeffrey D; Strange, James P; Stewart, Isaac J; Cameron, Sydney A

    2011-12-01

    The increasing evidence for population declines in bumble bee (Bombus) species worldwide has accelerated research efforts to explain losses in these important pollinators. In North America, a number of once widespread Bombus species have suffered serious reductions in range and abundance, although other species remain healthy. To examine whether declining and stable species exhibit different levels of genetic diversity or population fragmentation, we used microsatellite markers to genotype populations sampled across the geographic distributions of two declining (Bombus occidentalis and Bombus pensylvanicus) and four stable (Bombus bifarius; Bombus vosnesenskii; Bombus impatiens and Bombus bimaculatus) Bombus species. Populations of declining species generally have reduced levels of genetic diversity throughout their range compared to codistributed stable species. Genetic diversity can be affected by overall range size and degree of isolation of local populations, potentially confounding comparisons among species in some cases. We find no evidence for consistent differences in gene flow among stable and declining species, with all species exhibiting weak genetic differentiation over large distances (e.g. >1000 km). Populations on islands and at high elevations experience relatively strong genetic drift, suggesting that some conditions lead to genetic isolation in otherwise weakly differentiated species. B. occidentalis and B. bifarius exhibit stronger genetic differentiation than the other species, indicating greater phylogeographic structure consistent with their broader geographic distributions across topographically complex regions of western North America. Screening genetic diversity in North American Bombus should prove useful for identifying species that warrant monitoring, and developing management strategies that promote high levels of gene flow will be a key component in efforts to maintain healthy populations. © 2011 Blackwell Publishing Ltd.

  14. Handwritten digits recognition based on immune network

    NASA Astrophysics Data System (ADS)

    Li, Yangyang; Wu, Yunhui; Jiao, Lc; Wu, Jianshe

    2011-11-01

    With the development of society, handwritten digit recognition techniques have been widely applied in production and daily life, yet handwritten digit recognition remains a difficult task in the field of pattern recognition. In this paper, a new method is presented for handwritten digit recognition. The digit samples are first preprocessed and their features extracted. Based on these features, a novel immune network classification algorithm is designed and applied to handwritten digit recognition. The proposed algorithm builds on Jerne's immune network model for feature selection and the KNN method for classification; its distinguishing characteristic is a novel network with parallel computing and learning. The performance of the proposed method is evaluated on the MNIST handwritten digit dataset and compared with other recognition algorithms: KNN, ANN, and SVM. The results show that the novel immune-network-based classification algorithm gives promising performance and stable behavior for handwritten digit recognition.
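
    A baseline of the kind the paper compares against is easy to reproduce with scikit-learn; the snippet below runs KNN on scikit-learn's small digits set as a stand-in for MNIST (an assumption, since the full MNIST download is not bundled).

        from sklearn.datasets import load_digits
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier

        # KNN baseline on a small MNIST stand-in
        X, y = load_digits(return_X_y=True)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25,
                                              random_state=0)
        knn = KNeighborsClassifier(n_neighbors=3).fit(Xtr, ytr)
        print(f"KNN accuracy: {knn.score(Xte, yte):.3f}")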

  15. Automated Delineation of Lung Tumors from CT Images Using a Single Click Ensemble Segmentation Approach

    PubMed Central

    Gu, Yuhua; Kumar, Virendra; Hall, Lawrence O; Goldgof, Dmitry B; Li, Ching-Yen; Korn, René; Bendtsen, Claus; Velazquez, Emmanuel Rios; Dekker, Andre; Aerts, Hugo; Lambin, Philippe; Li, Xiuli; Tian, Jie; Gatenby, Robert A; Gillies, Robert J

    2012-01-01

    A single click ensemble segmentation (SCES) approach based on an existing “Click&Grow” algorithm is presented. The SCES approach requires only one operator-selected seed point, as compared with the multiple operator inputs that are typically needed. This facilitates processing large numbers of cases. An evaluation on a set of 129 CT lung tumor images using a similarity index (SI) was performed. The average SI is above 93% using 20 different start seeds, showing stability. The average SI for two different readers was 79.53%. We then compared the SCES algorithm with the two readers, the level set algorithm, and the skeleton graph cut algorithm, obtaining average SIs of 78.29%, 77.72%, 63.77% and 63.76%, respectively. We conclude that the newly developed automatic lung lesion segmentation algorithm is stable, accurate, and automated. PMID:23459617

  16. Defining a stable water isotope framework for isotope hydrology application in a large trans-boundary watershed (Russian Federation/Ukraine).

    PubMed

    Vystavna, Yuliya; Diadin, Dmytro; Huneau, Frédéric

    2018-05-01

    Stable isotopes of hydrogen (²H) and oxygen (¹⁸O) of the water molecule were used to assess the relationship between precipitation, surface water and groundwater in a large Russia/Ukraine trans-boundary river basin. Precipitation was sampled from November 2013 to February 2015, and surface water and groundwater were sampled during high and low flow in 2014. A local meteoric water line was defined for the Ukrainian part of the basin. The isotopic seasonality in precipitation was evident with depletion in heavy isotopes in November-March and an enrichment in April-October, indicating continental and temperature effects. Surface water was enriched in stable water isotopes from upstream to downstream sites due to progressive evaporation. Stable water isotopes in groundwater indicated that recharge occurs mainly during winter and spring. A one-year data set is probably not sufficient to report the seasonality of groundwater recharge, but this survey can be used to identify the stable water isotopes framework in a weakly gauged basin for further hydrological and geochemical studies.

  17. Hebbian self-organizing integrate-and-fire networks for data clustering.

    PubMed

    Landis, Florian; Ott, Thomas; Stoop, Ruedi

    2010-01-01

    We propose a Hebbian learning-based data clustering algorithm using spiking neurons. The algorithm is capable of distinguishing between clusters and noisy background data and finds an arbitrary number of clusters of arbitrary shape. These properties render the approach particularly useful for segmenting visual scenes into arbitrarily shaped homogeneous regions. We present several application examples, and in order to highlight the advantages and the weaknesses of our method, we systematically compare the results with those from standard methods such as k-means and Ward's linkage clustering. The analysis demonstrates that not only is the clustering ability of the proposed algorithm more powerful than those of the two competing methods, but its time complexity is also more modest than that of its strongest commonly used competitor.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jamieson, Kevin; Davis, IV, Warren L.

    Active learning methods automatically adapt data collection by selecting the most informative samples in order to accelerate machine learning. Because of this, real-world testing and comparing of active learning algorithms requires collecting new datasets (adaptively), rather than simply applying algorithms to benchmark datasets, as is the norm in (passive) machine learning research. To facilitate the development, testing and deployment of active learning for real applications, we have built an open-source software system for large-scale active learning research and experimentation. The system, called NEXT, provides a unique platform for real-world, reproducible active learning research. This paper details the challenges of building the system and demonstrates its capabilities with several experiments. The results show how experimentation can help expose strengths and weaknesses of active learning algorithms, in sometimes unexpected and enlightening ways.

  19. MUSIC algorithm for location searching of dielectric anomalies from S-parameters using microwave imaging

    NASA Astrophysics Data System (ADS)

    Park, Won-Kwang; Kim, Hwa Pyung; Lee, Kwang-Jae; Son, Seong-Ho

    2017-11-01

    Motivated by the biomedical engineering of early-stage breast cancer detection, we investigated the use of the MUltiple SIgnal Classification (MUSIC) algorithm for location searching of small anomalies using S-parameters. We considered the application of MUSIC to functional imaging where a small number of dipole antennas are used. Our approach is based on the application of the Born approximation or physical factorization. We analyzed cases in which the anomaly is small or large relative to the wavelength, linking the structure of the left-singular vectors to the nonzero singular values of a Multi-Static Response (MSR) matrix whose elements are the S-parameters. Using simulations, we demonstrated the strengths and weaknesses of the MUSIC algorithm in detecting both small and extended anomalies.
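
    A minimal sketch of the MUSIC imaging step described above, assuming the MSR matrix of S-parameters has already been assembled and that a steering (test) vector can be evaluated at each grid point; names and shapes are illustrative:

    ```python
    import numpy as np

    def music_image(msr, steering, grid, signal_rank):
        """MUSIC pseudo-spectrum from a multi-static response (MSR) matrix.

        msr         -- (N, N) complex matrix of S-parameters
        steering    -- function mapping a location r to an (N,) test vector
        grid        -- iterable of candidate anomaly locations
        signal_rank -- number of significant (nonzero) singular values
        """
        U, s, Vh = np.linalg.svd(msr)
        noise = U[:, signal_rank:]  # basis of the noise subspace
        image = []
        for r in grid:
            g = steering(r)
            g = g / np.linalg.norm(g)
            # Peaks occur where g(r) is nearly orthogonal to the noise subspace.
            image.append(1.0 / np.linalg.norm(noise.conj().T @ g))
        return np.array(image)
    ```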

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Claus, Rene A.; Wang, Yow-Gwo; Wojdyla, Antoine

    Extreme Ultraviolet (EUV) Lithography mask defects were examined on the actinic mask imaging system, SHARP, at Lawrence Berkeley National Laboratory. Also, a quantitative phase retrieval algorithm based on the Weak Object Transfer Function was applied to the measured through-focus aerial images to examine the amplitude and phase of the defects. The accuracy of the algorithm was demonstrated by comparing the results of measurements using a phase contrast zone plate and a standard zone plate. Using partially coherent illumination to measure frequencies that would otherwise fall outside the numerical aperture (NA), it was shown that some defects are smaller than the conventional resolution of the microscope. We found that the programmed defects of various sizes were measured and shown to have both an amplitude and a phase component that the algorithm is able to recover.

  1. Learning to forget: continual prediction with LSTM.

    PubMed

    Gers, F A; Schmidhuber, J; Cummins, F

    2000-10-01

    Long short-term memory (LSTM; Hochreiter & Schmidhuber, 1997) can solve numerous tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams that are not a priori segmented into subsequences with explicitly marked ends at which the network's internal state could be reset. Without resets, the state may grow indefinitely and eventually cause the network to break down. Our remedy is a novel, adaptive "forget gate" that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources. We review illustrative benchmark problems on which standard LSTM outperforms other RNN algorithms. All algorithms (including LSTM) fail to solve continual versions of these problems. LSTM with forget gates, however, easily solves them, and in an elegant way.
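
    For reference, a forget-gate LSTM step is only a few lines; this is a generic sketch of the standard equations, not the authors' original implementation (the parameter packing is illustrative):

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x, h, c, W, U, b):
        """One LSTM step. W: (4H, D), U: (4H, H), b: (4H,) hold the stacked
        forget/input/candidate/output parameters; x: (D,), h and c: (H,)."""
        H = h.size
        z = W @ x + U @ h + b
        f = sigmoid(z[0:H])        # forget gate: learns when to reset state
        i = sigmoid(z[H:2*H])      # input gate
        g = np.tanh(z[2*H:3*H])    # candidate cell update
        o = sigmoid(z[3*H:4*H])    # output gate
        c_new = f * c + i * g      # f -> 0 releases internal resources
        h_new = o * np.tanh(c_new)
        return h_new, c_new
    ```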

  2. Numerically stable, scalable formulas for parallel and online computation of higher-order multivariate central moments with arbitrary weights

    DOE PAGES

    Pebay, Philippe; Terriberry, Timothy B.; Kolla, Hemanth; ...

    2016-03-29

    Formulas for incremental or parallel computation of second order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Such formulas are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results, and improve them with arbitrary-order, numerically stable one-pass formulas which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four as well as condition number and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate of these formulas, utilizing the compound moments, for a practical large-scale scientific application.
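
    The flavor of these formulas is visible in the weighted one-pass update of the mean and second central moment; the paper's arbitrary-order and compound variants generalize this (a West/Welford-style sketch, not the authors' code):

    ```python
    def weighted_update(state, x, w):
        """One-pass, numerically stable update of the weighted mean and
        M2 = sum_i w_i * (x_i - mean)^2; the variance is M2 / W."""
        W, mean, M2 = state
        W_new = W + w
        delta = x - mean
        mean_new = mean + (w / W_new) * delta
        M2_new = M2 + w * delta * (x - mean_new)
        return (W_new, mean_new, M2_new)

    state = (0.0, 0.0, 0.0)
    for x, w in [(1.0, 2.0), (2.0, 1.0), (4.0, 0.5)]:
        state = weighted_update(state, x, w)
    W, mean, M2 = state
    print(mean, M2 / W)  # weighted mean and variance
    ```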

  3. Simulation of large particle transport near the surface under stable conditions: comparison with the Hanford tracer experiments

    NASA Astrophysics Data System (ADS)

    Kim, Eugene; Larson, Timothy

    A plume model is presented describing the downwind transport of large particles (1-100 μm) under stable conditions. The model includes both vertical variations in wind speed and turbulence intensity as well as an algorithm for particle deposition at the surface. Model predictions compare favorably with the Hanford single and dual tracer experiments of crosswind integrated concentration (for particles: relative bias=-0.02 and 0.16, normalized mean square error=0.61 and 0.14, for the single and dual tracer experiments, respectively), whereas the US EPA's fugitive dust model consistently overestimates the observed concentrations at downwind distances beyond several hundred meters (for particles: relative bias=0.31 and 2.26, mean square error=0.42 and 1.71, respectively). For either plume model, the measured ratio of particle to gas concentration is consistently overestimated when using the deposition velocity algorithm of Sehmel and Hodgson (1978. DOE Report PNL-SA-6721, Pacific Northwest Laboratories, Richland, WA). In contrast, these same ratios are predicted with relatively little bias when using the algorithm of Kim et al. (2000. Atmospheric Environment 34 (15), 2387-2397).

  4. Application of Machine-Learning Models to Predict Tacrolimus Stable Dose in Renal Transplant Recipients

    NASA Astrophysics Data System (ADS)

    Tang, Jie; Liu, Rong; Zhang, Yue-Li; Liu, Mou-Ze; Hu, Yong-Fang; Shao, Ming-Jie; Zhu, Li-Jun; Xin, Hua-Wen; Feng, Gui-Wen; Shang, Wen-Jun; Meng, Xiang-Guang; Zhang, Li-Rong; Ming, Ying-Zi; Zhang, Wei

    2017-02-01

    Tacrolimus has a narrow therapeutic window and considerable variability in clinical use. Our goal was to compare the performance of multiple linear regression (MLR) and eight machine learning techniques in pharmacogenetic algorithm-based prediction of tacrolimus stable dose (TSD) in a large Chinese cohort. A total of 1,045 renal transplant patients were recruited, 80% of whom were randomly selected as the “derivation cohort” to develop the dose-prediction algorithm, while the remaining 20% constituted the “validation cohort” to test the final selected algorithm. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied and their performances were compared in this work. Among all the machine learning models, RT performed best in both the derivation [0.71 (0.67-0.76)] and validation cohorts [0.73 (0.63-0.82)]. In addition, the ideal rate of RT was 4% higher than that of MLR. To our knowledge, this is the first study to use machine learning models to predict TSD, which will further facilitate personalized medicine in tacrolimus administration in the future.
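
    The derivation/validation comparison can be schematized with scikit-learn stand-ins (LinearRegression for MLR, DecisionTreeRegressor for RT); the features and targets below are synthetic, invented purely for illustration:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeRegressor

    # Hypothetical stand-in data: covariates -> stable dose.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1045, 8))
    y = 3.0 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1045)

    # 80/20 derivation/validation split, mirroring the study design.
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)

    for name, model in [("MLR", LinearRegression()),
                        ("RT", DecisionTreeRegressor(max_depth=4, random_state=0))]:
        model.fit(X_tr, y_tr)
        print(name, r2_score(y_va, model.predict(X_va)))
    ```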

  5. Numerical solution of the Hele-Shaw equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitaker, N.

    1987-04-01

    An algorithm is presented for approximating the motion of the interface between two immiscible fluids in a Hele-Shaw cell. The interface is represented by a set of volume fractions. We use the Simple Line Interface Calculation method along with the method of fractional steps to transport the interface. The equation of continuity leads to a Poisson equation for the pressure. The Poisson equation is discretized. Near the interface where the velocity field is discontinuous, the discretization is based on a weak formulation of the continuity equation. Interpolation is used on each side of the interface to increase the accuracy of the algorithm. The weak formulation as well as the interpolation are based on the computed volume fractions. This treatment of the interface is new. The discretized equations are solved by a modified conjugate gradient method. Surface tension is included and the curvature is computed through the use of osculating circles. For perturbations of small amplitude, a surprisingly good agreement is found between the numerical results and linearized perturbation theory. Numerical results are presented for the finite amplitude growth of unstable fingers. 62 refs., 13 figs.

  6. Development of cyberblog-based intelligent tutorial system to improve students learning ability algorithm

    NASA Astrophysics Data System (ADS)

    Wahyudin; Riza, L. S.; Putro, B. L.

    2018-05-01

    E-learning, learning conducted online with the students' usual tools, is favoured by students. Computer media offer a benefit that other learning media lack: the ability of the computer to interact with each student individually. A weakness of many learning media, however, is the built-in assumption that all students have uniform ability, which in reality is not the case. The concept of an Intelligent Tutorial System (ITS) combined with a cyberblog application can overcome this neglect of diversity. An ITS-based cyberblog application is a web-based interactive program that implements artificial intelligence and can be used as a learning and evaluation medium in the learning process. Using an ITS-based cyberblog in learning is an engaging alternative that helps students measure their understanding of the material. This research is concerned with improving students' logical thinking ability, especially in algorithm subjects.

  7. The SASS scattering coefficient algorithm. [Seasat-A Satellite Scatterometer

    NASA Technical Reports Server (NTRS)

    Bracalente, E. M.; Grantham, W. L.; Boggs, D. H.; Sweet, J. L.

    1980-01-01

    This paper describes the algorithms used to convert engineering unit data obtained from the Seasat-A satellite scatterometer (SASS) to radar scattering coefficients and associated supporting parameters. A description is given of the instrument receiver and related processing used by the scatterometer to measure signal power backscattered from the earth's surface. The applicable radar equation used for determining scattering coefficient is derived. Sample results of SASS data processed through current algorithm development facility (ADF) scattering coefficient algorithms are presented which include scattering coefficient values for both water and land surfaces. Scattering coefficient signatures for these two surface types are seen to have distinctly different characteristics. Scattering coefficient measurements of the Amazon rain forest indicate the usefulness of this type of data as a stable calibration reference target.

  8. Fuzzy jets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackey, Lester; Nachman, Benjamin; Schwartzman, Ariel

    Collimated streams of particles produced in high energy physics experiments are organized using clustering algorithms to form jets. To construct jets, the experimental collaborations based at the Large Hadron Collider (LHC) primarily use agglomerative hierarchical clustering schemes known as sequential recombination. We propose a new class of algorithms for clustering jets that use infrared and collinear safe mixture models. These new algorithms, known as fuzzy jets, are clustered using maximum likelihood techniques and can dynamically determine various properties of jets like their size. We show that the fuzzy jet size adds additional information to conventional jet tagging variables in boosted topologies. Furthermore, we study the impact of pileup and show that with some slight modifications to the algorithm, fuzzy jets can be stable up to high pileup interaction multiplicities.

  9. Fuzzy jets

    DOE PAGES

    Mackey, Lester; Nachman, Benjamin; Schwartzman, Ariel; ...

    2016-06-01

    Collimated streams of particles produced in high energy physics experiments are organized using clustering algorithms to form jets. To construct jets, the experimental collaborations based at the Large Hadron Collider (LHC) primarily use agglomerative hierarchical clustering schemes known as sequential recombination. We propose a new class of algorithms for clustering jets that use infrared and collinear safe mixture models. These new algorithms, known as fuzzy jets, are clustered using maximum likelihood techniques and can dynamically determine various properties of jets like their size. We show that the fuzzy jet size adds additional information to conventional jet tagging variables in boosted topologies. Furthermore, we study the impact of pileup and show that with some slight modifications to the algorithm, fuzzy jets can be stable up to high pileup interaction multiplicities.

  10. Kelvin-wave cascade in the vortex filament model

    NASA Astrophysics Data System (ADS)

    Baggaley, Andrew W.; Laurie, Jason

    2014-01-01

    The small-scale energy-transfer mechanism in zero-temperature superfluid turbulence of helium-4 is still a widely debated topic. Currently, the main hypothesis is that weakly nonlinear interacting Kelvin waves (KWs) transfer energy to sufficiently small scales such that energy is dissipated as heat via phonon excitations. Theoretically, there are at least two proposed theories for Kelvin-wave interactions. We perform the most comprehensive numerical simulation of weakly nonlinear interacting KWs to date and show, using a specially designed numerical algorithm incorporating the full Biot-Savart equation, that our results are consistent with the nonlocal six-wave KW interactions as proposed by L'vov and Nazarenko.

  11. A stable partitioned FSI algorithm for rigid bodies and incompressible flow. Part I: Model problem analysis

    NASA Astrophysics Data System (ADS)

    Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.; Tang, Qi

    2017-08-01

    A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added-mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this first part of a two-part series, the properties of the AMP scheme are motivated and evaluated through the development and analysis of some model problems. The analysis shows when and why the traditional partitioned scheme becomes unstable due to either added-mass or added-damping effects. The analysis also identifies the proper form of the added-damping which depends on the discrete time-step and the grid-spacing normal to the rigid body. The results of the analysis are confirmed with numerical simulations that also demonstrate a second-order accurate implementation of the AMP scheme.

  12. A stable partitioned FSI algorithm for rigid bodies and incompressible flow. Part I: Model problem analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.

    A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added-mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this second part of a two-part series, the general formulation of the AMP scheme is presented including the form of the AMP interface conditions and added-damping tensors for general geometries. A fully second-order accurate implementation of the AMP scheme is developed in two dimensions based on a fractional-step method for the incompressible Navier-Stokes equations using finite difference methods and overlapping grids to handle the moving geometry. Here, the numerical scheme is verified on a number of difficult benchmark problems.

  13. SABRE: a method for assessing the stability of gene modules in complex tissues and subject populations.

    PubMed

    Shannon, Casey P; Chen, Virginia; Takhar, Mandeep; Hollander, Zsuzsanna; Balshaw, Robert; McManus, Bruce M; Tebbutt, Scott J; Sin, Don D; Ng, Raymond T

    2016-11-14

    Gene network inference (GNI) algorithms can be used to identify sets of coordinately expressed genes, termed network modules, from whole transcriptome gene expression data. The identification of such modules has become a popular approach to systems biology, with important applications in translational research. Although diverse computational and statistical approaches have been devised to identify such modules, their performance behavior is still not fully understood, particularly in complex human tissues. Given human heterogeneity, one important question is how the outputs of these computational methods are sensitive to the input sample set, or stability. A related question is how this sensitivity depends on the size of the sample set. We describe here the SABRE (Similarity Across Bootstrap RE-sampling) procedure for assessing the stability of gene network modules using a re-sampling strategy, introduce a novel criterion for identifying stable modules, and demonstrate the utility of this approach in a clinically-relevant cohort, using two different gene network module discovery algorithms. The stability of modules increased as sample size increased and stable modules were more likely to be replicated in larger sets of samples. Random modules derived from permuted gene expression data were consistently unstable, as assessed by SABRE, and provide a useful baseline value for our proposed stability criterion. Gene module sets identified by different algorithms varied with respect to their stability, as assessed by SABRE. Finally, stable modules were more readily annotated in various curated gene set databases. The SABRE procedure and proposed stability criterion may provide guidance when designing systems biology studies in complex human disease and tissues.
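
    A minimal bootstrap-stability sketch in the spirit of SABRE, though not the SABRE criterion itself; KMeans and the adjusted Rand index are stand-ins for the module discovery algorithm and the similarity measure:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score

    def module_stability(expr, n_modules=5, n_boot=20, seed=0):
        """Mean agreement between reference gene modules and modules
        recomputed on bootstrap re-samples of the subjects.
        expr: (samples x genes) expression matrix."""
        rng = np.random.default_rng(seed)
        km = lambda m: KMeans(n_clusters=n_modules, n_init=10,
                              random_state=seed).fit_predict(m.T)
        ref = km(expr)  # cluster genes on the full cohort
        scores = []
        for _ in range(n_boot):
            idx = rng.integers(0, expr.shape[0], expr.shape[0])  # resample subjects
            scores.append(adjusted_rand_score(ref, km(expr[idx])))
        return float(np.mean(scores))
    ```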

  14. A stable partitioned FSI algorithm for rigid bodies and incompressible flow. Part I: Model problem analysis

    DOE PAGES

    Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.; ...

    2017-01-20

    A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added-mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this second part of a two-part series, the general formulation of the AMP scheme is presented including the form of the AMP interface conditions and added-damping tensors for general geometries. A fully second-order accurate implementation of the AMP scheme is developed in two dimensions based on a fractional-step method for the incompressible Navier-Stokes equations using finite difference methods and overlapping grids to handle the moving geometry. Here, the numerical scheme is verified on a number of difficult benchmark problems.

  15. Proceedings of the Conference on Moments and Signal

    NASA Astrophysics Data System (ADS)

    Purdue, P.; Solomon, H.

    1992-09-01

    The focus of this paper is (1) to describe systematic methodologies for selecting nonlinear transformations for blind equalization algorithms (and thus new types of cumulants), and (2) to give an overview of the existing blind equalization algorithms and point out their strengths as well as weaknesses. It is shown that all blind equalization algorithms belong to one of the following three categories, depending on where the nonlinear transformation is applied to the data: (1) the Bussgang algorithms, where the nonlinearity is at the output of the adaptive equalization filter; (2) the polyspectra (or Higher-Order Spectra) algorithms, where the nonlinearity is at the input of the adaptive equalization filter; and (3) the algorithms where the nonlinearity is inside the adaptive filter, i.e., the nonlinear filter or neural network. We describe methodologies for selecting nonlinear transformations based on various optimality criteria such as MSE or MAP. We illustrate that such existing algorithms as Sato, Benveniste-Goursat, Godard or CMA, Stop-and-Go, and Donoho are indeed special cases of the Bussgang family of techniques when the nonlinearity is memoryless. We present results demonstrating that the polyspectra-based algorithms exhibit a faster convergence rate than the Bussgang algorithms; however, this improved performance comes at the expense of more computations per iteration. We also show that blind equalizers based on nonlinear filters or neural networks are better suited for channels that have nonlinear distortions.
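
    As an example of the Bussgang family with a memoryless nonlinearity, the Godard/CMA update can be sketched as follows (tap count, step size, and modulus constant are illustrative):

    ```python
    import numpy as np

    def cma_equalizer(x, n_taps=11, mu=1e-3, R2=1.0):
        """Constant Modulus Algorithm: the nonlinearity e = y(|y|^2 - R2)
        acts on the output y of the adaptive equalization filter."""
        w = np.zeros(n_taps, dtype=complex)
        w[n_taps // 2] = 1.0                 # center-spike initialization
        y_out = []
        for n in range(n_taps, len(x)):
            u = x[n - n_taps:n][::-1]        # regressor, most recent sample first
            y = np.vdot(w, u)                # filter output y = w^H u
            err = np.abs(y) ** 2 - R2        # constant-modulus error
            w -= mu * err * np.conj(y) * u   # stochastic-gradient descent step
            y_out.append(y)
        return w, np.array(y_out)
    ```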

  16. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential and important process that separates an image into regions that have similar characteristics or features, transforming the image for better analysis and evaluation. An important benefit of segmentation is the identification of the region of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport and medical images. It scans all pixels in the image and clusters each pixel according to the upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm is performed by merging pixels with similar neighbors based on an identified threshold; such an approach leads to weak reliability and poor shape matching of the produced segments. This paper proposes an adaptive threshold function for the clustering process of the Fast Scanning algorithm. The function is computed from the gray values of the image's pixels and their variance; pixel levels above the threshold are converted into intensity values between 0 and 1, and other values are set to zero. The proposed enhanced Fast Scanning algorithm is applied to images of public and private transportation in Iraq. Evaluation is then made by comparing the images produced by the proposed algorithm and the standard Fast Scanning algorithm. The results showed that the proposed algorithm is faster than standard Fast Scanning.
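
    A simplified sketch of the fast-scanning idea with an adaptive, variance-dependent merge threshold; this illustrates the clustering rule only and is not the authors' exact threshold function:

    ```python
    import numpy as np

    def fast_scan(img, k=1.0):
        """Cluster each pixel with its upper/left neighbor when the gray-level
        gap is within an adaptive threshold; otherwise start a new segment."""
        h, w = img.shape
        labels = -np.ones((h, w), dtype=int)
        means, counts, nxt = {}, {}, 0
        thr = k * np.sqrt(img.var())  # adaptive, image-dependent threshold
        for i in range(h):
            for j in range(w):
                best = None
                nbrs = ([labels[i - 1, j]] if i else []) + \
                       ([labels[i, j - 1]] if j else [])
                for li in nbrs:
                    if abs(img[i, j] - means[li]) <= thr:
                        best = li
                if best is None:  # no similar neighbor: open a new cluster
                    best, nxt = nxt, nxt + 1
                    means[best], counts[best] = float(img[i, j]), 0
                counts[best] += 1  # merge the pixel and update the running mean
                means[best] += (img[i, j] - means[best]) / counts[best]
                labels[i, j] = best
        return labels
    ```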

  17. Two-wavelength Lidar inversion algorithm for determining planetary boundary layer height

    NASA Astrophysics Data System (ADS)

    Liu, Boming; Ma, Yingying; Gong, Wei; Jian, Yang; Ming, Zhang

    2018-02-01

    This study proposes a two-wavelength Lidar inversion algorithm to determine the boundary layer height (BLH) based on particle clustering. Color ratio and depolarization ratio are used to analyze the particle distribution, based on which the proposed algorithm can overcome the effects of complex aerosol layers when calculating the BLH. The algorithm is used to determine the top of the boundary layer under different mixing states. Experimental results demonstrate that the proposed algorithm can determine the top of the boundary layer even in complex cases, and that it deals better with weak convection conditions. Finally, experimental data from June 2015 to December 2015 were used to verify the reliability of the proposed algorithm. The correlation between the results of the proposed algorithm and the manual method is R2 = 0.89, with an RMSE of 131 m and a mean bias of 49 m; the correlation between the ideal profile fitting method and the manual method is R2 = 0.64, with an RMSE of 270 m and a mean bias of 165 m; and the correlation between the wavelet covariance transform method and the manual method is R2 = 0.76, with an RMSE of 196 m and a mean bias of 23 m. These findings indicate that the proposed algorithm has better reliability and stability than traditional algorithms.

  18. Three-Dimensional Stable Nonorthogonal FDTD Algorithm with Adaptive Mesh Refinement for Solving Maxwell’s Equations

    DTIC Science & Technology

    2013-03-01

    The extracted record text is fragmentary, preserving only reference-list debris (e.g., an efficient FDTD algorithm for the analysis of microstrip patch antennas printed on a general anisotropic dielectric substrate, IEEE Trans. Antennas Propag.) and a note that applications include antennas, microwave circuits, geophysics, and optics, Ground Penetrating Radar (GPR) being a popular example.

  19. Positive position control of robotic manipulators

    NASA Technical Reports Server (NTRS)

    Baz, A.; Gumusel, L.

    1989-01-01

    The present, simple and accurate position-control algorithm, which is applicable to fast-moving and lightly damped robot arms, is based on the positive position feedback (PPF) strategy and relies solely on position sensors to monitor joint angles of robotic arms to furnish stable position control. The optimized tuned filters, in the form of a set of difference equations, manipulate position signals for robotic system performance. Attention is given to comparisons between this PPF-algorithm controller's experimentally ascertained performance characteristics and those of a conventional proportional controller.

  20. Distributed Database Control and Allocation. Volume 1. Frameworks for Understanding Concurrency Control and Recovery Algorithms.

    DTIC Science & Technology

    1983-10-01

    The extracted record text is fragmentary but partially recoverable: on an Abort(Ti), the operation is forwarded directly to the recovery system, which acknowledges when it has been processed. Abort(Ti): write Ti into the abort list, then undo all of Ti's writes by reading their before-images from the audit trail and writing them back into the stable database [Ack]; then delete Ti from the active list. Restart: process Abort(Ti) for each Ti on the active list [Ack]. In this algorithm

  1. Processing and evaluation of riverine waveforms acquired by an experimental bathymetric LiDAR

    NASA Astrophysics Data System (ADS)

    Kinzel, P. J.; Legleiter, C. J.; Nelson, J. M.

    2010-12-01

    Accurate mapping of fluvial environments with airborne bathymetric LiDAR is challenged not only by environmental characteristics but also by the development and application of software routines to post-process the recorded laser waveforms. During a bathymetric LiDAR survey, the transmission of the green-wavelength laser pulses through the water column is influenced by a number of factors including turbidity, the presence of organic material, and the reflectivity of the streambed. For backscattered laser pulses returned from the river bottom and digitized by the LiDAR detector, post-processing software is needed to interpret and identify distinct inflections in the reflected waveform. Relevant features of this energy signal include the air-water interface, volume reflection from the water column itself, and, ideally, a strong return from the bottom. We discuss our efforts to acquire, analyze, and interpret riverine surveys using the USGS Experimental Advanced Airborne Research LiDAR (EAARL) in a variety of fluvial environments. Initial processing of data collected in the Trinity River, California, using the EAARL Airborne Lidar Processing Software (ALPS) highlighted the difficulty of retrieving a distinct bottom signal in deep pools. Examination of laser waveforms from these pools indicated that weak bottom reflections were often neglected by a trailing edge algorithm used by ALPS to process shallow riverine waveforms. For the Trinity waveforms, this algorithm had a tendency to identify earlier inflections as the bottom, resulting in a shallow bias. Similarly, an EAARL survey along the upper Colorado River, Colorado, also revealed the inadequacy of the trailing edge algorithm for detecting weak bottom reflections. We developed an alternative waveform processing routine by exporting digitized laser waveforms from ALPS, computing the local extrema, and fitting Gaussian curves to the convolved backscatter. Our field data indicate that these techniques improved the definition of pool areas dominated by weak bottom reflections. These processing techniques are also being tested for EAARL surveys collected along the Platte and Klamath Rivers, where environmental conditions have resulted in suppressed or convolved bottom reflections.
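
    The alternative routine described above amounts to seeding from local extrema and Gaussian decomposition of the backscatter; a minimal sketch on a synthetic two-return waveform (all numbers hypothetical) follows:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def two_gaussians(t, a1, t1, s1, a2, t2, s2):
        """Surface and bottom returns modeled as two Gaussian pulses."""
        return (a1 * np.exp(-0.5 * ((t - t1) / s1) ** 2) +
                a2 * np.exp(-0.5 * ((t - t2) / s2) ** 2))

    # Synthetic digitized waveform: strong surface return, weak bottom return.
    t = np.linspace(0.0, 60.0, 300)  # ns
    wave = two_gaussians(t, 1.0, 12.0, 2.5, 0.08, 38.0, 3.0)
    wave += np.random.default_rng(1).normal(scale=0.005, size=t.size)

    p0 = [1.0, 10.0, 2.0, 0.1, 35.0, 2.0]  # seeds taken near local extrema
    popt, _ = curve_fit(two_gaussians, t, wave, p0=p0)
    print("bottom return at ~%.1f ns" % popt[4])
    ```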

  2. The impact of human-environment interactions on the stability of forest-grassland mosaic ecosystems

    PubMed Central

    Innes, Clinton; Anand, Madhur; Bauch, Chris T.

    2013-01-01

    Forest-grassland mosaic ecosystems can exhibit alternative stable states, whereby under the same environmental conditions, the ecosystem could equally well reside in either one state or another, depending on the initial conditions. We develop a mathematical model that couples a simplified forest-grassland mosaic model to a dynamic model of opinions about conservation priorities in a population, based on perceptions of ecosystem rarity. Weak human influence increases the region of parameter space where alternative stable states are possible. However, strong human influence precludes bistability, such that forest and grassland either co-exist at a single, stable equilibrium, or their relative abundance oscillates. Moreover, a perturbation can shift the system from a stable state to an oscillatory state. We conclude that human-environment interactions can qualitatively alter the composition of forest-grassland mosaic ecosystems. The human role in such systems should be viewed as a dynamic, responsive element rather than as a fixed, unchanging entity. PMID:24048359

  3. Towards Stable CuZnAl Slurry Catalysts for the Synthesis of Ethanol from Syngas

    NASA Astrophysics Data System (ADS)

    Dong, Weibing; Gao, Zhihua; Zhang, Qian; Huang, Wei

    2018-07-01

    A stable CuZnAl slurry catalyst for the synthesis of ethanol from syngas has been developed by adjusting the heat treatment conditions of the complete liquid-phase method. The activity evaluation results showed that the CuZnAl catalyst, when heat-treated under a high pressure and temperature, was a stable catalyst for the synthesis of ethanol. The selectivity of ethanol using the CuZnAl slurry catalyst, which was heat-treated at 553 K under 4.0 MPa, increased continuously with time and was stable at approximately 26.00% after 144 h. The characterization results indicated that the CuZnAl slurry catalyst heat-treated under high pressure conditions could facilitate the formation of a more perfect structure with a larger specific surface area. The prepared catalyst contained a balance of strong and weak acid sites, an appropriate form of Cu2O and a high Cu/Zn atomic ratio at the catalyst surface, providing its stability in ethanol synthesis from syngas.

  4. Energetic Level Scheme of the Stable S=-2 Dihyperon

    NASA Astrophysics Data System (ADS)

    Aslanyan, P. Z.; Shahbazian, B. A.

    2001-11-01

    The quark and soliton Skyrme-type models predict two different sets of S=-2 stable dibaryon states. The lowest state of the quark-model set is an I=0, Jπ = 0+ isosinglet dibaryon with M(H0) < 2M(Λ), whereas that of the soliton Skyrme-type model set is an I=1, Jπ = 0 isotriplet dibaryon with M(H-, H0, H+) ≈ 2370 MeV/c2. On photographs of the JINR LHE 2m propane bubble chamber exposed to a 10 GeV/c proton beam, two groups of events interpreted as S=-2 stable dibaryons were observed [1-7]. Quasi-diffractive processes play the decisive role in dihyperon production. The mean lifetime for weak decay of stable dihyperons exceeds 3.3×10^-10 s. Several Ξ- hyperons have been registered in these collisions with an effective cross section of 1300-600 nb. The formally estimated effective cross section for H dibaryon production in propane at a momentum of 10 GeV/c is 100 nb.

  5. Estimation of Dry Fracture Weakness, Porosity, and Fluid Modulus Using Observable Seismic Reflection Data in a Gas-Bearing Reservoir

    NASA Astrophysics Data System (ADS)

    Chen, Huaizhen; Zhang, Guangzhi

    2017-05-01

    Fracture detection and fluid identification are important tasks for a fractured reservoir characterization. Our goal is to demonstrate a direct approach to utilize azimuthal seismic data to estimate fluid bulk modulus, porosity, and dry fracture weaknesses, which decreases the uncertainty of fluid identification. Combining Gassmann's (Vier. der Natur. Gesellschaft Zürich 96:1-23, 1951) equations and linear-slip model, we first establish new simplified expressions of stiffness parameters for a gas-bearing saturated fractured rock with low porosity and small fracture density, and then we derive a novel PP-wave reflection coefficient in terms of dry background rock properties (P-wave and S-wave moduli, and density), fracture (dry fracture weaknesses), porosity, and fluid (fluid bulk modulus). A Bayesian Markov chain Monte Carlo nonlinear inversion method is proposed to estimate fluid bulk modulus, porosity, and fracture weaknesses directly from azimuthal seismic data. The inversion method yields reasonable estimates in the case of synthetic data containing a moderate noise and stable results on real data.

  6. Intermediary Variables and Algorithm Parameters for an Electronic Algorithm for Intravenous Insulin Infusion

    PubMed Central

    Braithwaite, Susan S.; Godara, Hemant; Song, Julie; Cairns, Bruce A.; Jones, Samuel W.; Umpierrez, Guillermo E.

    2009-01-01

    Background Algorithms for intravenous insulin infusion may assign the infusion rate (IR) by a two-step process. First, the previous insulin infusion rate (IRprevious) and the rate of change of blood glucose (BG) from the previous iteration of the algorithm are used to estimate the maintenance rate (MR) of insulin infusion. Second, the insulin IR for the next iteration (IRnext) is assigned to be commensurate with the MR and the distance of the current blood glucose (BGcurrent) from target. With use of a specific set of algorithm parameter values, a family of iso-MR curves is created, each giving IR as a function of MR and BG. Method To test the feasibility of estimating MR from the IRprevious and the previous rate of change of BG, historical hyperglycemic data points were used to compute the “maintenance rate cross step next estimate” (MRcsne). Historical cases had been treated with intravenous insulin infusion using a tabular protocol that estimated MR according to column-change rules. The mean IR on historical stable intervals (MRtrue), an estimate of the biologic value of MR, was compared to MRcsne during the hyperglycemic iteration immediately preceding the stable interval. Hypothetically calculated MRcsne-dependent IRnext was compared to IRnext assigned historically. An expanded theory of an algorithm is developed mathematically. Practical recommendations for computerization are proposed. Results The MRtrue determined on each of 30 stable intervals and the MRcsne during the immediately preceding hyperglycemic iteration differed, having medians with interquartile ranges 2.7 (1.2–3.7) and 3.2 (1.5–4.6) units/h, respectively. However, these estimates of MR were strongly correlated (R2 = 0.88). During hyperglycemia at 941 time points the IRnext assigned historically and the hypothetically calculated MRcsne-dependent IRnext differed, having medians with interquartile ranges 4.0 (3.0–6.0) and 4.6 (3.0–6.8) units/h, respectively, but these paired values again were correlated (R2 = 0.87). This article describes a programmable algorithm for intravenous insulin infusion. The fundamental equation of the algorithm gives the relationship among IR; the biologic parameter MR; and two variables expressing an instantaneous rate of change of BG, one of which must be zero at any given point in time and the other positive, negative, or zero, namely the rate of change of BG from below target (rate of ascent) and the rate of change of BG from above target (rate of descent). In addition to user-definable parameters, three special algorithm parameters discoverable in nature are described: the maximum rate of the spontaneous ascent of blood glucose during nonhypoglycemia, the glucose per daily dose of insulin exogenously mediated, and the MR at given patient time points. User-assignable parameters will facilitate adaptation to different patient populations. Conclusions An algorithm is described that estimates MR prior to the attainment of euglycemia and computes MR-dependent values for IRnext. Design features address glycemic variability, promote safety with respect to hypoglycemia, and define a method for specifying glycemic targets that are allowed to differ according to patient condition. PMID:20144334

  7. Multiscale description of mercury intrusion curves from an Oxisol and the residual saprolite left after deep profile excavation

    NASA Astrophysics Data System (ADS)

    Vidal Vázquez, Eva; Kitamura, Aline E.; Alves, Marlene C.; Miranda, José G. V.; Paz Ferreiro, Jorge

    2010-05-01

    Oxisols are highly weathered soils with a thick profile that are found primarily in the intertropical regions of the world. Brazilian Oxisols are characterized by 1:1 low-activity clays, a weak macrostructure, and a strong microgranular structure, which results in very stable aggregates (pseudosand) at the

  8. Topological Quantum Information in a 3D Neutral Atom Array

    DTIC Science & Technology

    2015-01-02

    The extracted record text is fragmentary, preserving only a figure caption on projection sideband cooling (pink curves showing the Δn = +1, 0, -1 and -2 transitions, with the atoms only weakly in the Lamb-Dicke regime) and reference-list debris, including C. Knoernschild et al., “Stable optical phase modulation with micromirrors,” Opt. Express.

  9. Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms

    PubMed Central

    Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon

    2011-01-01

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software—BioImage Suite (bioimagesuite.org). PMID:21249532

  10. Stiff, Thermally Stable and Highly Anisotropic Wood-Derived Carbon Composite Monoliths for Electromagnetic Interference Shielding.

    PubMed

    Yuan, Ye; Sun, Xianxian; Yang, Minglong; Xu, Fan; Lin, Zaishan; Zhao, Xu; Ding, Yujie; Li, Jianjun; Yin, Weilong; Peng, Qingyu; He, Xiaodong; Li, Yibin

    2017-06-28

    Electromagnetic interference (EMI) shielding materials for electronic devices in aviation and aerospace not only need light weight and high shielding effectiveness, but must also withstand harsh environments. Traditional EMI shielding materials are often heavy, thermally unstable, short-lived, poorly tolerant of chemicals, and hard to manufacture. Finding high-efficiency EMI shielding materials that overcome these weaknesses remains a great challenge. Herein, inspired by the unique structure of natural wood, lightweight and highly anisotropic wood-derived carbon composite EMI shielding materials have been prepared that possess not only high EMI shielding performance and mechanical stability, but also thermal stability, outperforming metals, conductive polymers, and their composites. The newly developed low-cost materials are promising for applications in aerospace electronic devices, especially at extreme temperatures.

  11. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    PubMed

    Fisz, Jacek J

    2006-12-07

    The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and accelerates the optimization process considerably because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of the kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to χ2, obtained from the Taylor series expansion of χ2, is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones, and it does not apply to model functions which are multi-linear combinations of nonlinear functions.
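
    The division of labor in GA-MLR, a global search over the nonlinear parameters with the linear amplitudes recovered by least squares at each trial, can be sketched as below; differential evolution stands in for the GA component, and the biexponential decay data are synthetic:

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    t = np.linspace(0.0, 10.0, 200)
    y = 2.0 * np.exp(-0.7 * t) + 0.5 * np.exp(-0.05 * t)  # synthetic decay surface

    def residual(k):
        """For trial nonlinear decay rates k, the linear amplitudes are
        obtained by linear least squares (the MLR step)."""
        A = np.exp(-np.outer(t, k))  # design matrix of decay functions
        a, *_ = np.linalg.lstsq(A, y, rcond=None)
        return np.linalg.norm(A @ a - y)

    # The global optimizer handles only the nonlinear parameters.
    result = differential_evolution(residual, bounds=[(0.01, 5.0), (0.01, 5.0)])
    print("recovered decay rates:", np.sort(result.x))
    ```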

  12. Characterization of Weak Convergence of Probability-Valued Solutions of General One-Dimensional Kinetic Equations

    NASA Astrophysics Data System (ADS)

    Perversi, Eleonora; Regazzini, Eugenio

    2015-05-01

    For a general inelastic Kac-like equation recently proposed, this paper studies the long-time behaviour of its probability-valued solution. In particular, the paper provides necessary and sufficient conditions for the initial datum in order that the corresponding solution converges to equilibrium. The proofs rest on the general CLT for independent summands applied to a suitable Skorokhod representation of the original solution evaluated at an increasing and divergent sequence of times. It turns out that, roughly speaking, the initial datum must belong to the standard domain of attraction of a stable law, while the equilibrium is presentable as a mixture of stable laws.

  13. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation.

    PubMed

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions may no longer be valid. In order to overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, using a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problem caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but also can identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records that contains demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it.

  14. A solution quality assessment method for swarm intelligence optimization algorithms.

    PubMed

    Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

    Nowadays, swarm intelligence optimization has become an important optimization tool, widely used in many fields of application. In contrast to its many successful applications, its theoretical foundation is rather weak, and many problems remain to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the quality of the solution an algorithm obtains for a practical problem; this greatly limits application in practice. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," "ordinal performance" is used as the evaluation criterion. The feasible solutions are clustered according to distance to divide the solution samples into several parts; the solution space and the "good enough" set can then be decomposed based on the clustering results. Finally, using standard statistical techniques, the evaluation result is obtained. To validate the proposed method, intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS) were applied to the traveling salesman problem. Computational results indicate the feasibility of the proposed method.

  15. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation

    PubMed Central

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions may no longer be valid. In order to overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, using a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problem caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but also can identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records that contains demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it. PMID:26221133

  16. Multi-jagged: A scalable parallel spatial partitioning algorithm

    DOE PAGES

    Deveci, Mehmet; Rajamanickam, Sivasankaran; Devine, Karen D.; ...

    2015-03-18

    Geometric partitioning is fast and effective for load-balancing dynamic applications, particularly those requiring geometric locality of data (particle methods, crash simulations). We present, to our knowledge, the first parallel implementation of a multidimensional-jagged geometric partitioner. In contrast to the traditional recursive coordinate bisection algorithm (RCB), which recursively bisects subdomains perpendicular to their longest dimension until the desired number of parts is obtained, our algorithm does recursive multi-section with a given number of parts in each dimension. By computing multiple cut lines concurrently and intelligently deciding when to migrate data while computing the partition, we minimize data movement compared to efficient implementations of recursive bisection. We demonstrate the algorithm's scalability and quality relative to the RCB implementation in Zoltan on both real and synthetic datasets. Our experiments show that the proposed algorithm performs and scales better than RCB in terms of run-time without degrading the load balance. Lastly, our implementation partitions 24 billion points into 65,536 parts within a few seconds and exhibits near perfect weak scaling up to 6K cores.
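
    The core multi-section step, cutting one dimension into p parts at coordinate quantiles and recursing on the next dimension, can be sketched serially; the paper's contribution is the parallel, communication-minimizing version, which this toy (unweighted, single-process) sketch ignores:

    ```python
    import numpy as np

    def multi_section(points, parts_per_dim, dim=0):
        """Recursively multi-sect points into prod(parts_per_dim) balanced parts."""
        if dim == points.shape[1]:
            return [points]
        p = parts_per_dim[dim]
        cuts = np.quantile(points[:, dim], np.linspace(0, 1, p + 1)[1:-1])
        bins = np.searchsorted(cuts, points[:, dim])  # assign each point a bucket
        out = []
        for b in range(p):
            out.extend(multi_section(points[bins == b], parts_per_dim, dim + 1))
        return out

    pts = np.random.default_rng(0).random((10000, 2))
    parts = multi_section(pts, parts_per_dim=[4, 4])  # 16 parts in one 4x4 pass
    print([len(part) for part in parts])
    ```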

  17. Meaningless comparisons lead to false optimism in medical machine learning

    PubMed Central

    Kording, Konrad; Recht, Benjamin

    2017-01-01

    A new trend in medicine is the use of algorithms to analyze big datasets, e.g. using everything your phone measures about you for diagnostics or monitoring. However, these algorithms are commonly compared against weak baselines, which may contribute to excessive optimism. To assess how well an algorithm works, scientists typically ask how well its output correlates with medically assigned scores. Here we perform a meta-analysis to quantify how the literature evaluates their algorithms for monitoring mental wellbeing. We find that the bulk of the literature (∼77%) uses meaningless comparisons that ignore patient baseline state. For example, having an algorithm that uses phone data to diagnose mood disorders would be useful. However, it is possible to explain over 80% of the variance of some mood measures in the population by simply guessing that each patient has their own average mood—the patient-specific baseline. Thus, an algorithm that just predicts that our mood is like it usually is can explain the majority of variance, but is, obviously, entirely useless. Comparing to the wrong (population) baseline has a massive effect on the perceived quality of algorithms and produces baseless optimism in the field. To solve this problem we propose “user lift” that reduces these systematic errors in the evaluation of personalized medical monitoring. PMID:28949964
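
    The baseline argument is easy to reproduce: with synthetic mood data in which each patient has a stable personal mean, the patient-specific baseline alone explains most of the variance (all numbers hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_patients, n_days = 50, 30
    personal_mean = rng.normal(5.0, 2.0, size=(n_patients, 1))
    mood = personal_mean + rng.normal(0.0, 1.0, size=(n_patients, n_days))

    # Population baseline: predict the grand mean for every observation.
    mse_pop = np.mean((mood - mood.mean()) ** 2)
    # Patient-specific baseline: predict each patient's own average mood.
    mse_patient = np.mean((mood - mood.mean(axis=1, keepdims=True)) ** 2)

    print("variance explained by the patient baseline alone: %.0f%%"
          % (100 * (1 - mse_patient / mse_pop)))
    ```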

  18. 3D receiver function Kirchhoff depth migration image of Cascadia subduction slab weak zone

    NASA Astrophysics Data System (ADS)

    Cheng, C.; Allen, R. M.; Bodin, T.; Tauzin, B.

    2016-12-01

    We have developed a highly computationally efficient algorithm for applying 3D Kirchhoff depth migration to teleseismic receiver function data. Combining the primary PS arrival with later multiple arrivals, we recover more detailed knowledge of the earth's discontinuity structure (transmission and reflection). The method is especially useful, compared with the traditional CCP method, when dipping structure such as a subducting slab is present. We apply our method to regional Cascadia subduction zone receiver function data and obtain a high-resolution 3D migration image for both primaries and multiples. The image shows a clear slab weak zone (slab hole) at the upper plate boundary under northern California and the whole of Oregon. Compared with previous 2D receiver function images from 2D arrays (CAFE and CASC93), the position of the weak zone shows interesting coherence. The weak zone also coincides with absent local seismicity and rising heat, which leads us to consider ocean plate structure and hydraulic fluid processes during the formation and migration of the subducting slab.

  19. Matter-wave solitons in nonlinear optical lattices

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Hidetsugu; Malomed, Boris A.

    2005-10-01

    We introduce a dynamical model of a Bose-Einstein condensate based on the one-dimensional (1D) Gross-Pitaevskii equation (GPE) with a nonlinear optical lattice (NOL), which is represented by the cubic term whose coefficient is periodically modulated in the coordinate. The model describes a situation when the atomic scattering length is spatially modulated, via the optically controlled Feshbach resonance, in an optical lattice created by interference of two laser beams. Relatively narrow solitons supported by the NOL are predicted by means of the variational approximation (VA), and an averaging method is applied to broad solitons. A notable feature is a minimum norm (number of atoms), N=Nmin, necessary for the existence of solitons. The VA predicts Nmin very accurately. Numerical results are chiefly presented for the NOL with the zero spatial average value of the nonlinearity coefficient. Solitons with values of the amplitude A larger than at N=Nmin are stable. Unstable solitons with smaller, but not too small, A rearrange themselves into persistent breathers. For still smaller A, the soliton slowly decays into radiation without forming a breather. Broad solitons with very small A are practically stable, as their decay is extremely slow. These broad solitons may freely move across the lattice, featuring quasielastic collisions. Narrow solitons, which are strongly pinned to the NOL, can easily form stable complexes. Finally, the weakly unstable low-amplitude solitons are stabilized if a cubic term with a constant coefficient, corresponding to weak attraction, is included in the GPE.
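
    A hedged reconstruction of the model equation (scaled units and the cosine form of the modulation are our assumptions; the paper fixes its own normalization):

    ```latex
    % 1D GPE with a nonlinear optical lattice: the cubic coefficient is
    % periodically modulated in x.
    i\,\psi_t = -\tfrac{1}{2}\,\psi_{xx} + g(x)\,|\psi|^{2}\psi,
    \qquad g(x) = g_0 + g_1 \cos(2kx),
    ```

    with g_0 = 0 corresponding to the zero-spatial-average nonlinearity emphasized in the abstract.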

  20. Adaptive convergence nonuniformity correction algorithm.

    PubMed

    Qian, Weixian; Chen, Qian; Bai, Junqi; Gu, Guohua

    2011-01-01

    Nowadays, convergence and ghosting artifacts are common problems in scene-based nonuniformity correction (NUC) algorithms. In this study, we introduce the idea of space frequency to scene-based NUC. We then present a convergence speed factor, which adaptively changes the convergence speed in response to changes in the scene dynamic range. In effect, the role of the convergence speed factor is to decrease the standard deviation of the statistical data. The spatial-correlation characteristic of the nonuniformity was established from extensive experimental statistics and used to correct the convergence speed factor, making it more stable. Finally, real and simulated infrared image sequences were used to demonstrate the positive effect of our algorithm.
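
    As a rough illustration of the mechanism (not the paper's exact update rule), a scene-based NUC step with an adaptive step size might look like the following, assuming frames normalized to [0, 1]:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def nuc_step(frame, gain, offset, base_lr=0.05):
        """One LMS-style scene-based NUC update (a sketch): each pixel is
        pulled toward its local spatial mean, and the step size shrinks when
        the scene dynamic range is small, mimicking the 'convergence speed
        factor' described above."""
        corrected = gain * frame + offset
        target = uniform_filter(corrected, size=5)   # spatial low-pass reference
        err = corrected - target
        dyn = frame.max() - frame.min()
        lr = base_lr * dyn / (dyn + 1.0)             # assumed adaptive factor
        gain -= lr * err * frame                     # LMS gradient step for gain
        offset -= lr * err                           # and for offset
        return gain, offset
    ```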

  1. An efficient parallel algorithm for the calculation of unrestricted canonical MP2 energies.

    PubMed

    Baker, Jon; Wolinski, Krzysztof

    2011-11-30

    We present details of our efficient implementation of full accuracy unrestricted open-shell second-order canonical Møller-Plesset (MP2) energies, both serial and parallel. The algorithm is based on our previous restricted closed-shell MP2 code using the Saebo-Almlöf direct integral transformation. Depending on system details, UMP2 energies take from less than 1.5 to about 3.0 times as long as a closed-shell RMP2 energy on a similar system using the same algorithm. Several examples are given including timings for some large stable radicals with 90+ atoms and over 3600 basis functions. Copyright © 2011 Wiley Periodicals, Inc.

  2. Traveling Wave Modes of a Plane Layered Anelastic Earth

    DTIC Science & Technology

    2016-05-20

    … dependent anelastic moduli. The anelastic moduli must be frequency dependent and satisfy the Kramers-Kronig relations to preserve causality. … the complex, frequency-dependent moduli satisfy the Kramers-Kronig relations. Stable, well-behaved numerical algorithms exist for solving the complex …

  3. NVU dynamics. I. Geodesic motion on the constant-potential-energy hypersurface.

    PubMed

    Ingebrigtsen, Trond S; Toxvaerd, Søren; Heilmann, Ole J; Schrøder, Thomas B; Dyre, Jeppe C

    2011-09-14

    An algorithm is derived for computer simulation of geodesics on the constant-potential-energy hypersurface of a system of N classical particles. First, a basic time-reversible geodesic algorithm is derived by discretizing the geodesic stationarity condition and implementing the constant-potential-energy constraint via standard Lagrangian multipliers. The basic NVU algorithm is tested by single-precision computer simulations of the Lennard-Jones liquid. Excellent numerical stability is obtained if the force cutoff is smoothed and the two initial configurations have identical potential energy within machine precision. Nevertheless, just as for NVE algorithms, stabilizers are needed for very long runs in order to compensate for the accumulation of numerical errors that eventually lead to "entropic drift" of the potential energy towards higher values. A modification of the basic NVU algorithm is introduced that ensures potential-energy and step-length conservation; center-of-mass drift is also eliminated. Analytical arguments confirmed by simulations demonstrate that the modified NVU algorithm is absolutely stable. Finally, we present simulations showing that the NVU algorithm and the standard leap-frog NVE algorithm have identical radial distribution functions for the Lennard-Jones liquid. © 2011 American Institute of Physics
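
    A minimal sketch of such a constrained step, assuming the Lagrange multiplier is chosen so the move is orthogonal to the potential-energy gradient (the paper's basic NVU algorithm derives its multiplier from the discretized geodesic stationarity condition and may differ in detail):

    ```python
    import numpy as np

    def constant_u_step(r_prev, r_curr, force):
        """Verlet-like step constrained to the constant-potential-energy
        hypersurface (sketch only). lam is chosen so that
        F . (r_next - r_curr) = 0, i.e. U is conserved to first order."""
        f = force(r_curr)                          # F = -grad U at current point
        step = r_curr - r_prev
        lam = -np.vdot(f, step) / np.vdot(f, f)    # project step onto the surface
        return 2.0 * r_curr - r_prev + lam * f
    ```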

  4. Collective phase description of oscillatory convection

    NASA Astrophysics Data System (ADS)

    Kawamura, Yoji; Nakao, Hiroya

    2013-12-01

    We formulate a theory for the collective phase description of oscillatory convection in Hele-Shaw cells. It enables us to describe the dynamics of the oscillatory convection by a single degree of freedom which we call the collective phase. The theory can be considered as a phase reduction method for limit-cycle solutions in infinite-dimensional dynamical systems, namely, stable time-periodic solutions to partial differential equations, representing the oscillatory convection. We derive the phase sensitivity function, which quantifies the phase response of the oscillatory convection to weak perturbations applied at each spatial point, and analyze the phase synchronization between two weakly coupled Hele-Shaw cells exhibiting oscillatory convection on the basis of the derived phase equations.
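
    Schematically, the reduced dynamics described above take the standard phase-equation form (notation ours; the paper derives the precise functionals):

    ```latex
    % Collective phase equation for weakly perturbed oscillatory convection.
    \frac{d\Theta}{dt} = \Omega + \epsilon \int Z(x)\, p(x,t)\, dx,
    ```

    where \Theta is the collective phase, \Omega the natural frequency of the oscillatory convection, Z(x) the phase sensitivity function, and \epsilon\,p(x,t) the weak perturbation applied at each spatial point.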

  5. Mutual Information Item Selection Method in Cognitive Diagnostic Computerized Adaptive Testing with Short Test Length

    ERIC Educational Resources Information Center

    Wang, Chun

    2013-01-01

    Cognitive diagnostic computerized adaptive testing (CD-CAT) purports to combine the strengths of both CAT and cognitive diagnosis. Cognitive diagnosis models aim at classifying examinees into the correct mastery profile group so as to pinpoint the strengths and weakness of each examinee whereas CAT algorithms choose items to determine those…

  6. An Immune Agent for Web-Based AI Course

    ERIC Educational Resources Information Center

    Gong, Tao; Cai, Zixing

    2006-01-01

    To overcome the weaknesses and faults of a web-based e-learning course such as Artificial Intelligence (AI), an immune agent was proposed, simulating the natural immune mechanism against viruses. The immune agent was built on the multi-dimension education agent model and an immune algorithm. The web-based AI course comprised many files, such as HTML…

  7. An effective intervention algorithm for promoting cooperation in the prisoner's dilemma game with multiple stable states

    NASA Astrophysics Data System (ADS)

    Li, Y. S.; Xu, C.; Hui, P. M.

    2018-07-01

    Multiple stable states, hysteresis, sensitivity to initial distributions, and a control algorithm for promoting cooperation are studied in an evolutionary prisoner's dilemma with agents connected into a regular random network. A system could evolve into states of different cooperative frequencies xc in different runs, even starting with the same initial cooperative frequency xc(in) and payoff parameters. For a large reward R, some values of xc(in) take the system either to a group of low cooperative frequency (LCF) states or to a few high cooperative frequency (HCF) states. These states differ in their network structures, with cooperative players connected into ring-like structures in LCF states and compact clusters in HCF states. Hysteresis in xc is observed when R is swept down and up, with the final state at the previous R used as the initial state for the next R. The analysis led us to propose a closed pack cluster algorithm that gives HCF states effectively. The algorithm intervenes in the system at some point in time by selectively switching some non-cooperative D-agents into cooperative C-agents on the periphery of an existing cluster of C-agents. It ensures protection of a small C-cluster from which more cooperation can be induced. Practically, a governing body may first allow a society to evolve freely and then derive suitable policies to promote selected pockets of good practices for attaining a higher level of common good.
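
    The switching step can be sketched as follows (our illustration; the paper's algorithm specifies which cluster to protect, how many agents to switch, and when to intervene):

    ```python
    import random
    import networkx as nx

    def intervene(G, state, n_switch):
        """Switch defectors (D) on the periphery of existing cooperator (C)
        clusters to C, protecting a seed from which cooperation can spread.
        Sketch of the mechanism only."""
        cooperators = {v for v in G if state[v] == "C"}
        periphery = {u for v in cooperators for u in G.neighbors(v)
                     if state[u] == "D"}
        for u in random.sample(sorted(periphery), min(n_switch, len(periphery))):
            state[u] = "C"
        return state

    G = nx.random_regular_graph(4, 200, seed=1)          # regular random network
    state = {v: ("C" if random.random() < 0.3 else "D") for v in G}
    state = intervene(G, state, n_switch=10)
    ```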

  8. A Biogeography-Based Optimization Algorithm Hybridized with Tabu Search for the Quadratic Assignment Problem

    PubMed Central

    Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah

    2016-01-01

    The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of the migration strategy of species to derive an algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome this weakness of the classical BBO algorithm for QAP, by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them. PMID:26819585
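
    The control flow of the hybrid can be sketched as below (a generic minimization sketch with list-encoded solutions; QAP-specific permutation encodings and delta-cost evaluations are omitted):

    ```python
    import random

    def bbo_tabu(fitness, init_pop, iters=100, tabu_len=20):
        """BBO migration for intensification, with the mutation operator
        replaced by a short tabu-search pass, as the abstract describes."""
        pop = sorted(init_pop, key=fitness)        # minimize fitness
        for _ in range(iters):
            n = len(pop)
            # Migration: worse-ranked solutions copy features from better ones
            for i, sol in enumerate(pop):
                immigration = i / (n - 1)          # worse rank -> higher rate
                for j in range(len(sol)):
                    if random.random() < immigration:
                        donor = pop[random.randrange(i + 1)]  # biased to good ones
                        sol[j] = donor[j]
            # Tabu search replaces mutation: local swaps, recent moves forbidden
            for sol in pop:
                tabu = []
                for _ in range(tabu_len):
                    a, b = random.sample(range(len(sol)), 2)
                    if (a, b) in tabu:
                        continue
                    cand = sol[:]
                    cand[a], cand[b] = cand[b], cand[a]
                    if fitness(cand) < fitness(sol):
                        sol[:] = cand
                        tabu = (tabu + [(a, b)])[-tabu_len:]
                pop.sort(key=fitness)
        return pop[0]
    ```

    The design point is simply that the randomized mutation, which destroys good QAP assignments, is replaced by a memory-guided local search that improves them.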

  10. Continuous data assimilation for the three-dimensional Brinkman-Forchheimer-extended Darcy model

    NASA Astrophysics Data System (ADS)

    Markowich, Peter A.; Titi, Edriss S.; Trabelsi, Saber

    2016-04-01

    In this paper we introduce and analyze an algorithm for continuous data assimilation for a three-dimensional Brinkman-Forchheimer-extended Darcy (3D BFeD) model of porous media. This model is believed to be accurate when the flow velocity is too large for Darcy’s law to be valid, and additionally the porosity is not too small. The algorithm is inspired by ideas developed for designing finite-parameters feedback control for dissipative systems. It aims to obtain improved estimates of the state of the physical system by incorporating deterministic or noisy measurements and observations. Specifically, the algorithm involves a feedback control that nudges the large scales of the approximate solution toward those of the reference solution associated with the spatial measurements. In the first part of the paper, we present a few results of existence and uniqueness of weak and strong solutions of the 3D BFeD system. The second part is devoted to the convergence analysis of the data assimilation algorithm.
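
    The feedback-control ("nudging") structure mentioned above can be written schematically as follows (our rendering; \mu > 0 is a relaxation parameter and I_h an interpolation operator built from the coarse spatial measurements):

    ```latex
    % Nudging form of the continuous data-assimilation algorithm.
    \frac{dv}{dt} = F(v) - \mu \bigl( I_h(v) - I_h(u) \bigr),
    ```

    where u is the unknown reference solution of the 3D BFeD system, observed only through I_h(u), and v is the computed approximation that the feedback term drives toward u at large scales.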

  11. Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.

    PubMed

    Harikumar, G; Bresler, Y

    1999-01-01

    We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects, and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.
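
    The identity underlying the subspace-based estimators, stated here for two noise-free channels (the paper's construction is more general), is the cross-relation: since convolution commutes,

    ```latex
    y_1 \ast h_2 = (x \ast h_1) \ast h_2 = (x \ast h_2) \ast h_1 = y_2 \ast h_1,
    ```

    so stacking y_1 \ast h_2 - y_2 \ast h_1 = 0 over all pixels yields a structured linear system whose null space, generically one-dimensional, gives the filters up to a common scale factor.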

  12. Real-time particulate mass measurement based on laser scattering

    NASA Astrophysics Data System (ADS)

    Rentz, Julia H.; Mansur, David; Vaillancourt, Robert; Schundler, Elizabeth; Evans, Thomas

    2005-11-01

    OPTRA has developed a new approach to the determination of particulate size distribution from a measured, composite, laser angular scatter pattern. Drawing from the field of infrared spectroscopy, OPTRA has employed a multicomponent analysis technique which uniquely recognizes patterns associated with each particle size "bin" over a broad range of sizes. The technique is particularly appropriate for overlapping patterns where large signals are potentially obscuring weak ones. OPTRA has also investigated a method for accurately training the algorithms without the use of representative particles for any given application. This streamlined calibration applies a one-time measured "instrument function" to theoretical Mie patterns to create the training data for the algorithms. OPTRA has demonstrated this algorithmic technique on a compact, rugged, laser scatter sensor head that we developed for gas turbine engine emissions measurements. The sensor contains a miniature violet solid state laser and an array of silicon photodiodes, both of which are commercial off-the-shelf. The algorithmic technique can also be used with any commercially available laser scatter system.
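
    In outline, the multicomponent analysis amounts to nonnegative unmixing of the composite pattern against a library of per-bin reference patterns; a sketch with synthetic stand-ins for the Mie library (the real column vectors would be instrument-function-convolved Mie patterns):

    ```python
    import numpy as np
    from scipy.optimize import nnls

    angles, bins = 64, 8
    rng = np.random.default_rng(1)
    A = np.abs(rng.normal(size=(angles, bins)))          # stand-in Mie library
    true_conc = np.array([0, 0, 3.0, 1.0, 0, 0.2, 0, 0])
    y = A @ true_conc + 0.01 * rng.normal(size=angles)   # composite scatter pattern

    conc, residual = nnls(A, y)   # nonnegative per-bin concentrations
    print(np.round(conc, 2))
    ```

    Nonnegativity is what lets weak size bins survive next to strong, overlapping ones instead of being cancelled by sign-flipping least-squares artifacts.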

  13. Use of Collocated KWAJEX Satellite, Aircraft, and Ground Measurements for Understanding Ambiguities in TRMM Radiometer Rain Profile Algorithm

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.; Fiorino, Steven

    2002-01-01

    Coordinated ground, aircraft, and satellite observations are analyzed from the 1999 TRMM Kwajalein Atoll field experiment (KWAJEX) to better understand the relationships between cloud microphysical processes and microwave radiation intensities in the context of physical evaluation of the Level 2 TRMM radiometer rain profile algorithm and uncertainties with its assumed microphysics-radiation relationships. This talk focuses on the results of a multi-dataset analysis based on measurements from KWAJEX surface, air, and satellite platforms to test the hypothesis that uncertainties in the passive microwave radiometer algorithm (TMI 2a12 in the nomenclature of TRMM) are systematically coupled and correlated with the magnitudes of deviation of the assumed 3-dimensional microphysical properties from observed microphysical properties. Restated, this study focuses on identifying the weaknesses in the operational TRMM 2a12 radiometer algorithm, based on observed microphysics and radiation data, in terms of over-simplifications used in its theoretical microphysical underpinnings. The analysis makes use of a common transform coordinate system derived from the measuring capabilities of the aircraft radiometer used to survey the experimental study area, i.e., the 4-channel AMPR radiometer flown on the NASA DC-8 aircraft. Normalized emission and scattering indices derived from radiometer brightness temperatures at the four measuring frequencies enable a 2-dimensional coordinate system that facilitates compositing of Kwajalein S-band ground radar reflectivities, ARMAR Ku-band aircraft radar reflectivities, TMI spacecraft radiometer brightness temperatures, PR Ku-band spacecraft radar reflectivities, bulk microphysical parameters derived from the aircraft-mounted cloud microphysics laser probes (including liquid/ice water contents, effective liquid/ice hydrometeor radii, and effective liquid/ice hydrometeor variances), and rainrates derived from any of the individual ground, aircraft, or satellite algorithms applied to the radar or radiometer measurements, or their combination. The results support the study's underlying hypothesis, particularly in the context of ice-phase processes, in that the cloud regions where the 2a12 algorithm's microphysical database most misrepresents the microphysical conditions as determined by the laser probes are where retrieved surface rainrates are most erroneous relative to other reference rainrates as determined by ground and aircraft radar. In reaching these conclusions, TMI and PR brightness temperatures and reflectivities have been synthesized from the aircraft AMPR and ARMAR measurements, with the analysis conducted in a composite framework to eliminate measurement noise associated with the case study approach and single element volumes obfuscated by heterogeneous beam filling effects. In diagnosing the performance of the 2a12 algorithm, weaknesses have been found in the cloud-radiation database used to provide microphysical guidance to the algorithm for upper cloud ice microphysics. It is also necessary to adjust a fractional convective rainfall factor within the algorithm somewhat arbitrarily to achieve satisfactory algorithm accuracy.

  14. Personalized recommendation based on unbiased consistence

    NASA Astrophysics Data System (ADS)

    Zhu, Xuzhen; Tian, Hui; Zhang, Ping; Hu, Zheng; Zhou, Tao

    2015-08-01

    Recently, in physical dynamics, mass-diffusion-based recommendation algorithms on bipartite networks have provided an efficient solution by automatically pushing possibly relevant items to users according to their past preferences. However, traditional mass-diffusion-based algorithms focus only on unidirectional mass diffusion, from objects already collected to those to be recommended, resulting in a biased causal similarity estimation and mediocre performance. In this letter, we argue that in many cases a user's interests are stable, and thus the bidirectional mass diffusion abilities, whether originating from objects already collected or from those to be recommended, should be consistently strong, showing unbiased consistence. We further propose a consistence-based mass diffusion algorithm via bidirectional diffusion against biased causality, which outperforms state-of-the-art recommendation algorithms on disparate real data sets, including Netflix, MovieLens, Amazon and Rate Your Music.
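
    A sketch of the diffusion computation (the symmetrization step is our guess at the "bidirectional" idea, not necessarily the paper's exact formula):

    ```python
    import numpy as np

    def mass_diffusion_scores(R, u, bidirectional=True):
        """Mass-diffusion recommendation on a bipartite adjacency R
        (users x objects). The classic resource transfer runs
        object -> user -> object; the bidirectional variant sketched here
        symmetrizes the two diffusion directions."""
        ku = R.sum(axis=1, keepdims=True)   # user degrees
        ko = R.sum(axis=0, keepdims=True)   # object degrees
        ku[ku == 0] = 1
        ko[ko == 0] = 1
        W = (R / ko).T @ (R / ku)           # object-object transfer matrix
        scores = W @ R[u]                   # diffuse from user u's collected objects
        if bidirectional:
            scores = np.sqrt(scores * (W.T @ R[u]))   # combine both directions
        scores[R[u] > 0] = -np.inf          # never re-recommend collected items
        return scores
    ```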

  15. Motion Planning and Synthesis of Human-Like Characters in Constrained Environments

    NASA Astrophysics Data System (ADS)

    Zhang, Liangjun; Pan, Jia; Manocha, Dinesh

    We give an overview of our recent work on generating natural-looking human motion in constrained environments with multiple obstacles. This includes a whole-body motion planning algorithm for high-DOF human-like characters. The planning problem is decomposed into a sequence of low-dimensional sub-problems. We use a constrained coordination scheme to solve the sub-problems incrementally and a local path refinement algorithm to compute collision-free paths in tight spaces while satisfying the static stability constraint on the CoM. We also present a hybrid algorithm that generates plausible motion by combining the motion computed by our planner with mocap data. We demonstrate the performance of our algorithm on a 40-DOF human-like character and generate efficient motion strategies for object placement, bending, walking, and lifting in complex environments.

  16. The Design and Implementation of Indoor Localization System Using Magnetic Field Based on Smartphone

    NASA Astrophysics Data System (ADS)

    Liu, J.; Jiang, C.; Shi, Z.

    2017-09-01

    Mainstream indoor-localization research mostly requires a sufficient number of signal nodes. The magnetic field, by contrast, offers high precision, stability, and reliability, and receiving magnetic field signals is simple and dependable: it can be done with the geomagnetic sensor of a smartphone, without any external device. After a study of indoor positioning technologies, we chose geomagnetic field data as fingerprints to design a smartphone-based indoor localization system. We designed a localization algorithm with appropriate geomagnetic matching, and present the filtering algorithm and the coordinate-conversion algorithm. With the surveyed geomagnetic fingerprints in place, indoor positioning of a smartphone can be achieved without depending on external devices. Finally, an indoor positioning system based on the Android platform was successfully built, and experiments proved the capability and effectiveness of the indoor localization algorithm.
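
    The matching stage can be as simple as a weighted nearest-neighbor search over surveyed fingerprints; a minimal sketch with made-up survey data (the paper's system additionally filters the sensor stream and converts coordinates):

    ```python
    import numpy as np

    def locate(db_coords, db_fingerprints, reading, k=3):
        """Weighted k-nearest-neighbor geomagnetic fingerprint matching."""
        d = np.linalg.norm(db_fingerprints - reading, axis=1)  # magnetic distance
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] + 1e-9)
        return (w[:, None] * db_coords[idx]).sum(axis=0) / w.sum()

    # Assumed survey data: 3-axis field components per surveyed grid point
    coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    prints = np.array([[32.1, -5.0, 41.2], [30.8, -4.2, 40.0],
                       [33.0, -6.1, 42.5], [31.5, -5.5, 40.9]])
    print(locate(coords, prints, np.array([31.0, -4.5, 40.3])))
    ```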

  17. Predicting warfarin dosage in European–Americans and African–Americans using DNA samples linked to an electronic health record

    PubMed Central

    Ramirez, Andrea H; Shi, Yaping; Schildcrout, Jonathan S; Delaney, Jessica T; Xu, Hua; Oetjens, Matthew T; Zuvich, Rebecca L; Basford, Melissa A; Bowton, Erica; Jiang, Min; Speltz, Peter; Zink, Raquel; Cowan, James; Pulley, Jill M; Ritchie, Marylyn D; Masys, Daniel R; Roden, Dan M; Crawford, Dana C; Denny, Joshua C

    2012-01-01

    Aim Warfarin pharmacogenomic algorithms reduce dosing error, but perform poorly in non-European–Americans. Electronic health record (EHR) systems linked to biobanks may allow for pharmacogenomic analysis, but they have not yet been used for this purpose. Patients & methods We used BioVU, the Vanderbilt EHR-linked DNA repository, to identify European–Americans (n = 1022) and African–Americans (n = 145) on stable warfarin therapy and evaluated the effect of 15 pharmacogenetic variants on stable warfarin dose. Results Associations with weekly dose were observed for variants in VKORC1, CYP2C9 and CYP4F2 in European–Americans, as well as for additional variants in CYP2C9 and CALU in African–Americans. Compared with traditional 5 mg/day dosing, implementing the US FDA recommendations or the International Warfarin Pharmacogenomics Consortium (IWPC) algorithm reduced error in weekly dose in European–Americans (from 13.5 to 12.4 and 9.5 mg/week, respectively) but less so in African–Americans (from 15.2 to 15.0 and 13.8 mg/week, respectively). By further incorporating associated variants specific for European–Americans and African–Americans in an expanded algorithm, dose-prediction error reduced to 9.1 mg/week (95% CI: 8.4–9.6) in European–Americans and 12.4 mg/week (95% CI: 10.0–13.2) in African–Americans. The expanded algorithm explained 41 and 53% of dose variation in African–Americans and European–Americans, respectively, compared with 29 and 50%, respectively, for the IWPC algorithm. Implementing these predictions via dispensable pill regimens similarly reduced dosing error. Conclusion These results validate EHR-linked DNA biorepositories as real-world resources for pharmacogenomic validation and discovery. PMID:22329724

  18. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar.

    PubMed

    Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun; Huang, Yuan-Hao

    2018-04-05

    Complementary metal-oxide-semiconductor (CMOS) radar has recently gained much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in a home-care internet-of-things sensing system. The challenges for low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. A compressive sensing-based detection algorithm can relax the computational cost by avoiding matched filters and reducing the analog-to-digital converter bandwidth requirement. Orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, its complexity is still very high because the high resolution required for human respiration leads to high-dimensional signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only has 75% lower complexity than the OMP algorithm but also achieves better positioning performance, especially in noisy environments. This study also designed and implemented the algorithm on a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support real-time 256 × 13 radar image display with a throughput of 28.2 frames per second.
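
    For reference, the OMP baseline that the two-stage processor is compared against looks like this (textbook form, not the paper's implementation):

    ```python
    import numpy as np

    def omp(A, y, sparsity):
        """Orthogonal matching pursuit: greedily pick the dictionary atom
        most correlated with the residual, then re-fit by least squares on
        the chosen support."""
        residual, support = y.copy(), []
        for _ in range(sparsity):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_s
        x = np.zeros(A.shape[1])
        x[support] = x_s
        return x
    ```

    The per-iteration least-squares re-fit over a growing support is what makes OMP expensive at the high range/respiration resolution mentioned above, which motivates splitting the reconstruction into two cheaper stages.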

  19. Review of TRMM/GPM Rainfall Algorithm Validation

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.

    2004-01-01

    A review is presented concerning current progress on evaluation and validation of standard Tropical Rainfall Measuring Mission (TRMM) precipitation retrieval algorithms and the prospects for implementing an improved validation research program for the next generation Global Precipitation Measurement (GPM) Mission. All standard TRMM algorithms are physical in design, and are thus based on fundamental principles of microwave radiative transfer and its interaction with semi-detailed cloud microphysical constituents. They are evaluated for consistency and degree of equivalence with one another, as well as intercompared to radar-retrieved rainfall at TRMM's four main ground validation sites. Similarities and differences are interpreted in the context of the radiative and microphysical assumptions underpinning the algorithms. Results indicate that the current accuracies of the TRMM Version 6 algorithms are approximately 15% at zonal-averaged / monthly scales with precisions of approximately 25% for full resolution / instantaneous rain rate estimates (i.e., level 2 retrievals). Strengths and weaknesses of the TRMM validation approach are summarized. Because the degree of convergence of the level 2 TRMM algorithms is being used as a guide for setting validation requirements for the GPM mission, it is important that the GPM algorithm validation program be improved to ensure concomitant improvement in the standard GPM retrieval algorithms. An overview of the GPM Mission's validation plan is provided, including a description of a new type of physical validation model using an analytic 3-dimensional radiative transfer model.

  20. Evaluation schemes for video and image anomaly detection algorithms

    NASA Astrophysics Data System (ADS)

    Parameswaran, Shibin; Harguess, Josh; Barngrover, Christopher; Shafer, Scott; Reese, Michael

    2016-05-01

    Video anomaly detection is a critical research area in computer vision. It is a natural first step before applying object recognition algorithms. There are many algorithms that detect anomalies (outliers) in videos and images that have been introduced in recent years. However, these algorithms behave and perform differently based on differences in the domains and tasks to which they are subjected. In order to better understand the strengths and weaknesses of outlier algorithms and their applicability in a particular domain or task of interest, it is important to measure and quantify their performance using appropriate evaluation metrics. Many evaluation metrics have been used in the literature, such as precision curves, precision-recall curves, and receiver operating characteristic (ROC) curves. In order to construct these different metrics, it is also important to choose an appropriate evaluation scheme that decides when a proposed detection is considered a true or a false detection. Choosing the right evaluation metric and the right scheme is very critical, since the choice can introduce positive or negative bias in the measuring criterion and may favor (or work against) a particular algorithm or task. In this paper, we review evaluation metrics and popular evaluation schemes that are used to measure the performance of anomaly detection algorithms on videos and imagery with one or more anomalies. We analyze the biases introduced by these choices by measuring the performance of an existing anomaly detection algorithm.
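
    For instance, once an evaluation scheme has mapped each proposed detection to a 0/1 label, one precision-recall point is computed as below; the scheme's choice of what counts as a true detection (any overlap versus an IoU cutoff with the annotated anomaly, say), not this arithmetic, is where the bias discussed above enters:

    ```python
    import numpy as np

    def precision_recall(scores, labels, threshold):
        """One point on a precision-recall curve. `labels` are the 0/1
        outcomes assigned by the chosen evaluation scheme; `scores` are the
        detector's confidences."""
        pred = scores >= threshold
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        precision = tp / max(tp + fp, 1)
        recall = tp / max(tp + fn, 1)
        return precision, recall
    ```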
