Science.gov

Sample records for algorithm consistently outperforms

  1. Acoustic diagnosis of pulmonary hypertension: automated speech-recognition-inspired classification algorithm outperforms physicians.

    PubMed

    Kaddoura, Tarek; Vadlamudi, Karunakar; Kumar, Shine; Bobhate, Prashant; Guo, Long; Jain, Shreepal; Elgendi, Mohamed; Coe, James Y; Kim, Daniel; Taylor, Dylan; Tymchak, Wayne; Schuurmans, Dale; Zemp, Roger J; Adatia, Ian

    2016-01-01

    We hypothesized that an automated speech-recognition-inspired classification algorithm could differentiate between the heart sounds in subjects with and without pulmonary hypertension (PH) and outperform physicians. Heart sounds, electrocardiograms, and mean pulmonary artery pressures (mPAp) were recorded simultaneously. Heart sound recordings were digitized to train and test speech-recognition-inspired classification algorithms. We used mel-frequency cepstral coefficients to extract features from the heart sounds. Gaussian-mixture models classified the features as PH (mPAp ≥ 25 mmHg) or normal (mPAp < 25 mmHg). Physicians blinded to patient data listened to the same heart sound recordings and attempted a diagnosis. We studied 164 subjects: 86 with mPAp ≥ 25 mmHg (mPAp 41 ± 12 mmHg) and 78 with mPAp < 25 mmHg (mPAp 17 ± 5 mmHg) (p < 0.005). The correct diagnostic rate of the automated speech-recognition-inspired algorithm was 74% compared to 56% by physicians (p = 0.005). The false positive rate for the algorithm was 34% versus 50% (p = 0.04) for clinicians. The false negative rate for the algorithm was 23% and 68% (p = 0.0002) for physicians. We developed an automated speech-recognition-inspired classification algorithm for the acoustic diagnosis of PH that outperforms physicians and could be used to screen for PH and encourage earlier specialist referral. PMID:27609672
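
    The pipeline described above (frame-wise mel-frequency cepstral coefficients fed into per-class Gaussian mixture models) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the library choices (librosa, scikit-learn), sampling rate, number of coefficients, and number of mixture components are assumptions.

    ```python
    import numpy as np
    import librosa
    from sklearn.mixture import GaussianMixture

    def mfcc_features(wav_path, sr=4000, n_mfcc=13):
        """Load a heart-sound recording and return frame-wise MFCC vectors."""
        y, sr = librosa.load(wav_path, sr=sr)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # (n_frames, n_mfcc)

    def train_class_gmm(feature_list, n_components=8):
        """Fit one GMM on the pooled MFCC frames of all recordings in a class."""
        X = np.vstack(feature_list)
        return GaussianMixture(n_components=n_components, covariance_type="diag").fit(X)

    def classify(features, gmm_ph, gmm_normal):
        """Label a recording PH vs. normal by comparing per-frame log-likelihoods."""
        return "PH" if gmm_ph.score(features) > gmm_normal.score(features) else "normal"
    ```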

  2. Acoustic diagnosis of pulmonary hypertension: automated speech-recognition-inspired classification algorithm outperforms physicians

    PubMed Central

    Kaddoura, Tarek; Vadlamudi, Karunakar; Kumar, Shine; Bobhate, Prashant; Guo, Long; Jain, Shreepal; Elgendi, Mohamed; Coe, James Y; Kim, Daniel; Taylor, Dylan; Tymchak, Wayne; Schuurmans, Dale; Zemp, Roger J.; Adatia, Ian

    2016-01-01

    We hypothesized that an automated speech-recognition-inspired classification algorithm could differentiate between the heart sounds in subjects with and without pulmonary hypertension (PH) and outperform physicians. Heart sounds, electrocardiograms, and mean pulmonary artery pressures (mPAp) were recorded simultaneously. Heart sound recordings were digitized to train and test speech-recognition-inspired classification algorithms. We used mel-frequency cepstral coefficients to extract features from the heart sounds. Gaussian-mixture models classified the features as PH (mPAp ≥ 25 mmHg) or normal (mPAp < 25 mmHg). Physicians blinded to patient data listened to the same heart sound recordings and attempted a diagnosis. We studied 164 subjects: 86 with mPAp ≥ 25 mmHg (mPAp 41 ± 12 mmHg) and 78 with mPAp < 25 mmHg (mPAp 17 ± 5 mmHg) (p < 0.005). The correct diagnostic rate of the automated speech-recognition-inspired algorithm was 74% compared to 56% by physicians (p = 0.005). The false positive rate for the algorithm was 34% versus 50% (p = 0.04) for clinicians. The false negative rate for the algorithm was 23% and 68% (p = 0.0002) for physicians. We developed an automated speech-recognition-inspired classification algorithm for the acoustic diagnosis of PH that outperforms physicians and could be used to screen for PH and encourage earlier specialist referral. PMID:27609672

  3. Bayesian Markov models consistently outperform PWMs at predicting motifs in nucleotide sequences.

    PubMed

    Siebert, Matthias; Söding, Johannes

    2016-07-27

    Position weight matrices (PWMs) are the standard model for DNA and RNA regulatory motifs. In PWMs nucleotide probabilities are independent of nucleotides at other positions. Models that account for dependencies need many parameters and are prone to overfitting. We have developed a Bayesian approach for motif discovery using Markov models in which conditional probabilities of order k - 1 act as priors for those of order k. This Bayesian Markov model (BaMM) training automatically adapts model complexity to the amount of available data. We also derive an EM algorithm for de-novo discovery of enriched motifs. For transcription factor binding, BaMMs achieve significantly (P = 1/16) higher cross-validated partial AUC than PWMs in 97% of 446 ChIP-seq ENCODE datasets and improve performance by 36% on average. BaMMs also learn complex multipartite motifs, improving predictions of transcription start sites, polyadenylation sites, bacterial pause sites, and RNA binding sites by 26-101%. BaMMs never performed worse than PWMs. These robust improvements argue in favour of generally replacing PWMs by BaMMs. PMID:27288444
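
    A minimal sketch of the central idea, lower-order conditional probabilities acting as priors for higher-order ones, is given below. It is a homogeneous (position-independent) simplification of the position-specific BaMM model; the pseudocount weight alpha and the data structures are illustrative assumptions, not the authors' implementation.

    ```python
    from collections import defaultdict

    def train_bamm_like(sequences, k=2, alpha=20.0):
        """Interpolated Markov model: order-(k-1) estimates act as priors for order k."""
        counts = [defaultdict(lambda: defaultdict(float)) for _ in range(k + 1)]
        for seq in sequences:
            for i, base in enumerate(seq):
                for order in range(k + 1):
                    if i >= order:
                        ctx = seq[i - order:i]          # preceding `order` bases
                        counts[order][ctx][base] += 1.0

        def prob(order, ctx, base):
            if order == 0:
                tot = sum(counts[0][""].values())
                return counts[0][""][base] / tot if tot else 0.25
            c = counts[order][ctx]
            tot = sum(c.values())
            prior = prob(order - 1, ctx[1:], base)      # lower-order estimate as the prior
            return (c[base] + alpha * prior) / (tot + alpha)

        return prob

    # Usage: p = train_bamm_like(["ACGTACGT", "ACGAACGT"], k=2); p(2, "AC", "G")
    ```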

  4. A new graph model and algorithms for consistent superstring problems

    PubMed Central

    Na, Joong Chae; Cho, Sukhyeun; Choi, Siwon; Kim, Jin Wook; Park, Kunsoo; Sim, Jeong Seop

    2014-01-01

    Problems related to string inclusion and non-inclusion have been vigorously studied in diverse fields such as data compression, molecular biology and computer security. Given a finite set of positive strings and a finite set of negative strings, a string α is a consistent superstring if every positive string is a substring of α and no negative string is a substring of α. The shortest (resp. longest) consistent superstring problem is to find a string α that is the shortest (resp. longest) among all the consistent superstrings for the given sets of strings. In this paper, we first propose a new graph model for consistent superstrings for the given positive and negative sets. In our graph model, the set of strings represented by paths satisfying some conditions is the same as the set of consistent superstrings for the given sets. We also present algorithms for the shortest and the longest consistent superstring problems. Our algorithms solve the consistent superstring problems for all cases, including cases that are not considered in previous work. Moreover, our algorithms solve in polynomial time the consistent superstring problems for more cases than the previous algorithms. For the polynomially solvable cases, our algorithms are more efficient than the previous ones. PMID:24751868
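
    The definition itself is easy to state in code; the following check of a candidate consistent superstring only illustrates the problem statement, not the paper's graph-based algorithm.

    ```python
    def is_consistent_superstring(alpha, positives, negatives):
        """True if alpha contains every positive string and no negative string."""
        return all(p in alpha for p in positives) and not any(n in alpha for n in negatives)

    # Example: "abcab" is consistent with positives {"abc", "cab"} and negatives {"bb"}.
    assert is_consistent_superstring("abcab", {"abc", "cab"}, {"bb"})
    ```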

  5. A consistent-mode indicator for the eigensystem realization algorithm

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Elliott, Kenny B.; Schenk, Axel

    1992-01-01

    A new method is described for assessing the consistency of model parameters identified with the Eigensystem Realization Algorithm (ERA). Identification results show varying consistency in practice due to many sources, including high modal density, nonlinearity, and inadequate excitation. Consistency is considered to be a reliable indicator of accuracy. The new method is the culmination of many years of experience in developing a practical implementation of the Eigensystem Realization Algorithm. The effectiveness of the method is illustrated using data from NASA Langley's Controls-Structures-Interaction Evolutionary Model.

  6. The strobe algorithms for multi-source warehouse consistency

    SciTech Connect

    Zhuge, Yue; Garcia-Molina, H.; Wiener, J.L.

    1996-12-31

    A warehouse is a data repository containing integrated information for efficient querying and analysis. Maintaining the consistency of warehouse data is challenging, especially if the data sources are autonomous and views of the data at the warehouse span multiple sources. Transactions containing multiple updates at one or more sources, e.g., batch updates, complicate the consistency problem. In this paper we identify and discuss three fundamental transaction processing scenarios for data warehousing. We define four levels of consistency for warehouse data and present a new family of algorithms, the Strobe family, that maintain consistency as the warehouse is updated, under the various warehousing scenarios. All of the algorithms are incremental and can handle a continuous and overlapping stream of updates from the sources. Our implementation shows that the algorithms are practical and realistic choices for a wide variety of update scenarios.

  7. Formal verification of an oral messages algorithm for interactive consistency

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1992-01-01

    The formal specification and verification of an algorithm for Interactive Consistency based on the Oral Messages algorithm for Byzantine Agreement is described. We compare our treatment with that of Bevier and Young, who presented a formal specification and verification for a very similar algorithm. Unlike Bevier and Young, who observed that 'the invariant maintained in the recursive subcases of the algorithm is significantly more complicated than is suggested by the published proof' and who found its formal verification 'a fairly difficult exercise in mechanical theorem proving,' our treatment is very close to the previously published analysis of the algorithm, and our formal specification and verification are straightforward. This example illustrates how delicate choices in the formulation of the problem can have significant impact on the readability of its formal specification and on the tractability of its formal verification.

  8. CD4 Count Outperforms World Health Organization Clinical Algorithm for Point-of-Care HIV Diagnosis among Hospitalized HIV-exposed Malawian Infants

    PubMed Central

    Maliwichi, Madalitso; Rosenberg, Nora E.; Macfie, Rebekah; Olson, Dan; Hoffman, Irving; van der Horst, Charles M.; Kazembe, Peter N.; Hosseinipour, Mina C.; McCollum, Eric D.

    2014-01-01

    Objective: To determine, for the WHO algorithm for point-of-care diagnosis of HIV infection, the agreement levels between pediatricians and non-physician clinicians, and to compare sensitivity and specificity profiles of the WHO algorithm and different CD4 thresholds against HIV PCR testing in hospitalized Malawian infants. Methods: In 2011, hospitalized HIV-exposed infants <12 months in Lilongwe, Malawi were evaluated independently with the WHO algorithm by both a pediatrician and a clinical officer. Blood was collected for CD4 and molecular HIV testing (DNA or RNA PCR). Using molecular testing as the reference, sensitivity, specificity, and positive predictive value (PPV) were determined for the WHO algorithm and CD4 count thresholds of 1500 and 2000 cells/mm3 by pediatricians and clinical officers. Results: We enrolled 166 infants (50% female, 34% <2 months, 37% HIV-infected). Sensitivity was higher using CD4 thresholds (<1500, 80%; <2000, 95%) than with the algorithm (pediatricians, 57%; clinical officers, 71%). Specificity was comparable for CD4 thresholds (<1500, 68%; <2000, 50%) and the algorithm (pediatricians, 55%; clinical officers, 50%). The positive predictive values were slightly better using CD4 thresholds (<1500, 59%; <2000, 52%) than the algorithm (pediatricians, 43%; clinical officers, 45%) at this prevalence. Conclusion: Performance by the WHO algorithm and CD4 thresholds resulted in many misclassifications. Point-of-care CD4 thresholds of <1500 cells/mm3 or <2000 cells/mm3 could identify more HIV-infected infants with fewer false positives than the algorithm. However, a point-of-care option with better performance characteristics is needed for accurate, timely HIV diagnosis. PMID:24754543
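
    For reference, the screening metrics quoted above are computed from a 2x2 table of counts as in the sketch below; the example counts are hypothetical placeholders, not the study's data.

    ```python
    def screening_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity and PPV from true/false positive and negative counts."""
        sensitivity = tp / (tp + fn)   # fraction of infected infants correctly flagged
        specificity = tn / (tn + fp)   # fraction of uninfected infants correctly cleared
        ppv = tp / (tp + fp)           # probability that a positive screen is truly infected
        return sensitivity, specificity, ppv

    # Hypothetical counts, for illustration only (not the study's data):
    print(screening_metrics(tp=49, fp=33, fn=12, tn=72))
    ```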

  9. Variationally consistent discretization schemes and numerical algorithms for contact problems

    NASA Astrophysics Data System (ADS)

    Wohlmuth, Barbara

    We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has a low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of

  10. A syncopated leap-frog algorithm for orbit consistent plasma simulation of materials processing reactors

    SciTech Connect

    Cobb, J.W.; Leboeuf, J.N.

    1994-10-01

    The authors present a particle algorithm to extend simulation capabilities for plasma based materials processing reactors. The orbit integrator uses a syncopated leap-frog algorithm in cylindrical coordinates, which maintains second order accuracy, and minimizes computational complexity. Plasma source terms are accumulated orbit consistently directly in the frequency and azimuthal mode domains. Finally they discuss the numerical analysis of this algorithm. Orbit consistency greatly reduces the computational cost for a given level of precision. The computational cost is independent of the degree of time scale separation.

  11. New multicategory boosting algorithms based on multicategory Fisher-consistent losses

    PubMed Central

    Zou, Hui; Zhu, Ji; Hastie, Trevor

    2016-01-01

    Fisher-consistent loss functions play a fundamental role in the construction of successful binary margin-based classifiers. In this paper we establish the Fisher-consistency condition for multicategory classification problems. Our approach uses the margin vector concept which can be regarded as a multicategory generalization of the binary margin. We characterize a wide class of smooth convex loss functions that are Fisher-consistent for multicategory classification. We then consider using the margin-vector-based loss functions to derive multicategory boosting algorithms. In particular, we derive two new multicategory boosting algorithms by using the exponential and logistic regression losses.

  12. A formally verified algorithm for interactive consistency under a hybrid fault model

    NASA Technical Reports Server (NTRS)

    Lincoln, Patrick; Rushby, John

    1993-01-01

    Consistent distribution of single-source data to replicated computing channels is a fundamental problem in fault-tolerant system design. The 'Oral Messages' (OM) algorithm solves this problem of Interactive Consistency (Byzantine Agreement) assuming that all faults are worst-case. Thambidurai and Park introduced a 'hybrid' fault model that distinguished three fault modes: asymmetric (Byzantine), symmetric, and benign; they also exhibited, along with an informal 'proof of correctness', a modified version of OM. Unfortunately, their algorithm is flawed. The discipline of mechanically checked formal verification eventually enabled us to develop a correct algorithm for Interactive Consistency under the hybrid fault model. This algorithm withstands $a$ asymmetric, $s$ symmetric, and $b$ benign faults simultaneously, using $m+1$ rounds, provided $n > 2a + 2s + b + m$ and $m \geq a$. We present this algorithm, discuss its subtle points, and describe its formal specification and verification in PVS. We argue that formal verification systems such as PVS are now sufficiently effective that their application to fault-tolerance algorithms should be considered routine.
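
    The resilience condition quoted in the abstract can be checked directly; the helper below is only a restatement of that inequality, with illustrative numbers.

    ```python
    def tolerates(n, a, s, b, m):
        """True if n channels running m+1 rounds tolerate a asymmetric, s symmetric, b benign faults."""
        return n > 2 * a + 2 * s + b + m and m >= a

    # Example: 6 channels, 1 asymmetric and 1 benign fault, 2 rounds (m = 1).
    assert tolerates(n=6, a=1, s=0, b=1, m=1)
    ```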

  13. A subgroup algorithm to identify cross-rotation peaks consistent with non-crystallographic symmetry.

    PubMed

    Lilien, Ryan H; Bailey-Kellogg, Chris; Anderson, Amy C; Donald, Bruce R

    2004-06-01

    Molecular replacement (MR) often plays a prominent role in determining initial phase angles for structure determination by X-ray crystallography. In this paper, an efficient quaternion-based algorithm is presented for analyzing peaks from a cross-rotation function in order to identify model orientations consistent with proper non-crystallographic symmetry (NCS) and to generate proper NCS-consistent orientations missing from the list of cross-rotation peaks. The algorithm, CRANS, analyzes the rotation differences between each pair of cross-rotation peaks to identify finite subgroups. Sets of rotation differences satisfying the subgroup axioms correspond to orientations compatible with the correct proper NCS. The CRANS algorithm was first tested using cross-rotation peaks computed from structure-factor data for three test systems and was then used to assist in the de novo structure determination of dihydrofolate reductase-thymidylate synthase (DHFR-TS) from Cryptosporidium hominis. In every case, the CRANS algorithm runs in seconds to identify orientations consistent with the observed proper NCS and to generate missing orientations not present in the cross-rotation peak list. The CRANS algorithm has application in every molecular-replacement phasing effort with proper NCS. PMID:15159565

  14. Two Vectorized Algorithms for the Effective Calculation of Mass-Consistent Flow Fields.

    NASA Astrophysics Data System (ADS)

    Moussiopoulos, N.; Flassak, Th.

    1986-06-01

    The purpose of this paper is the calculation of mass-consistent wind velocity fields over complex orography on the basis of existing measurements. Measured data are used to generate an initial wind velocity field that in general does not satisfy continuity. For the adjustment of this velocity field a three-dimensional elliptic differential equation is solved. A transformation of this equation to a terrain-following coordinate system ensures the proper consideration of the orography. Two numerical algorithms for the solution of the transformed equation are presented. One algorithm makes use of a fast direct elliptic solver based on Fourier analysis, the other utilizes the red-black SOR method. Both algorithms achieve full vectorization on computers like the CYBER 205. A test problem is defined to compare the two algorithms with regard to the computing time: in the case of small terrain roughness, the algorithm using the fast direct elliptic solver is recommended; in the opposite case, the red-black SOR method. Adjusted mass-consistent wind fields are presented for the Athens basin. The results are discussed in view of the elevated pollution levels in Athens; they are in good agreement with observations.
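
    As an illustration of the second solver mentioned above, a minimal red-black SOR sweep for a 2D Poisson-type equation is sketched below; the simple 5-point Laplacian, relaxation factor and boundary conditions are assumptions standing in for the transformed terrain-following operator of the paper.

    ```python
    import numpy as np

    def red_black_sor(f, h, omega=1.8, n_iters=500):
        """Solve the 5-point discrete Poisson problem laplacian(u) = f with u = 0 on the boundary."""
        u = np.zeros_like(f, dtype=float)
        for _ in range(n_iters):
            for parity in (0, 1):                      # red sweep, then black sweep
                for i in range(1, u.shape[0] - 1):
                    for j in range(1, u.shape[1] - 1):
                        if (i + j) % 2 != parity:
                            continue
                        gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1] - h * h * f[i, j])
                        u[i, j] = (1.0 - omega) * u[i, j] + omega * gs
        return u
    ```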

  15. Representation independent algorithms for molecular response calculations in time-dependent self-consistent field theories

    NASA Astrophysics Data System (ADS)

    Tretiak, Sergei; Isborn, Christine M.; Niklasson, Anders M. N.; Challacombe, Matt

    2009-02-01

    Four different numerical algorithms suitable for a linear scaling implementation of time-dependent Hartree-Fock and Kohn-Sham self-consistent field theories are examined. We compare the performance of modified Lanczos, Arnoldi, Davidson, and Rayleigh quotient iterative procedures to solve the random-phase approximation (RPA) (non-Hermitian) and Tamm-Dancoff approximation (TDA) (Hermitian) eigenvalue equations in the molecular orbital-free framework. Semiempirical Hamiltonian models are used to numerically benchmark algorithms for the computation of excited states of realistic molecular systems (conjugated polymers and carbon nanotubes). Convergence behavior and stability are tested with respect to numerical noise imposed to simulate linear scaling conditions. The results single out the most suitable procedures for linear scaling large-scale time-dependent perturbation theory calculations of electronic excitations.

  16. Representation independent algorithms for molecular response calculations in time-dependent self-consistent field theories

    SciTech Connect

    Tretiak, Sergei

    2008-01-01

    Four different numerical algorithms suitable for a linear scaling implementation of time-dependent Hartree-Fock and Kohn-Sham self-consistent field theories are examined. We compare the performance of modified Lanczos, Arnoldi, Davidson, and Rayleigh quotient iterative procedures to solve the random-phase approximation (RPA) (non-Hermitian) and Tamm-Dancoff approximation (TDA) (Hermitian) eigenvalue equations in the molecular orbital-free framework. Semiempirical Hamiltonian models are used to numerically benchmark algorithms for the computation of excited states of realistic molecular systems (conjugated polymers and carbon nanotubes). Convergence behavior and stability are tested with respect to numerical noise imposed to simulate linear scaling conditions. The results single out the most suitable procedures for linear scaling large-scale time-dependent perturbation theory calculations of electronic excitations.

  17. A JFNK-based implicit moment algorithm for self-consistent, multi-scale, plasma simulation

    NASA Astrophysics Data System (ADS)

    Knoll, Dana; Taitano, William; Chacon, Luis

    2010-11-01

    Jacobian-Free-Newton-Krylov method (JFNK) is an advanced non-linear algorithm that allows solution of a coupled system of non-linear equations [1]. In [2] we have put forward a JFNK-based implicit, consistent, time integration algorithm and demonstrated its ability to efficiently step over electron time scales, while retaining electron kinetic effects on the ion time scale. Here we extend this work by investigating a JFNK-based implicit-moments approach for the purpose of consistent scale-bridging between the fluid description and kinetic description in order to resolve the transition region. Our preliminary results, based on a reformulated Poisson's equation (RPE) [3], allow solution to the Vlasov-Poisson system for varying grid resolutions. In the limit of local coarse grid size (grid spacing large compared to Debye length), the RPE represents an electric field based on the moment system, while in the limit of local grid spacing resolving the Debye length, the RPE represents an electric field based on the standard Poisson equation. The technique allows smooth transition between the two regimes, consistently, in one simulation. [1] D.A. Knoll and D.E. Keyes, J. Comput. Phys., vol. 193 (2004) [2] W.T. Taitano, Masters Thesis, Nuclear Engineering, University of Idaho (2010) [3] R. Belaouar, N. Crouseilles and P. Degond, J. Sci. Comput., vol. 41 (2009)
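
    The Jacobian-free Newton-Krylov idea can be illustrated with an off-the-shelf solver: Jacobian-vector products are approximated by finite differences of the residual, so no Jacobian matrix is ever assembled. The toy 1D residual below is a stand-in for, not a reproduction of, the reformulated Poisson system of the abstract; the choice of scipy.optimize.newton_krylov and all parameters are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    n = 64
    h = 1.0 / (n - 1)

    def residual(phi):
        """Residual of a toy 1D problem phi'' + exp(-phi) = 0 with phi = 0 at both ends."""
        r = np.zeros_like(phi)
        r[1:-1] = phi[:-2] - 2.0 * phi[1:-1] + phi[2:] + h * h * np.exp(-phi[1:-1])
        r[0], r[-1] = phi[0], phi[-1]
        return r

    # newton_krylov builds Jacobian-vector products from finite differences of `residual`,
    # so the Jacobian is never formed explicitly.
    phi = newton_krylov(residual, np.zeros(n), f_tol=1e-8)
    ```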

  18. Consistent satellite XCO2 retrievals from SCIAMACHY and GOSAT using the BESD algorithm

    DOE PAGES Beta

    Heymann, J.; Reuter, M.; Hilker, M.; Buchwitz, M.; Schneising, O.; Bovensmann, H.; Burrows, J. P.; Kuze, A.; Suto, H.; Deutscher, N. M.; et al

    2015-02-13

    Consistent and accurate long-term data sets of global atmospheric concentrations of carbon dioxide (CO2) are required for carbon cycle and climate related research. However, global data sets based on satellite observations may suffer from inconsistencies originating from the use of products derived from different satellites as needed to cover a long enough time period. One reason for inconsistencies can be the use of different retrieval algorithms. We address this potential issue by applying the same algorithm, the Bremen Optimal Estimation DOAS (BESD) algorithm, to different satellite instruments, SCIAMACHY on-board ENVISAT (March 2002–April 2012) and TANSO-FTS on-board GOSAT (launched in January 2009), to retrieve XCO2, the column-averaged dry-air mole fraction of CO2. BESD has been initially developed for SCIAMACHY XCO2 retrievals. Here, we present the first detailed assessment of the new GOSAT BESD XCO2 product. GOSAT BESD XCO2 is a product generated and delivered to the MACC project for assimilation into ECMWF's Integrated Forecasting System (IFS). We describe the modifications of the BESD algorithm needed in order to retrieve XCO2 from GOSAT and present detailed comparisons with ground-based observations of XCO2 from the Total Carbon Column Observing Network (TCCON). We discuss detailed comparison results between all three XCO2 data sets (SCIAMACHY, GOSAT and TCCON). The comparison results demonstrate the good consistency between the SCIAMACHY and the GOSAT XCO2. For example, we found a mean difference for daily averages of −0.60 ± 1.56 ppm (mean difference ± standard deviation) for GOSAT-SCIAMACHY (linear correlation coefficient r = 0.82), −0.34 ± 1.37 ppm (r = 0.86) for GOSAT-TCCON and 0.10 ± 1.79 ppm (r = 0.75) for SCIAMACHY-TCCON. The remaining differences between GOSAT and SCIAMACHY are likely due to non-perfect collocation (±2 h, 10° × 10° around TCCON sites), i.e., the observed air masses are not exactly identical, but likely also
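
    The collocation statistics quoted above (mean difference, its standard deviation, and the linear correlation coefficient) can be reproduced for any pair of collocated series with a few lines; the sketch below is generic and the array names are placeholders.

    ```python
    import numpy as np

    def collocation_stats(xco2_a, xco2_b):
        """Mean difference, its standard deviation, and Pearson r for two collocated series."""
        a, b = np.asarray(xco2_a, dtype=float), np.asarray(xco2_b, dtype=float)
        diff = a - b
        r = np.corrcoef(a, b)[0, 1]
        return diff.mean(), diff.std(), r
    ```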

  19. Thermodynamically Consistent Physical Formulation and an Efficient Numerical Algorithm for Incompressible N-Phase Flows

    NASA Astrophysics Data System (ADS)

    Dong, Suchuan

    2015-11-01

    This talk focuses on simulating the motion of a mixture of N (N>=2) immiscible incompressible fluids with given densities, dynamic viscosities and pairwise surface tensions. We present an N-phase formulation within the phase field framework that is thermodynamically consistent, in the sense that the formulation satisfies the conservations of mass/momentum, the second law of thermodynamics and Galilean invariance. We also present an efficient algorithm for numerically simulating the N-phase system. The algorithm has overcome the issues caused by the variable coefficient matrices associated with the variable mixture density/viscosity and the couplings among the (N-1) phase field variables and the flow variables. We compare simulation results with the Langmuir-de Gennes theory to demonstrate that the presented method produces physically accurate results for multiple fluid phases. Numerical experiments will be presented for several problems involving multiple fluid phases, large density contrasts and large viscosity contrasts to demonstrate the capabilities of the method for studying the interactions among multiple types of fluid interfaces. Support from NSF and ONR is gratefully acknowledged.

  20. Consistent satellite XCO2 retrievals from SCIAMACHY and GOSAT using the BESD algorithm

    SciTech Connect

    Heymann, J.; Reuter, M.; Hilker, M.; Buchwitz, M.; Schneising, O.; Bovensmann, H.; Burrows, J. P.; Kuze, A.; Suto, H.; Deutscher, N. M.; Dubey, M. K.; Griffith, D. W. T.; Hase, F.; Kawakami, S.; Kivi, R.; Morino, I.; Petri, C.; Roehl, C.; Schneider, M.; Sherlock, V.; Sussmann, R.; Velazco, V. A.; Warneke, T.; Wunch, D.

    2015-02-13

    Consistent and accurate long-term data sets of global atmospheric concentrations of carbon dioxide (CO2) are required for carbon cycle and climate related research. However, global data sets based on satellite observations may suffer from inconsistencies originating from the use of products derived from different satellites as needed to cover a long enough time period. One reason for inconsistencies can be the use of different retrieval algorithms. We address this potential issue by applying the same algorithm, the Bremen Optimal Estimation DOAS (BESD) algorithm, to different satellite instruments, SCIAMACHY on-board ENVISAT (March 2002–April 2012) and TANSO-FTS on-board GOSAT (launched in January 2009), to retrieve XCO2, the column-averaged dry-air mole fraction of CO2. BESD has been initially developed for SCIAMACHY XCO2 retrievals. Here, we present the first detailed assessment of the new GOSAT BESD XCO2 product. GOSAT BESD XCO2 is a product generated and delivered to the MACC project for assimilation into ECMWF's Integrated Forecasting System (IFS). We describe the modifications of the BESD algorithm needed in order to retrieve XCO2 from GOSAT and present detailed comparisons with ground-based observations of XCO2 from the Total Carbon Column Observing Network (TCCON). We discuss detailed comparison results between all three XCO2 data sets (SCIAMACHY, GOSAT and TCCON). The comparison results demonstrate the good consistency between the SCIAMACHY and the GOSAT XCO2. For example, we found a mean difference for daily averages of −0.60 ± 1.56 ppm (mean difference ± standard deviation) for GOSAT-SCIAMACHY (linear correlation coefficient r = 0.82), −0.34 ± 1.37 ppm (r = 0.86) for GOSAT-TCCON and 0.10 ± 1.79 ppm (r = 0.75) for SCIAMACHY-TCCON. The remaining differences between GOSAT and SCIAMACHY are likely due to non

  1. A Self Consistent Multiprocessor Space Charge Algorithm that is Almost Embarrassingly Parallel

    SciTech Connect

    Edward Nissen, B. Erdelyi, S.L. Manikonda

    2012-07-01

    We present a space charge code that is self-consistent, massively parallelizable, and requires very little communication between computer nodes, making the calculation almost embarrassingly parallel. This method is implemented in the code COSY Infinity, where the differential algebras used in this code are important to the algorithm's proper functioning. The method works by calculating the self-consistent space charge distribution using the statistical moments of the test particles, and converting them into polynomial series coefficients. These coefficients are combined with differential algebraic integrals to form the potential and electric fields. The result is a map which contains the effects of space charge. This method allows for massive parallelization since its statistics-based solver doesn't require any binning of particles, and only requires a vector containing the partial sums of the statistical moments for the different nodes to be passed. All other calculations are done independently. The resulting maps can be used to analyze the system using normal form analysis, as well as advance particles in numbers and at speeds that were previously impossible.
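
    The communication pattern described above, where only a small vector of partial sums of statistical moments is exchanged between nodes, can be sketched as follows; the raw-moment bookkeeping and the two-node example are illustrative assumptions, not the COSY Infinity implementation.

    ```python
    import numpy as np

    def partial_moment_sums(x, max_order=4):
        """Raw-moment partial sums [N, sum x, sum x^2, ...] of one node's test particles."""
        x = np.asarray(x, dtype=float)
        return np.array([np.sum(x ** k) for k in range(max_order + 1)])

    def combine_nodes(partials):
        """Combining nodes is just elementwise addition of their partial-sum vectors."""
        total = np.sum(partials, axis=0)
        n, mean = total[0], total[1] / total[0]
        variance = total[2] / total[0] - mean ** 2
        return n, mean, variance

    # Usage: moments of a particle population split across two "nodes".
    rng = np.random.default_rng(0)
    node_a, node_b = rng.normal(size=1000), rng.normal(size=2000)
    print(combine_nodes([partial_moment_sums(node_a), partial_moment_sums(node_b)]))
    ```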

  2. A Sparse Self-Consistent Field Algorithm and Its Parallel Implementation: Application to Density-Functional-Based Tight Binding.

    PubMed

    Scemama, Anthony; Renon, Nicolas; Rapacioli, Mathias

    2014-06-10

    We present an algorithm and its parallel implementation for solving a self-consistent problem as encountered in Hartree-Fock or density functional theory. The algorithm takes advantage of the sparsity of matrices through the use of local molecular orbitals. The implementation allows one to exploit efficiently modern symmetric multiprocessing (SMP) computer architectures. As a first application, the algorithm is used within the density-functional-based tight binding method, for which most of the computational time is spent in the linear algebra routines (diagonalization of the Fock/Kohn-Sham matrix). We show that with this algorithm (i) single point calculations on very large systems (millions of atoms) can be performed on large SMP machines, (ii) calculations involving intermediate size systems (1000-100 000 atoms) are also strongly accelerated and can run efficiently on standard servers, and (iii) the error on the total energy due to the use of a cutoff in the molecular orbital coefficients can be controlled such that it remains smaller than the SCF convergence criterion. PMID:26580754

  3. Stochastic algorithm for size-extensive vibrational self-consistent field methods on fully anharmonic potential energy surfaces.

    PubMed

    Hermes, Matthew R; Hirata, So

    2014-12-28

    A stochastic algorithm based on Metropolis Monte Carlo (MC) is presented for the size-extensive vibrational self-consistent field methods (XVSCF(n) and XVSCF[n]) for anharmonic molecular vibrations. The new MC-XVSCF methods substitute stochastic evaluations of a small number of high-dimensional integrals of functions of the potential energy surface (PES), which is sampled on demand, for diagrammatic equations involving high-order anharmonic force constants. This algorithm obviates the need to evaluate and store any high-dimensional partial derivatives of the potential and can be applied to the fully anharmonic PES without any Taylor-series approximation in an intrinsically parallelizable algorithm. The MC-XVSCF methods reproduce deterministic XVSCF calculations on the same Taylor-series PES in all energies, frequencies, and geometries. Calculations using the fully anharmonic PES evaluated on the fly with electronic structure methods report anharmonic effects on frequencies and geometries of much greater magnitude than deterministic XVSCF calculations, reflecting an underestimation of anharmonic effects in a Taylor-series approximation to the PES. PMID:25554137

  4. Stochastic algorithm for size-extensive vibrational self-consistent field methods on fully anharmonic potential energy surfaces

    SciTech Connect

    Hermes, Matthew R.; Hirata, So

    2014-12-28

    A stochastic algorithm based on Metropolis Monte Carlo (MC) is presented for the size-extensive vibrational self-consistent field methods (XVSCF(n) and XVSCF[n]) for anharmonic molecular vibrations. The new MC-XVSCF methods substitute stochastic evaluations of a small number of high-dimensional integrals of functions of the potential energy surface (PES), which is sampled on demand, for diagrammatic equations involving high-order anharmonic force constants. This algorithm obviates the need to evaluate and store any high-dimensional partial derivatives of the potential and can be applied to the fully anharmonic PES without any Taylor-series approximation in an intrinsically parallelizable algorithm. The MC-XVSCF methods reproduce deterministic XVSCF calculations on the same Taylor-series PES in all energies, frequencies, and geometries. Calculations using the fully anharmonic PES evaluated on the fly with electronic structure methods report anharmonic effects on frequencies and geometries of much greater magnitude than deterministic XVSCF calculations, reflecting an underestimation of anharmonic effects in a Taylor-series approximation to the PES.

  5. Personalized recommendation based on unbiased consistence

    NASA Astrophysics Data System (ADS)

    Zhu, Xuzhen; Tian, Hui; Zhang, Ping; Hu, Zheng; Zhou, Tao

    2015-08-01

    Recently, in physical dynamics, mass-diffusion-based recommendation algorithms on bipartite network provide an efficient solution by automatically pushing possible relevant items to users according to their past preferences. However, traditional mass-diffusion-based algorithms just focus on unidirectional mass diffusion from objects having been collected to those which should be recommended, resulting in a biased causal similarity estimation and not-so-good performance. In this letter, we argue that in many cases, a user's interests are stable, and thus bidirectional mass diffusion abilities, no matter originated from objects having been collected or from those which should be recommended, should be consistently powerful, showing unbiased consistence. We further propose a consistence-based mass diffusion algorithm via bidirectional diffusion against biased causality, outperforming the state-of-the-art recommendation algorithms in disparate real data sets, including Netflix, MovieLens, Amazon and Rate Your Music.
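
    For context, the baseline that the letter modifies, unidirectional mass diffusion (ProbS) on a user-item bipartite network, can be sketched as below; the bidirectional, consistence-based variant proposed in the letter is not reproduced here, and the 0/1 matrix layout is an assumption.

    ```python
    import numpy as np

    def mass_diffusion_scores(A, user):
        """Unidirectional mass diffusion (ProbS) on a 0/1 user-by-item matrix A."""
        item_deg = np.maximum(A.sum(axis=0), 1)
        user_deg = np.maximum(A.sum(axis=1), 1)
        # Step 1: each item the target user collected spreads its unit resource evenly to its users.
        resource_on_users = A @ (A[user] / item_deg)
        # Step 2: each user spreads the received resource evenly back to their items.
        scores = A.T @ (resource_on_users / user_deg)
        scores[A[user] == 1] = -np.inf           # do not re-recommend already-collected items
        return scores

    # Usage with a toy 3-user x 4-item history matrix:
    A = np.array([[1, 1, 0, 0],
                  [0, 1, 1, 0],
                  [1, 0, 1, 1]])
    print(mass_diffusion_scores(A, user=0))
    ```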

  6. Why Do Chinese-Australian Students Outperform Their Australian Peers in Mathematics: A Comparative Case Study

    ERIC Educational Resources Information Center

    Zhao, Dacheng; Singh, Michael

    2011-01-01

    International comparative studies and cross-cultural studies of mathematics achievement indicate that Chinese students (whether living in or outside China) consistently outperform their Western counterparts. This study shows that the gap between Chinese-Australian and other Australian students is best explained by differences in motivation to…

  7. Towards a long-term global aerosol optical depth record: applying a consistent aerosol retrieval algorithm to MODIS and VIIRS-observed reflectance

    NASA Astrophysics Data System (ADS)

    Levy, R. C.; Munchak, L. A.; Mattoo, S.; Patadia, F.; Remer, L. A.; Holz, R. E.

    2015-07-01

    To answer fundamental questions about aerosols in our changing climate, we must quantify both the current state of aerosols and how they are changing. Although NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) sensors have provided quantitative information about global aerosol optical depth (AOD) for more than a decade, this period is still too short to create an aerosol climate data record (CDR). The Visible Infrared Imaging Radiometer Suite (VIIRS) was launched on the Suomi-NPP satellite in late 2011, with additional copies planned for future satellites. Can the MODIS aerosol data record be continued with VIIRS to create a consistent CDR? When compared to ground-based AERONET data, the VIIRS Environmental Data Record (V_EDR) has similar validation statistics as the MODIS Collection 6 (M_C6) product. However, the V_EDR and M_C6 are offset in regards to global AOD magnitudes, and tend to provide different maps of 0.55 μm AOD and 0.55/0.86 μm-based Ångström Exponent (AE). One reason is that the retrieval algorithms are different. Using the Intermediate File Format (IFF) for both MODIS and VIIRS data, we have tested whether we can apply a single MODIS-like (ML) dark-target algorithm on both sensors that leads to product convergence. Except for catering the radiative transfer and aerosol lookup tables to each sensor's specific wavelength bands, the ML algorithm is the same for both. We run the ML algorithm on both sensors between March 2012 and May 2014, and compare monthly mean AOD time series with each other and with M_C6 and V_EDR products. Focusing on the March-April-May (MAM) 2013 period, we compared additional statistics that include global and gridded 1° × 1° AOD and AE, histograms, sampling frequencies, and collocations with ground-based AERONET. Over land, use of the ML algorithm clearly reduces the differences between the MODIS and VIIRS-based AOD. However, although global offsets are near zero, some regional biases remain, especially in

  8. Towards a long-term global aerosol optical depth record: applying a consistent aerosol retrieval algorithm to MODIS and VIIRS-observed reflectance

    NASA Astrophysics Data System (ADS)

    Levy, R. C.; Munchak, L. A.; Mattoo, S.; Patadia, F.; Remer, L. A.; Holz, R. E.

    2015-10-01

    To answer fundamental questions about aerosols in our changing climate, we must quantify both the current state of aerosols and how they are changing. Although NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) sensors have provided quantitative information about global aerosol optical depth (AOD) for more than a decade, this period is still too short to create an aerosol climate data record (CDR). The Visible Infrared Imaging Radiometer Suite (VIIRS) was launched on the Suomi-NPP satellite in late 2011, with additional copies planned for future satellites. Can the MODIS aerosol data record be continued with VIIRS to create a consistent CDR? When compared to ground-based AERONET data, the VIIRS Environmental Data Record (V_EDR) has similar validation statistics as the MODIS Collection 6 (M_C6) product. However, the V_EDR and M_C6 are offset in regards to global AOD magnitudes, and tend to provide different maps of 0.55 μm AOD and 0.55/0.86 μm-based Ångström Exponent (AE). One reason is that the retrieval algorithms are different. Using the Intermediate File Format (IFF) for both MODIS and VIIRS data, we have tested whether we can apply a single MODIS-like (ML) dark-target algorithm on both sensors that leads to product convergence. Except for catering the radiative transfer and aerosol lookup tables to each sensor's specific wavelength bands, the ML algorithm is the same for both. We run the ML algorithm on both sensors between March 2012 and May 2014, and compare monthly mean AOD time series with each other and with M_C6 and V_EDR products. Focusing on the March-April-May (MAM) 2013 period, we compared additional statistics that include global and gridded 1° × 1° AOD and AE, histograms, sampling frequencies, and collocations with ground-based AERONET. Over land, use of the ML algorithm clearly reduces the differences between the MODIS and VIIRS-based AOD. However, although global offsets are near zero, some regional biases remain, especially in

  9. Implicit and explicit schemes for mass consistency preservation in hybrid particle/finite-volume algorithms for turbulent reactive flows

    SciTech Connect

    Popov, Pavel P. Pope, Stephen B.

    2014-01-15

    This work addresses the issue of particle mass consistency in Large Eddy Simulation/Probability Density Function (LES/PDF) methods for turbulent reactive flows. Numerical schemes for the implicit and explicit enforcement of particle mass consistency (PMC) are introduced, and their performance is examined in a representative LES/PDF application, namely the Sandia–Sydney Bluff-Body flame HM1. A new combination of interpolation schemes for velocity and scalar fields is found to better satisfy PMC than multilinear and fourth-order Lagrangian interpolation. A second-order accurate time-stepping scheme for stochastic differential equations (SDE) is found to improve PMC relative to Euler time stepping, which is the first time that a second-order scheme is found to be beneficial, when compared to a first-order scheme, in an LES/PDF application. An explicit corrective velocity scheme for PMC enforcement is introduced, and its parameters optimized to enforce a specified PMC criterion with minimal corrective velocity magnitudes.

  10. The Chopthin Algorithm for Resampling

    NASA Astrophysics Data System (ADS)

    Gandy, Axel; Lau, F. Din-Houn

    2016-08-01

    Resampling is a standard step in particle filters and more generally sequential Monte Carlo methods. We present an algorithm, called chopthin, for resampling weighted particles. In contrast to standard resampling methods the algorithm does not produce a set of equally weighted particles; instead it merely enforces an upper bound on the ratio between the weights. Simulation studies show that the chopthin algorithm consistently outperforms standard resampling methods. The algorithm chops up particles with large weight and thins out particles with low weight, hence its name. It implicitly guarantees a lower bound on the effective sample size. The algorithm can be implemented efficiently, making it practically useful. We show that the expected computational effort is linear in the number of particles. Implementations for C++, R (on CRAN), Python and Matlab are available.
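
    A simplified illustration of the chop/thin principle is given below: heavy particles are split into equal-weight copies and light particles are kept with probability proportional to weight, which bounds the weight ratio while remaining unbiased in expectation. This is a sketch of the idea, not the exact chopthin algorithm of Gandy and Lau; the threshold parameter eta is an assumption.

    ```python
    import numpy as np

    def chop_and_thin(particles, weights, eta=4.0, seed=0):
        """Split heavy particles ("chop"), keep light ones with probability ~ weight ("thin")."""
        rng = np.random.default_rng(seed)
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        cap = eta * w.mean()                       # weight level above which particles are split
        out_p, out_w = [], []
        for x, wi in zip(particles, w):
            if wi > cap:                           # chop: split into equal-weight copies
                k = int(np.ceil(wi / cap))
                out_p.extend([x] * k)
                out_w.extend([wi / k] * k)
            elif rng.random() < wi / cap:          # thin: unbiased keep-or-drop
                out_p.append(x)
                out_w.append(cap)
        out_w = np.asarray(out_w)
        return np.asarray(out_p), out_w / out_w.sum()
    ```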

  11. Description of nuclear systems with a self-consistent configuration-mixing approach: Theory, algorithm, and application to the 12C test nucleus

    NASA Astrophysics Data System (ADS)

    Robin, C.; Pillet, N.; Peña Arteaga, D.; Berger, J.-F.

    2016-02-01

    Background: Although self-consistent multiconfiguration methods have been used for decades to address the description of atomic and molecular many-body systems, only a few trials have been made in the context of nuclear structure. Purpose: This work aims at the development of such an approach to describe in a unified way various types of correlations in nuclei in a self-consistent manner where the mean-field is improved as correlations are introduced. The goal is to reconcile the usually set-apart shell-model and self-consistent mean-field methods. Method: This approach is referred to as "variational multiparticle-multihole configuration mixing method." It is based on a double variational principle which yields a set of two coupled equations that determine at the same time the expansion coefficients of the many-body wave function and the single-particle states. The solution of this problem is obtained by building a doubly iterative numerical algorithm. Results: The formalism is derived and discussed in a general context, starting from a three-body Hamiltonian. Links to existing many-body techniques such as the formalism of Green's functions are established. First applications are done using the two-body D1S Gogny effective force. The numerical procedure is tested on the 12C nucleus to study the convergence features of the algorithm in different contexts. Ground-state properties as well as single-particle quantities are analyzed, and the description of the first 2+ state is examined. Conclusions: The self-consistent multiparticle-multihole configuration mixing method is fully applied for the first time to the description of a test nucleus. This study makes it possible to validate our numerical algorithm and leads to encouraging results. To test the method further, we will realize in the second article of this series a systematic description of more nuclei and observables obtained by applying the newly developed numerical procedure with the same Gogny force. As

  12. Modified OMP Algorithm for Exponentially Decaying Signals

    PubMed Central

    Kazimierczuk, Krzysztof; Kasprzak, Paweł

    2015-01-01

    A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition of the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of the strictly sparse representation of a signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that the NMR spectrum consists of Lorentzian peaks and matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider certain modification of the algorithm by introducing the allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
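
    The peak-by-peak greedy idea can be sketched as follows: in each iteration a single Lorentzian is fitted to the residual spectrum and subtracted. This is a matching-pursuit-style simplification (amplitudes are not jointly re-fitted as in OMP proper), and the grid of candidate centres and widths is an illustrative assumption, not the authors' LPMP implementation.

    ```python
    import numpy as np

    def lorentzian(x, x0, gamma):
        """Unit-height Lorentzian line shape centred at x0 with half-width gamma."""
        return gamma ** 2 / ((x - x0) ** 2 + gamma ** 2)

    def lpmp_like(x, spectrum, n_peaks, centers, widths):
        """Greedily fit and subtract one Lorentzian peak per iteration."""
        residual = np.asarray(spectrum, dtype=float).copy()
        peaks = []
        for _ in range(n_peaks):
            best, best_score = None, -np.inf
            for x0 in centers:
                for g in widths:
                    atom = lorentzian(x, x0, g)
                    score = (residual @ atom) ** 2 / (atom @ atom)   # drop in squared error
                    if score > best_score:
                        amp = (residual @ atom) / (atom @ atom)      # least-squares amplitude
                        best, best_score = (x0, g, amp), score
            x0, g, amp = best
            residual -= amp * lorentzian(x, x0, g)
            peaks.append(best)
        return peaks, residual
    ```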

  13. Extortion can outperform generosity in the iterated prisoner's dilemma.

    PubMed

    Wang, Zhijian; Zhou, Yanran; Lien, Jaimie W; Zheng, Jie; Xu, Bin

    2016-01-01

    Zero-determinant (ZD) strategies, as discovered by Press and Dyson, can enforce a linear relationship between a pair of players' scores in the iterated prisoner's dilemma. Particularly, the extortionate ZD strategies can enforce and exploit cooperation, providing a player with a score advantage, and consequently higher scores than those from either mutual cooperation or generous ZD strategies. In laboratory experiments in which human subjects were paired with computer co-players, we demonstrate that both the generous and the extortionate ZD strategies indeed enforce a unilateral control of the reward. When the experimental setting is sufficiently long and the computerized nature of the opponent is known to human subjects, the extortionate strategy outperforms the generous strategy. Human subjects' cooperation rates when playing against extortionate and generous ZD strategies are similar after learning has occurred. More than half of extortionate strategists finally obtain an average score higher than that from mutual cooperation. PMID:27067513

  14. Dodecylresorufin (C12R) Outperforms Resorufin in Microdroplet Bacterial Assays.

    PubMed

    Scheler, Ott; Kaminski, Tomasz S; Ruszczak, Artur; Garstecki, Piotr

    2016-05-11

    This paper proves that dodecylresorufin (C12R) outperforms resorufin (the conventional form of this dye) in droplet microfluidic bacterial assays. Resorufin is a marker dye that is widely used in different fields of microbiology and has increasingly been applied in droplet microfluidic assays and experiments. The main concern associated with resorufin in droplet-based systems is dye leakage into the oil phase and neighboring droplets. The leakage decreases the performance of assays because it causes averaging of the signal between the positive (bacteria-containing) and negative (empty) droplets. Here we show that C12R is a promising alternative to conventional resorufin because it maintains higher sensitivity, specificity, and signal-to-noise ratio over time. These characteristics make C12R a suitable reagent for droplet digital assays and for monitoring of microbial growth in droplets. PMID:27100211

  15. Better than Nature: Nicotinamide Biomimetics That Outperform Natural Coenzymes.

    PubMed

    Knaus, Tanja; Paul, Caroline E; Levy, Colin W; de Vries, Simon; Mutti, Francesco G; Hollmann, Frank; Scrutton, Nigel S

    2016-01-27

    The search for affordable, green biocatalytic processes is a challenge for chemicals manufacture. Redox biotransformations are potentially attractive, but they rely on unstable and expensive nicotinamide coenzymes that have prevented their widespread exploitation. Stoichiometric use of natural coenzymes is not viable economically, and the instability of these molecules hinders catalytic processes that employ coenzyme recycling. Here, we investigate the efficiency of man-made synthetic biomimetics of the natural coenzymes NAD(P)H in redox biocatalysis. Extensive studies with a range of oxidoreductases belonging to the "ene" reductase family show that these biomimetics are excellent analogues of the natural coenzymes, revealed also in crystal structures of the ene reductase XenA with selected biomimetics. In selected cases, these biomimetics outperform the natural coenzymes. "Better-than-Nature" biomimetics should find widespread application in fine and specialty chemicals production by harnessing the power of high stereo-, regio-, and chemoselective redox biocatalysts and enabling reactions under mild conditions at low cost. PMID:26727612

  16. Extortion can outperform generosity in the iterated prisoner's dilemma

    PubMed Central

    Wang, Zhijian; Zhou, Yanran; Lien, Jaimie W.; Zheng, Jie; Xu, Bin

    2016-01-01

    Zero-determinant (ZD) strategies, as discovered by Press and Dyson, can enforce a linear relationship between a pair of players' scores in the iterated prisoner's dilemma. Particularly, the extortionate ZD strategies can enforce and exploit cooperation, providing a player with a score advantage, and consequently higher scores than those from either mutual cooperation or generous ZD strategies. In laboratory experiments in which human subjects were paired with computer co-players, we demonstrate that both the generous and the extortionate ZD strategies indeed enforce a unilateral control of the reward. When the experimental setting is sufficiently long and the computerized nature of the opponent is known to human subjects, the extortionate strategy outperforms the generous strategy. Human subjects' cooperation rates when playing against extortionate and generous ZD strategies are similar after learning has occurred. More than half of extortionate strategists finally obtain an average score higher than that from mutual cooperation. PMID:27067513

  17. Automated facial coding software outperforms people in recognizing neutral faces as neutral from standardized datasets

    PubMed Central

    Lewinski, Peter

    2015-01-01

    Little is known about people’s accuracy of recognizing neutral faces as neutral. In this paper, I demonstrate the importance of knowing how well people recognize neutral faces. I contrasted human recognition scores of 100 typical, neutral front-up facial images with scores of an arguably objective judge – automated facial coding (AFC) software. I hypothesized that the software would outperform humans in recognizing neutral faces because of the inherently objective nature of computer algorithms. Results confirmed this hypothesis. I provided the first-ever evidence that computer software (90%) was more accurate in recognizing neutral faces than people were (59%). I posited two theoretical mechanisms, i.e., smile-as-a-baseline and false recognition of emotion, as possible explanations for my findings. PMID:26441761

  18. Smiling on the Inside: The Social Benefits of Suppressing Positive Emotions in Outperformance Situations.

    PubMed

    Schall, Marina; Martiny, Sarah E; Goetz, Thomas; Hall, Nathan C

    2016-05-01

    Although expressing positive emotions is typically socially rewarded, in the present work, we predicted that people suppress positive emotions and thereby experience social benefits when outperformed others are present. We tested our predictions in three experimental studies with high school students. In Studies 1 and 2, we manipulated the type of social situation (outperformance vs. non-outperformance) and assessed suppression of positive emotions. In both studies, individuals reported suppressing positive emotions more in outperformance situations than in non-outperformance situations. In Study 3, we manipulated the social situation (outperformance vs. non-outperformance) as well as the videotaped person's expression of positive emotions (suppression vs. expression). The findings showed that when outperforming others, individuals were indeed evaluated more positively when they suppressed rather than expressed their positive emotions, and demonstrate the importance of the specific social situation with respect to the effects of suppression. PMID:27029576

  19. Better than Nature: Nicotinamide Biomimetics That Outperform Natural Coenzymes

    PubMed Central

    2016-01-01

    The search for affordable, green biocatalytic processes is a challenge for chemicals manufacture. Redox biotransformations are potentially attractive, but they rely on unstable and expensive nicotinamide coenzymes that have prevented their widespread exploitation. Stoichiometric use of natural coenzymes is not viable economically, and the instability of these molecules hinders catalytic processes that employ coenzyme recycling. Here, we investigate the efficiency of man-made synthetic biomimetics of the natural coenzymes NAD(P)H in redox biocatalysis. Extensive studies with a range of oxidoreductases belonging to the “ene” reductase family show that these biomimetics are excellent analogues of the natural coenzymes, revealed also in crystal structures of the ene reductase XenA with selected biomimetics. In selected cases, these biomimetics outperform the natural coenzymes. “Better-than-Nature” biomimetics should find widespread application in fine and specialty chemicals production by harnessing the power of high stereo-, regio-, and chemoselective redox biocatalysts and enabling reactions under mild conditions at low cost. PMID:26727612

  20. Adult vultures outperform juveniles in challenging thermal soaring conditions

    PubMed Central

    Harel, Roi; Horvitz, Nir; Nathan, Ran

    2016-01-01

    Due to the potentially detrimental consequences of low performance in basic functional tasks, individuals are expected to improve performance with age and show the most marked changes during early stages of life. Soaring-gliding birds use rising-air columns (thermals) to reduce energy expenditure allocated to flight. We offer a framework to evaluate thermal soaring performance, and use GPS-tracking to study movements of Eurasian griffon vultures (Gyps fulvus). Because the location and intensity of thermals are variable, we hypothesized that soaring performance would improve with experience and predicted that the performance of inexperienced individuals (<2 months) would be inferior to that of experienced ones (>5 years). No differences were found in body characteristics, climb rates under low wind shear, and thermal selection, presumably due to vultures’ tendency to forage in mixed-age groups. Adults, however, outperformed juveniles in their ability to adjust fine-scale movements under challenging conditions, as juveniles had lower climb rates under intermediate wind shear, particularly on the lee-side of thermal columns. Juveniles were also less efficient along the route both in terms of time and energy. The consequences of these handicaps are probably exacerbated if juveniles lag behind adults in finding and approaching food. PMID:27291590

  1. Adult vultures outperform juveniles in challenging thermal soaring conditions.

    PubMed

    Harel, Roi; Horvitz, Nir; Nathan, Ran

    2016-01-01

    Due to the potentially detrimental consequences of low performance in basic functional tasks, individuals are expected to improve performance with age and show the most marked changes during early stages of life. Soaring-gliding birds use rising-air columns (thermals) to reduce energy expenditure allocated to flight. We offer a framework to evaluate thermal soaring performance, and use GPS-tracking to study movements of Eurasian griffon vultures (Gyps fulvus). Because the location and intensity of thermals are variable, we hypothesized that soaring performance would improve with experience and predicted that the performance of inexperienced individuals (<2 months) would be inferior to that of experienced ones (>5 years). No differences were found in body characteristics, climb rates under low wind shear, and thermal selection, presumably due to vultures' tendency to forage in mixed-age groups. Adults, however, outperformed juveniles in their ability to adjust fine-scale movements under challenging conditions, as juveniles had lower climb rates under intermediate wind shear, particularly on the lee-side of thermal columns. Juveniles were also less efficient along the route both in terms of time and energy. The consequences of these handicaps are probably exacerbated if juveniles lag behind adults in finding and approaching food. PMID:27291590

  2. Digital image analysis outperforms manual biomarker assessment in breast cancer.

    PubMed

    Stålhammar, Gustav; Fuentes Martinez, Nelson; Lippert, Michael; Tobin, Nicholas P; Mølholm, Ida; Kis, Lorand; Rosin, Gustaf; Rantalainen, Mattias; Pedersen, Lars; Bergh, Jonas; Grunkin, Michael; Hartman, Johan

    2016-04-01

    In the spectrum of breast cancers, categorization according to the four gene expression-based subtypes 'Luminal A,' 'Luminal B,' 'HER2-enriched,' and 'Basal-like' is the method of choice for prognostic and predictive value. As gene expression assays are not yet universally available, routine immunohistochemical stains act as surrogate markers for these subtypes. Thus, congruence of surrogate markers and gene expression tests is of utmost importance. In this study, 3 cohorts of primary breast cancer specimens (total n=436) with up to 28 years of survival data were scored for Ki67, ER, PR, and HER2 status manually and by digital image analysis (DIA). The results were then compared for sensitivity and specificity for the Luminal B subtype, concordance to PAM50 assays in subtype classification and prognostic power. The DIA system used was the Visiopharm Integrator System. DIA outperformed manual scoring in terms of sensitivity and specificity for the Luminal B subtype, widely considered the most challenging distinction in surrogate subclassification, and produced slightly better concordance and Cohen's κ agreement with PAM50 gene expression assays. Manual biomarker scores and DIA essentially matched each other for Cox regression hazard ratios for all-cause mortality. When the Nottingham combined histologic grade (Elston-Ellis) was used as a prognostic surrogate, stronger Spearman's rank-order correlations were produced by DIA. Prognostic value of Ki67 scores in terms of likelihood ratio χ² (LR χ²) was higher for DIA, which also added significantly more prognostic information to the manual scores (LR Δχ²). In conclusion, the system for DIA evaluated here was in most aspects a superior alternative to manual biomarker scoring. It also has the potential to reduce time consumption for pathologists, as many of the steps in the workflow are either automatic or feasible to manage without pathological expertise. PMID:26916072

  3. Joint optimization of algorithmic suites for EEG analysis.

    PubMed

    Santana, Eder; Brockmeier, Austin J; Principe, Jose C

    2014-01-01

    Electroencephalogram (EEG) data analysis algorithms consist of multiple processing steps each with a number of free parameters. A joint optimization methodology can be used as a wrapper to fine-tune these parameters for the patient or application. This approach is inspired by deep learning neural network models, but differs because the processing layers for EEG are heterogeneous with different approaches used for processing space and time. Nonetheless, we treat the processing stages as a neural network and apply backpropagation to jointly optimize the parameters. This approach outperforms previous results on the BCI Competition II - dataset IV; additionally, it outperforms the common spatial patterns (CSP) algorithm on the BCI Competition III dataset IV. In addition, the optimized parameters in the architecture are still interpretable. PMID:25570621
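
    A minimal sketch (not the authors' code) of the idea in this abstract: treat a heterogeneous EEG pipeline (spatial filter, temporal filter, log-variance features, classifier) as one differentiable network and jointly optimize all stages with backpropagation. It assumes PyTorch; the layer shapes, the log-variance feature, and all hyperparameters are illustrative assumptions.

```python
# Sketch of joint optimization of an EEG processing pipeline via backpropagation.
import torch
import torch.nn as nn

class JointEEGPipeline(nn.Module):
    def __init__(self, n_channels=22, n_spatial=4, kernel=25, n_classes=2):
        super().__init__()
        self.spatial = nn.Linear(n_channels, n_spatial, bias=False)   # CSP-like spatial filters
        self.temporal = nn.Conv1d(n_spatial, n_spatial, kernel,
                                  groups=n_spatial, bias=False)       # per-component temporal FIR
        self.classifier = nn.Linear(n_spatial, n_classes)

    def forward(self, x):                     # x: (batch, channels, time)
        x = self.spatial(x.transpose(1, 2)).transpose(1, 2)           # mix channels
        x = self.temporal(x)                                          # filter each component in time
        feats = torch.log(x.var(dim=2) + 1e-6)                        # log-variance features
        return self.classifier(feats)

model = JointEEGPipeline()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One joint update on a synthetic batch; every processing stage receives gradients together.
x = torch.randn(8, 22, 250)                  # 8 trials, 22 channels, 250 samples
y = torch.randint(0, 2, (8,))
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```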

  4. Lazy arc consistency

    SciTech Connect

    Schiex, T.; Gaspin, C.; Regin, J.C.; Verfaillie, G.

    1996-12-31

    Arc consistency filtering is widely used in the framework of binary constraint satisfaction problems: with a low complexity, inconsistency may be detected and domains are filtered. In this paper, we show that when detecting inconsistency is the objective, a systematic domain filtering is useless and a lazy approach is more adequate. Whereas usual arc consistency algorithms produce the maximum arc consistent sub-domain, when it exists, we propose a method, called LACτ, which only looks for any arc consistent sub-domain. The algorithm is then extended to provide the additional service of locating one variable with a minimum domain cardinality in the maximum arc consistent sub-domain, without necessarily computing all domain sizes. Finally, we compare traditional AC enforcing and lazy AC enforcing using several benchmark problems, both randomly generated CSPs and real-life problems.
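
    For readers unfamiliar with arc consistency filtering, the following is a standard AC-3-style sketch that computes the maximum arc-consistent sub-domain; it is the conventional baseline that the paper's lazy LACτ method relaxes, not an implementation of LACτ itself. The CSP encoding (domains as sets, constraints as sets of allowed pairs) is an assumption made for illustration.

```python
# AC-3-style arc consistency filtering for a binary CSP.
from collections import deque

def revise(domains, constraints, xi, xj):
    """Remove values of xi that have no support in xj; return True if anything was pruned."""
    allowed = constraints[(xi, xj)]
    pruned = False
    for a in list(domains[xi]):
        if not any((a, b) in allowed for b in domains[xj]):
            domains[xi].discard(a)
            pruned = True
    return pruned

def ac3(domains, constraints):
    """domains: {var: set(values)}, constraints: {(xi, xj): set of allowed (vi, vj) pairs}."""
    queue = deque(constraints.keys())
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, constraints, xi, xj):
            if not domains[xi]:
                return False                     # inconsistency detected
            queue.extend((xk, xi) for (xk, xm) in constraints if xm == xi and xk != xj)
    return True                                  # maximum arc-consistent sub-domain reached

# Tiny example: the constraint X < Y over {1, 2, 3} prunes 3 from X and 1 from Y.
doms = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
cons = {("X", "Y"): {(a, b) for a in range(1, 4) for b in range(1, 4) if a < b},
        ("Y", "X"): {(b, a) for a in range(1, 4) for b in range(1, 4) if a < b}}
print(ac3(doms, cons), doms)
```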

  5. Do new wipe materials outperform traditional lead dust cleaning methods?

    PubMed

    Lewis, Roger D; Ong, Kee Hean; Emo, Brett; Kennedy, Jason; Brown, Christopher A; Condoor, Sridhar; Thummalakunta, Laxmi

    2012-01-01

    The lead reduction achieved by traditional methods (vacuuming and wet wiping) was greater and more consistent than that of the new methods (electrostatic dry cloth and wet Swiffer mop). Vacuuming and wet wiping achieved lead reductions of 92% ± 4% and 91% ± 4%, respectively, while the electrostatic dry cloth and wet Swiffer mops achieved lead reductions of only 89% ± 8% and 81% ± 17%, respectively. PMID:22746281

  6. Pattern recognition control outperforms conventional myoelectric control in upper limb patients with targeted muscle reinnervation.

    PubMed

    Hargrove, Levi J; Lock, Blair A; Simon, Ann M

    2013-01-01

    Pattern recognition myoelectric control shows great promise as an alternative to conventional amplitude-based control for multiple degree of freedom prosthetic limbs. Many studies have reported pattern recognition classification error performances of less than 10% during offline tests; however, it remains unclear how this translates to real-time control performance. In this contribution, we compare the real-time control performances between pattern recognition and direct myoelectric control (a popular form of conventional amplitude control) for participants who had received targeted muscle reinnervation. The real-time performance was evaluated during three tasks: (1) a box and blocks task, (2) a clothespin relocation task, and (3) a block stacking task. Our results found that pattern recognition significantly outperformed direct control for all three performance tasks. Furthermore, it was found that pattern recognition could be configured much more quickly. The classification error of the pattern recognition systems used by the patients was found to be 16% (±1.6%), suggesting that systems with this error rate may still provide excellent control. Finally, patients qualitatively preferred using pattern recognition control and reported the resulting control to be smoother and more consistent. PMID:24110008

  7. Surface hopping outperforms secular Redfield theory when reorganization energies range from small to moderate (and nuclei are classical)

    SciTech Connect

    Landry, Brian R.; Subotnik, Joseph E.

    2015-03-14

    We evaluate the accuracy of Tully’s surface hopping algorithm for the spin-boson model in the limit of small to moderate reorganization energy. We calculate transition rates between diabatic surfaces in the exciton basis and compare against exact results from the hierarchical equations of motion; we also compare against approximate rates from the secular Redfield equation and Ehrenfest dynamics. We show that decoherence-corrected surface hopping performs very well in this regime, agreeing with secular Redfield theory for very weak system-bath coupling and outperforming secular Redfield theory for moderate system-bath coupling. Surface hopping can also be extended beyond the Markovian limits of standard Redfield theory. Given previous work [B. R. Landry and J. E. Subotnik, J. Chem. Phys. 137, 22A513 (2012)] that establishes the accuracy of decoherence-corrected surface-hopping in the Marcus regime, this work suggests that surface hopping may well have a very wide range of applicability.

  8. A novel iris segmentation algorithm based on small eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Harish, B. S.; Aruna Kumar, S. V.; Guru, D. S.; Ngo, Minh Ngoc

    2015-12-01

    In this paper, a simple and robust algorithm is proposed for iris segmentation. The proposed method consists of two steps. In the first step, the iris and pupil are segmented using the Robust Spatial Kernel FCM (RSKFCM) algorithm. RSKFCM is based on the traditional Fuzzy c-Means (FCM) algorithm, incorporates spatial information, and uses a kernel metric as the distance measure. In the second step, the small eigenvalue transformation is applied to localize the iris boundary. The transformation is based on statistical and geometrical properties of the small eigenvalue of the covariance matrix of a set of edge pixels. Extensive experiments are carried out on standard benchmark iris datasets (viz. CASIA-IrisV4 and UBIRIS.v2). We compared our proposed method with existing iris segmentation methods. Our proposed method has the lowest time complexity, O(n(i+p)). The results of the experiments show that the proposed algorithm outperforms the existing iris segmentation methods.

  9. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching. PMID:26353063
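
    A minimal usage sketch of approximate nearest-neighbor matching through OpenCV's FLANN bindings, which the abstract mentions. The index and search parameters are the conventional tutorial defaults rather than tuned values, and the random descriptors stand in for real SIFT-like features.

```python
# Approximate nearest-neighbor matching with OpenCV's FLANN-based matcher.
import cv2
import numpy as np

FLANN_INDEX_KDTREE = 1                      # randomized k-d forest (for float descriptors)
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)             # more checks -> higher precision, slower search

matcher = cv2.FlannBasedMatcher(index_params, search_params)

# Two sets of 128-D descriptors (random stand-ins for real features).
des_query = np.random.rand(500, 128).astype(np.float32)
des_train = np.random.rand(1000, 128).astype(np.float32)

matches = matcher.knnMatch(des_query, des_train, k=2)

# Lowe-style ratio test to keep only distinctive matches.
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
print(f"{len(good)} good matches of {len(matches)}")
```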

  10. PhyPA: Phylogenetic method with pairwise sequence alignment outperforms likelihood methods in phylogenetics involving highly diverged sequences.

    PubMed

    Xia, Xuhua

    2016-09-01

    While pairwise sequence alignment (PSA) by dynamic programming is guaranteed to generate one of the optimal alignments, multiple sequence alignment (MSA) of highly divergent sequences often results in poorly aligned sequences, plaguing all subsequent phylogenetic analysis. One way to avoid this problem is to use only PSA to reconstruct phylogenetic trees, which can only be done with distance-based methods. I compared the accuracy of this new computational approach (named PhyPA for phylogenetics by pairwise alignment) against the maximum likelihood method using MSA (the ML+MSA approach), based on nucleotide, amino acid and codon sequences simulated with different topologies and tree lengths. I present a surprising discovery that the fast PhyPA method consistently outperforms the slow ML+MSA approach for highly diverged sequences even when all optimization options were turned on for the ML+MSA approach. Only when sequences are not highly diverged (i.e., when a reliable MSA can be obtained) does the ML+MSA approach outperform PhyPA. The true topologies are always recovered by ML with the true alignment from the simulation. However, with MSA derived from alignment programs such as MAFFT or MUSCLE, the recovered topology consistently has higher likelihood than that for the true topology. Thus, the failure to recover the true topology by the ML+MSA is not because of insufficient search of tree space, but because of the distortion of phylogenetic signal by MSA methods. I have implemented PhyPA in DAMBE, together with two approaches that make use of multi-gene data sets to derive phylogenetic support for subtrees, equivalent to resampling techniques such as bootstrapping and jackknifing. PMID:27377322
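
    An illustrative sketch of the core PhyPA idea: derive distances from pairwise alignments only (no MSA) and hand the resulting matrix to any distance-based tree method such as neighbor joining. It uses Biopython's pairwise2 module (deprecated in recent Biopython releases in favor of PairwiseAligner); the simple p-distance and the toy sequences are assumptions, not the distance or data used in DAMBE.

```python
# Pairwise-alignment-based distances for distance-based phylogenetics.
from Bio import pairwise2

seqs = {"A": "ACCGTTGACCA", "B": "ACGGTTGAACA", "C": "TCCGATGACTA"}

def p_distance(s1, s2):
    aln = pairwise2.align.globalxx(s1, s2)[0]          # best global pairwise alignment
    a, b = aln[0], aln[1]                               # aligned strings (with gaps)
    diffs = sum(1 for x, y in zip(a, b) if x != y)
    return diffs / len(a)

names = list(seqs)
for i, ni in enumerate(names):
    for nj in names[i + 1:]:
        print(ni, nj, round(p_distance(seqs[ni], seqs[nj]), 3))
# The resulting distance matrix can be fed to a neighbor-joining implementation.
```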

  11. Firefly algorithm with chaos

    NASA Astrophysics Data System (ADS)

    Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.

    2013-01-01

    A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
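
    A minimal sketch of a chaotic firefly algorithm on a toy objective: the standard FA movement rule with the attractiveness coefficient driven by a logistic chaotic map rather than held fixed. The particular map, parameters, and objective are illustrative assumptions, not those tuned in the study.

```python
# Firefly algorithm with a logistic chaotic map modulating attractiveness.
import numpy as np

def sphere(x):                       # toy objective: minimize the sum of squares
    return np.sum(x ** 2)

rng = np.random.default_rng(0)
n, dim, iters = 20, 5, 100
alpha, gamma = 0.2, 1.0
pop = rng.uniform(-5, 5, (n, dim))
beta_chaos = 0.7                     # logistic-map state driving attractiveness

for _ in range(iters):
    beta_chaos = 4.0 * beta_chaos * (1.0 - beta_chaos)      # logistic map, r = 4
    fitness = np.array([sphere(x) for x in pop])
    for i in range(n):
        for j in range(n):
            if fitness[j] < fitness[i]:                      # move firefly i toward brighter j
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = beta_chaos * np.exp(-gamma * r2)
                pop[i] += beta * (pop[j] - pop[i]) + alpha * (rng.random(dim) - 0.5)
        fitness[i] = sphere(pop[i])

best = pop[np.argmin([sphere(x) for x in pop])]
print("best solution:", np.round(best, 3))
```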

  12. A Study on the Optimization Performance of Fireworks and Cuckoo Search Algorithms in Laser Machining Processes

    NASA Astrophysics Data System (ADS)

    Goswami, D.; Chakraborty, S.

    2014-11-01

    Laser machining is a promising non-contact process for effective machining of difficult-to-process advanced engineering materials. Increasing interest in the use of lasers for various machining operations can be attributed to its several unique advantages, like high productivity, non-contact processing, elimination of finishing operations, adaptability to automation, reduced processing cost, improved product quality, greater material utilization, minimum heat-affected zone and green manufacturing. To achieve the best desired machining performance and high quality characteristics of the machined components, it is extremely important to determine the optimal values of the laser machining process parameters. In this paper, fireworks algorithm and cuckoo search (CS) algorithm are applied for single as well as multi-response optimization of two laser machining processes. It is observed that although almost similar solutions are obtained for both these algorithms, CS algorithm outperforms fireworks algorithm with respect to average computation time, convergence rate and performance consistency.

  13. Analyzing Enron Data: Bitmap Indexing Outperforms MySQL Queries bySeveral Orders of Magnitude

    SciTech Connect

    Stockinger, Kurt; Rotem, Doron; Shoshani, Arie; Wu, Kesheng

    2006-01-28

    FastBit is an efficient, compressed bitmap indexing technology that was developed in our group. In this report we evaluate the performance of MySQL and FastBit for analyzing the email traffic of the Enron dataset. The first finding shows that materializing the join results of several tables significantly improves the query performance. The second finding shows that FastBit outperforms MySQL by several orders of magnitude.
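
    A toy illustration of the bitmap-index idea behind FastBit (not FastBit itself, which adds compression such as WAH): each distinct column value gets a bitmap over row ids, so a multi-predicate query becomes a bitwise AND instead of a row-by-row scan.

```python
# Toy bitmap index: one bitmap (stored as a Python int) per distinct column value.
rows = [
    {"sender": "alice", "year": 2001},
    {"sender": "bob",   "year": 2001},
    {"sender": "alice", "year": 2002},
    {"sender": "carol", "year": 2001},
]

def build_bitmaps(rows, column):
    bitmaps = {}
    for i, row in enumerate(rows):
        bitmaps.setdefault(row[column], 0)
        bitmaps[row[column]] |= 1 << i          # set bit i for this value
    return bitmaps

sender_idx = build_bitmaps(rows, "sender")
year_idx = build_bitmaps(rows, "year")

# "WHERE sender = 'alice' AND year = 2001" becomes a single bitwise AND.
hits = sender_idx["alice"] & year_idx[2001]
print([i for i in range(len(rows)) if hits >> i & 1])    # -> [0]
```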

  14. The ontogeny of human point following in dogs: When younger dogs outperform older.

    PubMed

    Zaine, Isabela; Domeniconi, Camila; Wynne, Clive D L

    2015-10-01

    We investigated puppies' responsiveness to hand points differing in salience. Experiment 1 compared performance of younger (8 weeks old) and older (12 weeks) shelter pups in following pointing gestures. We hypothesized that older puppies would show better performance. Both groups followed the easy and moderate but not the difficult pointing cues. Surprisingly, the younger pups outperformed the older ones in following the moderate and difficult points. Investigation of subjects' backgrounds revealed that significantly more younger pups had experience living in human homes than did the older pups. Thus, we conducted a second experiment to isolate the variable experience. We collected additional data from older pet pups living in human homes on the same three point types and compared their performance with the shelter pups from Experiment 1. The pups living in homes accurately followed all three pointing cues. When comparing both experienced groups, the older pet pups outperformed the younger shelter ones, as predicted. When comparing the two same-age groups differing in background experience, the pups living in homes outperformed the shelter pups. A significant correlation between experience with humans and success in following less salient cues was found. The importance of ontogenetic learning in puppies' responsiveness to certain human social cues is discussed. PMID:26192336

  15. Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.

    PubMed

    Liu, Tao; Lin, Changyu; Djordjevic, Ivan B

    2016-06-27

    In this paper, we first describe a 9-symbol non-uniform signaling scheme based on Huffman code, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as the conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM, combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC coded 9-QAM scheme outperforms nonbinary LDPC coded uniform 8-QAM by at least 0.8 dB. PMID:27410549

  16. Improved satellite image compression and reconstruction via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary

    2008-10-01

    A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.

  17. Trait responses of invasive aquatic macrophyte congeners: colonizing diploid outperforms polyploid

    PubMed Central

    Grewell, Brenda J.; Skaer Thomason, Meghan J.; Futrell, Caryn J.; Iannucci, Maria; Drenovsky, Rebecca E.

    2016-01-01

    Understanding traits underlying colonization and niche breadth of invasive plants is key to developing sustainable management solutions to curtail invasions at the establishment phase, when efforts are often most effective. The aim of this study was to evaluate how two invasive congeners differing in ploidy respond to high and low resource availability following establishment from asexual fragments. Because polyploids are expected to have wider niche breadths than diploid ancestors, we predicted that a decaploid species would have superior ability to maximize resource uptake and use, and outperform a diploid congener when colonizing environments with contrasting light and nutrient availability. A mesocosm experiment was designed to test the main and interactive effects of ploidy (diploid and decaploid) and soil nutrient availability (low and high) nested within light environments (shade and sun) of two invasive aquatic plant congeners. Counter to our predictions, the diploid congener outperformed the decaploid in the early stage of growth. Although growth was similar and low in the cytotypes at low nutrient availability, the diploid species had much higher growth rate and biomass accumulation than the polyploid with nutrient enrichment, irrespective of light environment. Our results also revealed extreme differences in time to anthesis between the cytotypes. The rapid growth and earlier flowering of the diploid congener relative to the decaploid congener represent alternate strategies for establishment and success. PMID:26921139

  18. Being outperformed in an intergroup context: the relationship between group status and self-protective strategies.

    PubMed

    Redersdorff, Sandrine; Martinot, Delphine

    2009-06-01

    The present study examines the effects of group status on self-esteem when individuals are outperformed by an in-group target (Experiments 1 and 2) or an out-group (Experiment 2). The main aim was to examine different self-protective mechanisms when the current standing of the in-group vis-à-vis another group is either unfavourable (low status) or favourable (high status). Experiment 1 showed that when outperformed by an in-group target, the members of a low status group reported higher self-esteem than members of a high status group. Moreover, this effect was mediated by group identification. Experiment 2 replicated the previous results and gave rise to similar effects on investment in the group. The perceived relevance of the comparison group appeared to protect the self-esteem of high status group members. This research demonstrates the mediating role of self-protection mechanisms such as group identification and the perceived relevance of a comparison group. PMID:18922208

  19. Trait responses of invasive aquatic macrophyte congeners: colonizing diploid outperforms polyploid.

    PubMed

    Grewell, Brenda J; Skaer Thomason, Meghan J; Futrell, Caryn J; Iannucci, Maria; Drenovsky, Rebecca E

    2016-01-01

    Understanding traits underlying colonization and niche breadth of invasive plants is key to developing sustainable management solutions to curtail invasions at the establishment phase, when efforts are often most effective. The aim of this study was to evaluate how two invasive congeners differing in ploidy respond to high and low resource availability following establishment from asexual fragments. Because polyploids are expected to have wider niche breadths than diploid ancestors, we predicted that a decaploid species would have superior ability to maximize resource uptake and use, and outperform a diploid congener when colonizing environments with contrasting light and nutrient availability. A mesocosm experiment was designed to test the main and interactive effects of ploidy (diploid and decaploid) and soil nutrient availability (low and high) nested within light environments (shade and sun) of two invasive aquatic plant congeners. Counter to our predictions, the diploid congener outperformed the decaploid in the early stage of growth. Although growth was similar and low in the cytotypes at low nutrient availability, the diploid species had much higher growth rate and biomass accumulation than the polyploid with nutrient enrichment, irrespective of light environment. Our results also revealed extreme differences in time to anthesis between the cytotypes. The rapid growth and earlier flowering of the diploid congener relative to the decaploid congener represent alternate strategies for establishment and success. PMID:26921139

  20. Comparison of switching control algorithms effective in restricting the switching in the neighborhood of the origin

    NASA Astrophysics Data System (ADS)

    Joung, JinWook; Smyth, Andrew W.; Chung, Lan

    2010-06-01

    The active interaction control (AIC) system consisting of a primary structure, an auxiliary structure and an interaction element was proposed to protect the primary structure against earthquakes and winds. The objective of the AIC system in reducing the responses of the primary structure is fulfilled by activating or deactivating the switching between the engagement and the disengagement of the primary and auxiliary structures through the interaction element. The status of the interaction element is controlled by switching control algorithms. The previously developed switching control algorithms require an excessive amount of switching, which is inefficient. In this paper, the excessive amount of switching is restricted by imposing an appropriately designed switching boundary region, where switching is prohibited, on pre-designed engagement-disengagement conditions. Two different approaches are used in designing the newly proposed AID-off and AID-off2 algorithms. The AID-off2 algorithm is designed to affect deactivated switching regions explicitly, unlike the AID-off algorithm, which follows the same procedure of designing the engagement-disengagement conditions as the previously developed algorithms, by using the current status of the AIC system. Both algorithms are shown to be effective in reducing the number of switching events relative to the previously developed AID algorithm under an appropriately selected control sampling period for different earthquakes, but the AID-off2 algorithm outperforms the AID-off algorithm in reducing the number of switching events.

  1. Split Bregman's algorithm for three-dimensional mesh segmentation

    NASA Astrophysics Data System (ADS)

    Habiba, Nabi; Ali, Douik

    2016-05-01

    Variational methods have attracted a lot of attention in the literature, especially for image and mesh segmentation. The methods aim at minimizing the energy to optimize both edge and region detections. We propose a spectral mesh decomposition algorithm to obtain disjoint but meaningful regions of an input mesh. The related optimization problem is nonconvex, and it is very difficult to find a good approximation or global optimum, which represents a challenge in computer vision. We propose an alternating split Bregman algorithm for mesh segmentation, where we extended the image-dedicated model to a three-dimensional (3-D) mesh one. By applying our scheme to 3-D mesh segmentation, we obtain fast solvers that can outperform various conventional ones, such as graph-cut and primal dual methods. A consistent evaluation of the proposed method on various public domain 3-D databases for different metrics is elaborated, and a comparison with the state-of-the-art is performed.

  2. Low-Friction Minilaparoscopy Outperforms Regular 5-mm and 3-mm Instruments for Precise Tasks

    PubMed Central

    Firme, Wood A.; Lima, Diego L.; de Paula Lopes, Vladmir Goldstein; Montandon, Isabelle D.; Filho, Flavio Santos; Shadduck, Phillip P.

    2015-01-01

    Background and Objectives: Therapeutic laparoscopy was incorporated into surgical practice more than 25 y ago. Several modifications have since been developed to further minimize surgical trauma and improve results. Minilaparoscopy, performed with 2- to 3-mm instruments, was introduced in the mid-1990s but failed to attain mainstream use, mostly because of the limitations of the early devices. Buoyed by a renewed interest, new generations of mini instruments are being developed with improved functionality and durability. This study is an objective evaluation of a new set of mini instruments with a novel low-friction design. Method: Twenty-two medical students and 22 surgical residents served as study participants. Three designs of laparoscopic instruments were evaluated: conventional 5 mm, traditional 3 mm, and low-friction 3 mm. The instruments were evaluated with a standard surgical simulator, emulating 4 exercises of various complexities, testing grasping, precise 2-handed movements, and suturing. The metric measured was time to task completion, with 5 replicates for every combination of instrument–exercise–participant. Results: For all 4 tasks, the instrument design that performed the best was the same in both the medical student and surgical resident groups. For the gross-grasping task, the 5-mm conventional instruments performed best, followed by the low-friction mini instruments. For the 3 more complex and precise tasks, the low-friction mini instruments outperformed both of the other instrument designs. Conclusion: In standard surgical simulator exercises, low-friction minilaparoscopic instruments outperformed both conventional 3- and 5-mm laparoscopic instruments for precise tasks. PMID:26390530

  3. Success on Algorithmic and LOCS vs. Conceptual Chemistry Exam Questions

    NASA Astrophysics Data System (ADS)

    Zoller, Uri; Lubezky, Aviva; Nakhleh, Mary B.; Tessier, Barbara; Dori, Yehudit J.

    1995-11-01

    The performance of freshman science, engineering, and in-service teacher students in three Israeli and American universities on algorithmic, lower-order cognitive skills (LOCS), and conceptual chemistry exam questions was investigated. The driving force for the study was an interest in moving chemistry instruction from an algorithm-oriented factual recall approach dominated by LOCS to a decision-making, problem-solving, and critical thinking approach dominated by higher-order cognitive skills (HOCS). Students' responses to the specially designed algorithmic, LOCS, and conceptual exam questions were scored and analyzed for correlations and for differences between the means within and across universities by the question's category. The main findings were: (1) students in all three universities performed consistently on each of the three categories in the order of algorithmic > LOCS > conceptual questions, (2) success on algorithmic does not imply success on conceptual, or even on LOCS questions, and (3) students taught in small classes outperformed by far those in large lecture sessions in all three categories. The implied paradigm shift from an algorithmic/LOCS to a conceptual/HOCS orientation should be moved from a research-based theoretical domain to actual implementation in order for a meaningful improvement of chemistry teaching to occur.

  4. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  5. The Consistent Vehicle Routing Problem

    SciTech Connect

    Groer, Christopher S; Golden, Bruce; Wasil, Edward

    2009-01-01

    In the small package shipping industry (as in other industries), companies try to differentiate themselves by providing high levels of customer service. This can be accomplished in several ways, including online tracking of packages, ensuring on-time delivery, and offering residential pickups. Some companies want their drivers to develop relationships with customers on a route and have the same drivers visit the same customers at roughly the same time on each day that the customers need service. These service requirements, together with traditional constraints on vehicle capacity and route length, define a variant of the classical capacitated vehicle routing problem, which we call the consistent VRP (ConVRP). In this paper, we formulate the problem as a mixed-integer program and develop an algorithm to solve the ConVRP that is based on the record-to-record travel algorithm. We compare the performance of our algorithm to the optimal mixed-integer program solutions for a set of small problems and then apply our algorithm to five simulated data sets with 1,000 customers and a real-world data set with more than 3,700 customers. We provide a technique for generating ConVRP benchmark problems from vehicle routing problem instances given in the literature and provide our solutions to these instances. The solutions produced by our algorithm on all problems do a very good job of meeting customer service objectives with routes that have a low total travel time.

  6. Depth Estimation and Specular Removal for Glossy Surfaces Using Point and Line Consistency with Light-Field Cameras.

    PubMed

    Tao, Michael W; Su, Jong-Chyi; Wang, Ting-Chun; Malik, Jitendra; Ramamoorthi, Ravi

    2016-06-01

    Light-field cameras have now become available in both consumer and industrial applications, and recent papers have demonstrated practical algorithms for depth recovery from a passive single-shot capture. However, current light-field depth estimation methods are designed for Lambertian objects and fail or degrade for glossy or specular surfaces. The standard Lambertian photoconsistency measure considers the variance of different views, effectively enforcing point-consistency, i.e., that all views map to the same point in RGB space. This variance or point-consistency condition is a poor metric for glossy surfaces. In this paper, we present a novel theory of the relationship between light-field data and reflectance from the dichromatic model. We present a physically-based and practical method to estimate the light source color and separate specularity. We present a new photo consistency metric, line-consistency, which represents how viewpoint changes affect specular points. We then show how the new metric can be used in combination with the standard Lambertian variance or point-consistency measure to give us results that are robust against scenes with glossy surfaces. With our analysis, we can also robustly estimate multiple light source colors and remove the specular component from glossy objects. We show that our method outperforms current state-of-the-art specular removal and depth estimation algorithms in multiple real world scenarios using the consumer Lytro and Lytro Illum light field cameras. PMID:26372203

  7. Multisensor data fusion algorithm development

    SciTech Connect

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
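
    A minimal sketch of wavelet-based image fusion in the spirit of the report's first algorithm: decompose both images with a 2-D DWT, merge the coefficients, and invert. The particular fusion rule (average the approximations, keep the larger-magnitude details) is a common textbook choice and an assumption here, not necessarily the rule used in the report.

```python
# Wavelet-domain fusion of two co-registered images using PyWavelets.
import numpy as np
import pywt

def fuse_images(img_a, img_b, wavelet="db2"):
    ca, (ch_a, cv_a, cd_a) = pywt.dwt2(img_a, wavelet)
    cb, (ch_b, cv_b, cd_b) = pywt.dwt2(img_b, wavelet)
    approx = 0.5 * (ca + cb)                                   # blend low-frequency content
    details = tuple(np.where(np.abs(a) >= np.abs(b), a, b)     # keep stronger edges/texture
                    for a, b in ((ch_a, ch_b), (cv_a, cv_b), (cd_a, cd_b)))
    return pywt.idwt2((approx, details), wavelet)

# Example with synthetic stand-ins for a multispectral band and a panchromatic band.
multispectral = np.random.rand(128, 128)
panchromatic = np.random.rand(128, 128)
fused = fuse_images(multispectral, panchromatic)
print(fused.shape)
```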

  8. A paclitaxel-loaded recombinant polypeptide nanoparticle outperforms Abraxane in multiple murine cancer models

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Jayanta; Bellucci, Joseph J.; Weitzhandler, Isaac; McDaniel, Jonathan R.; Spasojevic, Ivan; Li, Xinghai; Lin, Chao-Chieh; Chi, Jen-Tsan Ashley; Chilkoti, Ashutosh

    2015-08-01

    Packaging clinically relevant hydrophobic drugs into a self-assembled nanoparticle can improve their aqueous solubility, plasma half-life, tumour-specific uptake and therapeutic potential. To this end, here we conjugated paclitaxel (PTX) to recombinant chimeric polypeptides (CPs) that spontaneously self-assemble into ~60 nm near-monodisperse nanoparticles that increased the systemic exposure of PTX by sevenfold compared with free drug and twofold compared with the Food and Drug Administration-approved taxane nanoformulation (Abraxane). The tumour uptake of the CP-PTX nanoparticle was fivefold greater than free drug and twofold greater than Abraxane. In a murine cancer model of human triple-negative breast cancer and prostate cancer, CP-PTX induced near-complete tumour regression after a single dose in both tumour models, whereas at the same dose, no mice treated with Abraxane survived for >80 days (breast) and 60 days (prostate), respectively. These results show that a molecularly engineered nanoparticle with precisely engineered design features outperforms Abraxane, the current gold standard for PTX delivery.

  9. Collective Intelligence Meets Medical Decision-Making: The Collective Outperforms the Best Radiologist

    PubMed Central

    Wolf, Max; Krause, Jens; Carney, Patricia A.; Bogart, Andy; Kurvers, Ralf H. J. M.

    2015-01-01

    While collective intelligence (CI) is a powerful approach to increase decision accuracy, few attempts have been made to unlock its potential in medical decision-making. Here we investigated the performance of three well-known collective intelligence rules (“majority”, “quorum”, and “weighted quorum”) when applied to mammography screening. For any particular mammogram, these rules aggregate the independent assessments of multiple radiologists into a single decision (recall the patient for additional workup or not). We found that, compared to single radiologists, any of these CI-rules both increases true positives (i.e., recalls of patients with cancer) and decreases false positives (i.e., recalls of patients without cancer), thereby overcoming one of the fundamental limitations to decision accuracy that individual radiologists face. Importantly, we find that all CI-rules systematically outperform even the best-performing individual radiologist in the respective group. Our findings demonstrate that CI can be employed to improve mammography screening; similarly, CI may have the potential to improve medical decision-making in a much wider range of contexts, including many areas of diagnostic imaging and, more generally, diagnostic decisions that are based on the subjective interpretation of evidence. PMID:26267331

  10. Delignification outperforms alkaline extraction for xylan fingerprinting of oil palm empty fruit bunch.

    PubMed

    Murciano Martínez, Patricia; Kabel, Mirjam A; Gruppen, Harry

    2016-11-20

    Enzyme hydrolysed (hemi-)celluloses from oil palm empty fruit bunches (EFBs) are a source for production of bio-fuels or chemicals. In this study, after either peracetic acid delignification or alkaline extraction, EFB hemicellulose structures were described, aided by xylanase hydrolysis. Delignification of EFB facilitated the hydrolysis of EFB-xylan by a pure endo-β-1,4-xylanase. Up to 91% (w/w) of the non-extracted xylan in the delignified EFB was hydrolysed compared to less than 4% (w/w) of that in untreated EFB. Alkaline extraction of EFB, without prior delignification, yielded only 50% of the xylan. The xylan obtained was hydrolysed only for 40% by the endo-xylanase used. Hence, delignification alone outperformed alkaline extraction as pretreatment for enzymatic fingerprinting of EFB xylans. From the analysis of the oligosaccharide-fingerprint of the delignified endo-xylanase hydrolysed EFB xylan, the structure was proposed as acetylated 4-O-methylglucuronoarabinoxylan. PMID:27561506

  11. A Paclitaxel-Loaded Recombinant Polypeptide Nanoparticle Outperforms Abraxane in Multiple Murine Cancer Models

    PubMed Central

    Bhattacharyya, Jayanta; Bellucci, Joseph J.; Weitzhandler, Isaac; McDaniel, Jonathan R.; Spasojevic, Ivan; Li, Xinghai; Lin, Chao-Chieh; Chi, Jen-Tsan Ashley; Chilkoti, Ashutosh

    2015-01-01

    Packaging clinically relevant hydrophobic drugs into a self-assembled nanoparticle can improve their aqueous solubility, plasma half-life, tumor specific uptake and therapeutic potential. To this end, here we conjugated paclitaxel (PTX) to recombinant chimeric polypeptides (CPs) that spontaneously self-assemble into ~60-nm diameter near-monodisperse nanoparticles that increased the systemic exposure of PTX by 7-fold compared to free drug and 2-fold compared to the FDA approved taxane nanoformulation (Abraxane®). The tumor uptake of the CP-PTX nanoparticle was 5-fold greater than free drug and 2-fold greater than Abraxane. In a murine cancer model of human triple negative breast cancer and prostate cancer, CP-PTX induced near complete tumor regression after a single dose in both tumor models, whereas at the same dose, no mice treated with Abraxane survived for more than 80 days (breast) and 60 days (prostate) respectively. These results show that a molecularly engineered nanoparticle with precisely engineered design features outperforms Abraxane, the current gold standard for paclitaxel delivery. PMID:26239362

  12. Gender differences in primary and secondary education: Are girls really outperforming boys?

    NASA Astrophysics Data System (ADS)

    Driessen, Geert; van Langen, Annemarie

    2013-06-01

    A moral panic has broken out in several countries after recent studies showed that girls were outperforming boys in education. Commissioned by the Dutch Ministry of Education, the present study examines the position of boys and girls in Dutch primary education and in the first phase of secondary education over the past ten to fifteen years. On the basis of several national and international large-scale databases, the authors examined whether one can indeed speak of a gender gap, at the expense of boys. Three domains were investigated, namely cognitive competencies, non-cognitive competencies, and school career features. The results as expressed in effect sizes show that there are hardly any differences with regard to language and mathematics proficiency. However, the position of boys in terms of educational level and attitudes and behaviour is much more unfavourable than that of girls. Girls, on the other hand, score more unfavourably with regard to sector and subject choice. While the present situation in general does not differ very much from that of a decade ago, it is difficult to predict in what way the balances might shift in the years to come.

  13. Plants adapted to warmer climate do not outperform regional plants during a natural heat wave.

    PubMed

    Bucharova, Anna; Durka, Walter; Hermann, Julia-Maria; Hölzel, Norbert; Michalski, Stefan; Kollmann, Johannes; Bossdorf, Oliver

    2016-06-01

    With ongoing climate change, many plant species may not be able to adapt rapidly enough, and some conservation experts are therefore considering to translocate warm-adapted ecotypes to mitigate effects of climate warming. Although this strategy, called assisted migration, is intuitively plausible, most of the support comes from models, whereas experimental evidence is so far scarce. Here we present data on multiple ecotypes of six grassland species, which we grew in four common gardens in Germany during a natural heat wave, with temperatures 1.4-2.0°C higher than the long-term means. In each garden we compared the performance of regional ecotypes with plants from a locality with long-term summer temperatures similar to what the plants experienced during the summer heat wave. We found no difference in performance between regional and warm-adapted plants in four of the six species. In two species, regional ecotypes even outperformed warm-adapted plants, despite elevated temperatures, which suggests that translocating warm-adapted ecotypes may not only lack the desired effect of increased performance but may even have negative consequences. Even if adaptation to climate plays a role, other factors involved in local adaptation, such as biotic interactions, may override it. Based on our results, we cannot advocate assisted migration as a universal tool to enhance the performance of local plant populations and communities during climate change. PMID:27516871

  14. Layered carbon nanotube-polyelectrolyte electrodes outperform traditional neural interface materials.

    PubMed

    Jan, Edward; Hendricks, Jeffrey L; Husaini, Vincent; Richardson-Burns, Sarah M; Sereno, Andrew; Martin, David C; Kotov, Nicholas A

    2009-12-01

    The safety, function, and longevity of implantable neuroprosthetic and cardiostimulating electrodes depend heavily on the electrical properties of the electrode-tissue interface, which in many cases requires substantial improvement. While different variations of carbon nanotube materials have been shown to be suitable for neural excitation, it is critical to evaluate them against other materials used for bioelectrical interfacing, which has not been done in any study so far despite strong interest in this area. In this study, we carried out this evaluation and found that composite multiwalled carbon nanotube-polyelectrolyte (MWNT-PE) multilayer electrodes substantially outperform, in one way or another, the state-of-the-art neural interface materials available today, namely activated electrochemically deposited iridium oxide (IrOx) and poly(3,4-ethylenedioxythiophene) (PEDOT). Our findings provide concrete experimental proof of the much-discussed possibility that carbon nanotube composites can serve as an excellent new material for neural interfacing, with a strong possibility of leading to a new generation of implantable electrodes. PMID:19785391

  15. Collective intelligence meets medical decision-making: the collective outperforms the best radiologist.

    PubMed

    Wolf, Max; Krause, Jens; Carney, Patricia A; Bogart, Andy; Kurvers, Ralf H J M

    2015-01-01

    While collective intelligence (CI) is a powerful approach to increase decision accuracy, few attempts have been made to unlock its potential in medical decision-making. Here we investigated the performance of three well-known collective intelligence rules ("majority", "quorum", and "weighted quorum") when applied to mammography screening. For any particular mammogram, these rules aggregate the independent assessments of multiple radiologists into a single decision (recall the patient for additional workup or not). We found that, compared to single radiologists, any of these CI-rules both increases true positives (i.e., recalls of patients with cancer) and decreases false positives (i.e., recalls of patients without cancer), thereby overcoming one of the fundamental limitations to decision accuracy that individual radiologists face. Importantly, we find that all CI-rules systematically outperform even the best-performing individual radiologist in the respective group. Our findings demonstrate that CI can be employed to improve mammography screening; similarly, CI may have the potential to improve medical decision-making in a much wider range of contexts, including many areas of diagnostic imaging and, more generally, diagnostic decisions that are based on the subjective interpretation of evidence. PMID:26267331

  16. A Mozart is not a Pavarotti: singers outperform instrumentalists on foreign accent imitation

    PubMed Central

    Christiner, Markus; Reiterer, Susanne Maria

    2015-01-01

    Recent findings have shown that people with higher musical aptitude were also better in oral language imitation tasks. However, whether singing capacity and instrument playing contribute differently to the imitation of speech has been ignored so far. Research has just recently started to understand that instrumentalists develop quite distinct skills when compared to vocalists. In the same vein, the role of the vocal motor system in language acquisition has been poorly investigated, as most investigations (neurobiological and behavioral) focus on speech perception. We set out to test whether the vocal motor system can influence the ability to learn, produce and perceive new languages by contrasting instrumentalists and vocalists. Therefore, we investigated 96 participants: 27 instrumentalists, 33 vocalists and 36 non-musicians/non-singers. They were tested on their ability to imitate foreign speech in an unknown language (Hindi) and a second language (English), and on their musical aptitude. Results revealed that both instrumentalists and vocalists have a higher ability to imitate unintelligible speech and foreign accents than non-musicians/non-singers. Within the musician group, vocalists outperformed instrumentalists significantly. Conclusion: First, adaptive plasticity for speech imitation is not reliant on audition alone but also on vocal-motor induced processes. Second, the vocal flexibility of singers goes together with higher speech imitation aptitude. Third, vocal motor training, as in singers, may speed up foreign language acquisition processes. PMID:26379537

  17. Evaluating super resolution algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun

    2011-01-01

    This study intends to establish a sound testing and evaluation methodology based upon human visual characteristics for assessing image restoration accuracy, in addition to comparing the subjective results with predictions from objective evaluation methods. In total, six different super resolution (SR) algorithms - iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), non-uniform interpolation, and a frequency domain approach - were selected. The performance comparison between the SR algorithms in terms of their restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the paired comparison method, which involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods are implemented. Consequently, POCS and non-uniform interpolation outperformed the others in an ideal situation, while restoration-based methods appear more accurate to the HR image in a real-world case where prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of those methods. The latest International Commission on Illumination (CIE) standard color difference equation, CIEDE2000, was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of those SR algorithms.
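
    A hedged sketch of the objective-evaluation step described above: score a restored image against its high-resolution reference with the CIEDE2000 color difference using scikit-image. Lower mean ΔE00 indicates a more faithful restoration; the random arrays are placeholders for real reference/restored image pairs.

```python
# Objective restoration-accuracy scoring with the CIEDE2000 color difference.
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

reference = np.random.rand(256, 256, 3)     # ground-truth HR image in [0, 1] RGB
restored = np.clip(reference + 0.02 * np.random.randn(256, 256, 3), 0, 1)

delta_e = deltaE_ciede2000(rgb2lab(reference), rgb2lab(restored))
print("mean CIEDE2000 error:", float(delta_e.mean()))
```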

  18. Dynamic classification using case-specific training cohorts outperforms static gene expression signatures in breast cancer

    PubMed Central

    Győrffy, Balázs; Karn, Thomas; Sztupinszki, Zsófia; Weltz, Boglárka; Müller, Volkmar; Pusztai, Lajos

    2015-01-01

    The molecular diversity of breast cancer makes it impossible to identify prognostic markers that are applicable to all breast cancers. To overcome limitations of previous multigene prognostic classifiers, we propose a new dynamic predictor: instead of using a single universal training cohort and an identical list of informative genes to predict the prognosis of new cases, a case-specific predictor is developed for each test case. Gene expression data from 3,534 breast cancers with clinical annotation including relapse-free survival is analyzed. For each test case, we select a case-specific training subset including only molecularly similar cases and a case-specific predictor is generated. This method yields different training sets and different predictors for each new patient. The model performance was assessed in leave-one-out validation and also in 325 independent cases. Prognostic discrimination was high for all cases (n = 3,534, HR = 3.68, p = 1.67 E−56). The dynamic predictor showed higher overall accuracy (0.68) than genomic surrogates for Oncotype DX (0.64), Genomic Grade Index (0.61) or MammaPrint (0.47). The dynamic predictor was also effective in triple-negative cancers (n = 427, HR = 3.08, p = 0.0093) where the above classifiers all failed. Validation in independent patients yielded similar classification power (HR = 3.57). The dynamic classifier is available online at http://www.recurrenceonline.com/?q=Re_training. In summary, we developed a new method to make personalized prognostic prediction using case-specific training cohorts. The dynamic predictors outperform static models developed from single historical training cohorts and they also predict well in triple-negative cancers. PMID:25274406

  19. Do Cultivated Varieties of Native Plants Have the Ability to Outperform Their Wild Relatives?

    PubMed Central

    Schröder, Roland; Prasse, Rüdiger

    2013-01-01

    Vast amounts of cultivars of native plants are annually introduced into the semi-natural range of their wild relatives for re-vegetation and restoration. As cultivars are often selected towards enhanced biomass production and might transfer these traits into wild relatives by hybridization, it is suggested that cultivars and the wild × cultivar hybrids are competitively superior to their wild relatives. The release of such varieties may therefore result in unintended changes in native vegetation. In this study we examined for two species frequently used in re-vegetation (Plantago lanceolata and Lotus corniculatus) whether cultivars and artificially generated intra-specific wild × cultivar hybrids may produce a higher vegetative and generative biomass than their wilds. For that purpose a competition experiment was conducted for two growing seasons in a common garden. Every plant type was grown (a) alone, (b) in pairwise combination with the same plant type and (c) in pairwise combination with a different plant type. When competing with wilds, cultivars of both species showed larger biomass production than their wilds in the first year only, and hybrids showed larger biomass production than their wild relatives in both study years. As biomass production is an important factor determining fitness and competitive ability, we conclude that cultivars and hybrids are competitively superior to their wild relatives. However, cultivars of both species experienced large fitness reductions (nearly complete mortality in L. corniculatus) due to local climatic conditions. We conclude that cultivars are good competitors only as long as they are not subjected to stressful environmental factors. As hybrids seemed to inherit both the ability to cope with the local climatic conditions from their wild parents and the enhanced competitive strength from their cultivars, we regard them as strong competitors and assume that they are able to outperform their wilds at least over

  20. Performance appraisal of estimation algorithms and application of estimation algorithms to target tracking

    NASA Astrophysics Data System (ADS)

    Zhao, Zhanlue

    This dissertation consists of two parts. The first part deals with the performance appraisal of estimation algorithms. The second part focuses on the application of estimation algorithms to target tracking. Performance appraisal is crucial for understanding, developing and comparing various estimation algorithms. In particular, with the evolvement of estimation theory and the increase of problem complexity, performance appraisal is getting more and more challenging for engineers to make comprehensive conclusions. However, the existing theoretical results are inadequate for practical reference. The first part of this dissertation is dedicated to performance measures which include local performance measures, global performance measures and model distortion measure. The second part focuses on application of the recursive best linear unbiased estimation (BLUE), or linear minimum mean-square error (LMMSE), estimation to nonlinear measurement problems in target tracking. The Kalman filter has been the dominant basis for dynamic state filtering for several decades. Beyond the Kalman filter, a more fundamental basis for the recursive best linear unbiased filtering has been thoroughly investigated in a series of papers by my advisor Dr. X. Rong Li. Based on the so-called quasi-recursive best linear unbiased filtering technique, the linear-Gaussian assumptions of the Kalman filter can be relaxed such that a general linear filtering technique for nonlinear systems can be achieved. An approximate optimal BLUE filter is implemented for nonlinear measurements in target tracking which outperforms the existing method significantly in terms of accuracy, credibility and robustness.
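
    For orientation only, the following Python sketch shows a standard Kalman-style linear MMSE measurement update; it fixes the notation behind the recursive BLUE idea discussed above but is not the dissertation's approximate optimal BLUE filter for nonlinear measurements.

      import numpy as np

      def lmmse_update(x_pred, P_pred, z, H, R):
          """x_pred, P_pred: predicted state mean and covariance; z: measurement;
          H: measurement matrix; R: measurement noise covariance."""
          S = H @ P_pred @ H.T + R                # innovation covariance
          K = P_pred @ H.T @ np.linalg.inv(S)     # gain, i.e. the BLUE weighting
          x_upd = x_pred + K @ (z - H @ x_pred)   # updated state estimate
          P_upd = P_pred - K @ S @ K.T            # updated covariance
          return x_upd, P_upd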

  1. Does Cognitive Behavioral Therapy for Youth Anxiety Outperform Usual Care in Community Clinics? An Initial Effectiveness Test

    ERIC Educational Resources Information Center

    Southam-Gerow, Michael A.; Weisz, John R.; Chu, Brian C.; McLeod, Bryce D.; Gordis, Elana B.; Connor-Smith, Jennifer K.

    2010-01-01

    Objective: Most tests of cognitive behavioral therapy (CBT) for youth anxiety disorders have shown beneficial effects, but these have been efficacy trials with recruited youths treated by researcher-employed therapists. One previous (nonrandomized) trial in community clinics found that CBT did not outperform usual care (UC). The present study used…

  2. After Two Years, Three Elementary Math Curricula Outperform a Fourth. NCEE Technical Appendix. NCEE 2013-4019

    ERIC Educational Resources Information Center

    Agodini, Roberto; Harris, Barbara; Remillard, Janine; Thomas, Melissa

    2013-01-01

    This appendix provides the details that underlie the analyses reported in the evaluation brief, "After Two Years, Three Elementary Math Curricula Outperform a Fourth." The details are organized in six sections: Study Curricula and Design (Section A), Data Collection (Section B), Construction of the Analysis File (Section C), Curriculum Effects on…

  3. Neighborhood inverse consistency preprocessing

    SciTech Connect

    Freuder, E.C.; Elfe, C.D.

    1996-12-31

    Constraint satisfaction consistency preprocessing methods are used to reduce search effort. Time and especially space costs limit the amount of preprocessing that will be cost effective. A new form of consistency preprocessing, neighborhood inverse consistency, can achieve more problem pruning than the usual arc consistency preprocessing in a cost effective manner. There are two basic ideas: (1) Common forms of consistency enforcement basically operate by identifying and remembering solutions to subproblems for which a consistent value cannot be found for some additional problem variable. The space required for this memory can quickly become prohibitive. Inverse consistency basically operates by removing values for variables that are not consistent with any solution to some subproblem involving additional variables. The space requirement is at worst linear. (2) Typically consistency preprocessing achieves some level of consistency uniformly throughout the problem. A subproblem solution will be tested against each additional variable that constrains any subproblem variable. Neighborhood consistency focuses attention on the subproblem formed by the variables that are all constrained by the value in question. By targeting highly relevant subproblems we hope to "skim the cream", obtaining a high payoff for a limited cost.
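
    The following Python sketch spells out the pruning rule for a tiny binary CSP (a brute-force illustration, not the paper's algorithm or data structures): a value is kept only if it extends to some consistent assignment of the variable's whole neighborhood. The representation of domains and constraints is an assumption made for this example.

      from itertools import product

      def ok(u, a, v, b, constraints):
          # constraints maps an ordered pair (u, v) to the set of allowed value pairs
          if (u, v) in constraints:
              return (a, b) in constraints[(u, v)]
          if (v, u) in constraints:
              return (b, a) in constraints[(v, u)]
          return True                                    # unconstrained pair

      def nic_prune(domains, constraints):
          """domains: {variable: set(values)}; prunes values with no neighborhood support."""
          changed = True
          while changed:
              changed = False
              for x in domains:
                  nbrs = sorted({w for pair in constraints for w in pair if x in pair} - {x})
                  for a in list(domains[x]):
                      # keep a only if some assignment to the whole neighborhood is
                      # consistent with x = a and internally consistent
                      supported = any(
                          all(ok(x, a, n, asg[i], constraints) for i, n in enumerate(nbrs))
                          and all(ok(nbrs[i], asg[i], nbrs[j], asg[j], constraints)
                                  for i in range(len(nbrs)) for j in range(i + 1, len(nbrs)))
                          for asg in product(*(domains[n] for n in nbrs)))
                      if not supported:
                          domains[x].discard(a)
                          changed = True
          return domains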

  4. Consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of the MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language. Subsequently, verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. The verification of consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of consistency of the series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.

  5. Computations and algorithms in physical and biological problems

    NASA Astrophysics Data System (ADS)

    Qin, Yu

    This dissertation presents the applications of state-of-the-art computation techniques and data analysis algorithms in three physical and biological problems: assembling DNA pieces, optimizing self-assembly yield, and identifying correlations from large multivariate datasets. In the first topic, in-depth analysis of using Sequencing by Hybridization (SBH) to reconstruct target DNA sequences shows that a modified reconstruction algorithm can overcome the theoretical boundary without the need for different types of biochemical assays and is robust to error. In the second topic, consistent with theoretical predictions, simulations using Graphics Processing Unit (GPU) demonstrate how controlling the short-ranged interactions between particles and controlling the concentrations optimize the self-assembly yield of a desired structure, and nonequilibrium behavior when optimizing concentrations is also unveiled by leveraging the computation capacity of GPUs. In the last topic, a methodology to incorporate existing categorization information into the search process to efficiently reconstruct the optimal true correlation matrix for multivariate datasets is introduced. Simulations on both synthetic and real financial datasets show that the algorithm is able to detect signals below the Random Matrix Theory (RMT) threshold. These three problems are representative of using massive computation techniques and data analysis algorithms to tackle optimization problems, and of outperforming the theoretical boundary when prior information is incorporated into the computation.

  6. Improved DTI registration allows voxel-based analysis that outperforms tract-based spatial statistics.

    PubMed

    Schwarz, Christopher G; Reid, Robert I; Gunter, Jeffrey L; Senjem, Matthew L; Przybelski, Scott A; Zuk, Samantha M; Whitwell, Jennifer L; Vemuri, Prashanthi; Josephs, Keith A; Kantarci, Kejal; Thompson, Paul M; Petersen, Ronald C; Jack, Clifford R

    2014-07-01

    Tract-Based Spatial Statistics (TBSS) is a popular software pipeline to coregister sets of diffusion tensor Fractional Anisotropy (FA) images for performing voxel-wise comparisons. It is primarily defined by its skeleton projection step intended to reduce effects of local misregistration. A white matter "skeleton" is computed by morphological thinning of the inter-subject mean FA, and then all voxels are projected to the nearest location on this skeleton. Here we investigate several enhancements to the TBSS pipeline based on recent advances in registration for other modalities, principally based on groupwise registration with the ANTS-SyN algorithm. We validate these enhancements using simulation experiments with synthetically-modified images. When used with these enhancements, we discover that TBSS's skeleton projection step actually reduces algorithm accuracy, as the improved registration leaves fewer errors to warrant correction, and the effects of this projection's compromises become stronger than those of its benefits. In our experiments, our proposed pipeline without skeleton projection is more sensitive for detecting true changes and has greater specificity in resisting false positives from misregistration. We also present comparative results of the proposed and traditional methods, both with and without the skeleton projection step, on three real-life datasets: two comparing differing populations of Alzheimer's disease patients to matched controls, and one comparing progressive supranuclear palsy patients to matched controls. The proposed pipeline produces more plausible results according to each disease's pathophysiology. PMID:24650605

  7. Indexing Consistency and Quality.

    ERIC Educational Resources Information Center

    Zunde, Pranas; Dexter, Margaret E.

    A measure of indexing consistency is developed based on the concept of 'fuzzy sets'. It assigns a higher consistency value if indexers agree on the more important terms than if they agree on less important terms. Measures of the quality of an indexer's work and exhaustivity of indexing are also proposed. Experimental data on indexing consistency…

  8. Epipolar Consistency in Transmission Imaging.

    PubMed

    Aichert, André; Berger, Martin; Wang, Jian; Maass, Nicole; Doerfler, Arnd; Hornegger, Joachim; Maier, Andreas K

    2015-11-01

    This paper presents the derivation of the Epipolar Consistency Conditions (ECC) between two X-ray images from the Beer-Lambert law of X-ray attenuation and the Epipolar Geometry of two pinhole cameras, using Grangeat's theorem. We motivate the use of Oriented Projective Geometry to express redundant line integrals in projection images and define a consistency metric, which can be used, for instance, to estimate patient motion directly from a set of X-ray images. We describe in detail the mathematical tools to implement an algorithm to compute the Epipolar Consistency Metric and investigate its properties with detailed random studies on both artificial and real FD-CT data. A set of six reference projections of the CT scan of a fish were used to evaluate accuracy and precision of compensating for random disturbances of the ground truth projection matrix using an optimization of the consistency metric. In addition, we use three X-ray images of a pumpkin to prove applicability to real data. We conclude that the metric might have potential in applications related to the estimation of projection geometry. By expression of redundancy between two arbitrary projection views, we in fact support any device or acquisition trajectory which uses a cone-beam geometry. We discuss certain geometric situations, where the ECC provide the ability to correct 3D motion, without the need for 3D reconstruction. PMID:25915956

  9. Outperforming whom? A multilevel study of performance-prove goal orientation, performance, and the moderating role of shared team identification.

    PubMed

    Dietz, Bart; van Knippenberg, Daan; Hirst, Giles; Restubog, Simon Lloyd D

    2015-11-01

    Performance-prove goal orientation affects performance because it drives people to try to outperform others. A proper understanding of the performance-motivating potential of performance-prove goal orientation requires, however, that we consider the question of whom people desire to outperform. In a multilevel analysis of this issue, we propose that the shared team identification of a team plays an important moderating role here, directing the performance-motivating influence of performance-prove goal orientation to either the team level or the individual level of performance. A multilevel study of salespeople nested in teams supports this proposition, showing that performance-prove goal orientation motivates team performance more with higher shared team identification, whereas performance-prove goal orientation motivates individual performance more with lower shared team identification. Establishing the robustness of these findings, a second study replicates them with individual and team performance in an educational context. PMID:26011723

  10. Consistent interactions and involution

    NASA Astrophysics Data System (ADS)

    Kaparulin, D. S.; Lyakhovich, S. L.; Sharapov, A. A.

    2013-01-01

    Starting from the concept of involution of field equations, a universal method is proposed for constructing consistent interactions between the fields. The method equally well applies to the Lagrangian and non-Lagrangian equations and it is explicitly covariant. No auxiliary fields are introduced. The equations may have (or have no) gauge symmetry and/or second class constraints in Hamiltonian formalism, providing the theory admits a Hamiltonian description. In every case the method identifies all the consistent interactions.

  11. Improved progressive TIN densification filtering algorithm for airborne LiDAR data in forested areas

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaoqian; Guo, Qinghua; Su, Yanjun; Xue, Baolin

    2016-07-01

    Filtering of light detection and ranging (LiDAR) data into the ground and non-ground points is a fundamental step in processing raw airborne LiDAR data. This paper proposes an improved progressive triangulated irregular network (TIN) densification (IPTD) filtering algorithm that can cope with a variety of forested landscapes, particularly both topographically and environmentally complex regions. The IPTD filtering algorithm consists of three steps: (1) acquiring potential ground seed points using the morphological method; (2) obtaining accurate ground seed points; and (3) building a TIN-based model and iteratively densifying TIN. The IPTD filtering algorithm was tested in 15 forested sites with various terrains (i.e., elevation and slope) and vegetation conditions (i.e., canopy cover and tree height), and was compared with seven other commonly used filtering algorithms (including morphology-based, slope-based, and interpolation-based filtering algorithms). Results show that the IPTD achieves the highest filtering accuracy for nine of the 15 sites. In general, it outperforms the other filtering algorithms, yielding the lowest average total error of 3.15% and the highest average kappa coefficient of 89.53%.
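
    As a rough sketch of the densification idea (deliberately simplified, not the IPTD algorithm: the thresholds, grid size, and use of scipy's Delaunay-based linear interpolation in place of an explicit TIN are all assumptions), the fragment below takes the lowest return per coarse grid cell as ground seeds and then iteratively accepts points lying close to the surface interpolated from the current ground set.

      import numpy as np
      from scipy.interpolate import LinearNDInterpolator

      def filter_ground(points, cell=10.0, dz_max=0.5, n_iter=5):
          """points: (N, 3) array of x, y, z returns; returns a boolean ground mask."""
          xy, z = points[:, :2], points[:, 2]
          # potential ground seeds: lowest point in each coarse grid cell
          keys = np.floor(xy / cell).astype(int)
          seeds = {}
          for i, k in enumerate(map(tuple, keys)):
              if k not in seeds or z[i] < z[seeds[k]]:
                  seeds[k] = i
          ground = np.zeros(len(points), dtype=bool)
          ground[list(seeds.values())] = True
          # iterative densification: accept points near the interpolated ground surface
          for _ in range(n_iter):
              surf = LinearNDInterpolator(xy[ground], z[ground])
              dz = z - surf(xy)                      # NaN outside the seeds' convex hull
              newly = (~ground) & (np.abs(dz) < dz_max)
              if not newly.any():
                  break
              ground |= newly
          return ground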

  12. 3D printed cellular solid outperforms traditional stochastic foam in long-term mechanical response

    NASA Astrophysics Data System (ADS)

    Maiti, A.; Small, W.; Lewicki, J. P.; Weisgraber, T. H.; Duoss, E. B.; Chinn, S. C.; Pearson, M. A.; Spadaccini, C. M.; Maxwell, R. S.; Wilson, T. S.

    2016-04-01

    3D printing of polymeric foams by direct-ink-write is a recent technological breakthrough that enables the creation of versatile compressible solids with programmable microstructure, customizable shapes, and tunable mechanical response including negative elastic modulus. However, in many applications the success of these 3D printed materials as a viable replacement for traditional stochastic foams critically depends on their mechanical performance and micro-architectural stability while deployed under long-term mechanical strain. To predict the long-term performance of the two types of foams we employed multi-year-long accelerated aging studies under compressive strain followed by a time-temperature-superposition analysis using a minimum-arc-length-based algorithm. The resulting master curves predict superior long-term performance of the 3D printed foam in terms of two different metrics, i.e., compression set and load retention. To gain deeper understanding, we imaged the microstructure of both foams using X-ray computed tomography, and performed finite-element analysis of the mechanical response within these microstructures. This indicates a wider stress variation in the stochastic foam with points of more extreme local stress as compared to the 3D printed material, which might explain the latter’s improved long-term stability and mechanical performance.

  13. 3D printed cellular solid outperforms traditional stochastic foam in long-term mechanical response.

    PubMed

    Maiti, A; Small, W; Lewicki, J P; Weisgraber, T H; Duoss, E B; Chinn, S C; Pearson, M A; Spadaccini, C M; Maxwell, R S; Wilson, T S

    2016-01-01

    3D printing of polymeric foams by direct-ink-write is a recent technological breakthrough that enables the creation of versatile compressible solids with programmable microstructure, customizable shapes, and tunable mechanical response including negative elastic modulus. However, in many applications the success of these 3D printed materials as a viable replacement for traditional stochastic foams critically depends on their mechanical performance and micro-architectural stability while deployed under long-term mechanical strain. To predict the long-term performance of the two types of foams we employed multi-year-long accelerated aging studies under compressive strain followed by a time-temperature-superposition analysis using a minimum-arc-length-based algorithm. The resulting master curves predict superior long-term performance of the 3D printed foam in terms of two different metrics, i.e., compression set and load retention. To gain deeper understanding, we imaged the microstructure of both foams using X-ray computed tomography, and performed finite-element analysis of the mechanical response within these microstructures. This indicates a wider stress variation in the stochastic foam with points of more extreme local stress as compared to the 3D printed material, which might explain the latter's improved long-term stability and mechanical performance. PMID:27117858

  14. 3D printed cellular solid outperforms traditional stochastic foam in long-term mechanical response

    DOE PAGES Beta

    Maiti, A.; Small, W.; Lewicki, J.; Weisgraber, T. H.; Duoss, E. B.; Chinn, S. C.; Pearson, M. A.; Spadaccini, C. M.; Maxwell, R. S.; Wilson, T. S.

    2016-04-27

    3D printing of polymeric foams by direct-ink-write is a recent technological breakthrough that enables the creation of versatile compressible solids with programmable microstructure, customizable shapes, and tunable mechanical response including negative elastic modulus. However, in many applications the success of these 3D printed materials as a viable replacement for traditional stochastic foams critically depends on their mechanical performance and micro-architectural stability while deployed under long-term mechanical strain. To predict the long-term performance of the two types of foams we employed multi-year-long accelerated aging studies under compressive strain followed by a time-temperature-superposition analysis using a minimum-arc-length-based algorithm. The resulting master curves predict superior long-term performance of the 3D printed foam in terms of two different metrics, i.e., compression set and load retention. To gain deeper understanding, we imaged the microstructure of both foams using X-ray computed tomography, and performed finite-element analysis of the mechanical response within these microstructures. As a result, this indicates a wider stress variation in the stochastic foam with points of more extreme local stress as compared to the 3D printed material, which might explain the latter’s improved long-term stability and mechanical performance.

  15. 3D printed cellular solid outperforms traditional stochastic foam in long-term mechanical response

    PubMed Central

    Maiti, A.; Small, W.; Lewicki, J. P.; Weisgraber, T. H.; Duoss, E. B.; Chinn, S. C.; Pearson, M. A.; Spadaccini, C. M.; Maxwell, R. S.; Wilson, T. S.

    2016-01-01

    3D printing of polymeric foams by direct-ink-write is a recent technological breakthrough that enables the creation of versatile compressible solids with programmable microstructure, customizable shapes, and tunable mechanical response including negative elastic modulus. However, in many applications the success of these 3D printed materials as a viable replacement for traditional stochastic foams critically depends on their mechanical performance and micro-architectural stability while deployed under long-term mechanical strain. To predict the long-term performance of the two types of foams we employed multi-year-long accelerated aging studies under compressive strain followed by a time-temperature-superposition analysis using a minimum-arc-length-based algorithm. The resulting master curves predict superior long-term performance of the 3D printed foam in terms of two different metrics, i.e., compression set and load retention. To gain deeper understanding, we imaged the microstructure of both foams using X-ray computed tomography, and performed finite-element analysis of the mechanical response within these microstructures. This indicates a wider stress variation in the stochastic foam with points of more extreme local stress as compared to the 3D printed material, which might explain the latter’s improved long-term stability and mechanical performance. PMID:27117858

  16. Network Consistent Data Association.

    PubMed

    Chakraborty, Anirban; Das, Abir; Roy-Chowdhury, Amit K

    2016-09-01

    Existing data association techniques mostly focus on matching pairs of data-point sets and then repeating this process along space-time to achieve long term correspondences. However, in many problems such as person re-identification, a set of data-points may be observed at multiple spatio-temporal locations and/or by multiple agents in a network and simply combining the local pairwise association results between sets of data-points often leads to inconsistencies over the global space-time horizons. In this paper, we propose a novel Network Consistent Data Association (NCDA) framework formulated as an optimization problem that not only maintains consistency in association results across the network, but also improves the pairwise data association accuracies. The proposed NCDA can be solved as a binary integer program leading to a globally optimal solution and is capable of handling the challenging data-association scenario where the number of data-points varies across different sets of instances in the network. We also present an online implementation of NCDA method that can dynamically associate new observations to already observed data-points in an iterative fashion, while maintaining network consistency. We have tested both the batch and the online NCDA in two application areas: person re-identification and spatio-temporal cell tracking, and observed consistent and highly accurate data association results in all the cases. PMID:26485472

  17. CRISPR knockout screening outperforms shRNA and CRISPRi in identifying essential genes.

    PubMed

    Evers, Bastiaan; Jastrzebski, Katarzyna; Heijmans, Jeroen P M; Grernrum, Wipawadee; Beijersbergen, Roderick L; Bernards, Rene

    2016-06-01

    High-throughput genetic screens have become essential tools for studying a wide variety of biological processes. Here we experimentally compare systems based on clustered regularly interspaced short palindromic repeat (CRISPR)/CRISPR-associated protein 9 (Cas9) or its transcriptionally repressive variant, CRISPR-interference (CRISPRi), with a traditional short hairpin RNA (shRNA)-based system for performing lethality screens. We find that the CRISPR technology performed best, with low noise, minimal off-target effects and consistent activity across reagents. PMID:27111720

  18. Learning deterministic finite automata with a smart state labeling evolutionary algorithm.

    PubMed

    Lucas, Simon M; Reynolds, T Jeff

    2005-07-01

    Learning a Deterministic Finite Automaton (DFA) from a training set of labeled strings is a hard task that has been much studied within the machine learning community. It is equivalent to learning a regular language by example and has applications in language modeling. In this paper, we describe a novel evolutionary method for learning DFA that evolves only the transition matrix and uses a simple deterministic procedure to optimally assign state labels. We compare its performance with the Evidence Driven State Merging (EDSM) algorithm, one of the most powerful known DFA learning algorithms. We present results on random DFA induction problems of varying target size and training set density. We also study the effects of noisy training data on the evolutionary approach and on EDSM. On noise-free data, we find that our evolutionary method outperforms EDSM on small sparse data sets. In the case of noisy training data, we find that our evolutionary method consistently outperforms EDSM, as well as other significant methods submitted to two recent competitions. PMID:16013754
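
    The deterministic labeling step is simple enough to sketch directly; the evolutionary search over transition matrices is omitted. In this illustrative Python sketch (the data layout is an assumption), each training string is run through a candidate transition matrix and every state receives the majority label of the strings that terminate in it.

      from collections import defaultdict

      def label_states(transitions, start, samples):
          """transitions[state][symbol] -> next state; samples: list of (string, label),
          label in {0, 1}."""
          votes = defaultdict(lambda: [0, 0])
          for string, label in samples:
              s = start
              for sym in string:
                  s = transitions[s][sym]
              votes[s][label] += 1
          # majority vote per state; states never reached default to label 0 at lookup time
          return {s: int(v[1] > v[0]) for s, v in votes.items()}

      def training_accuracy(transitions, start, labels, samples):
          correct = 0
          for string, label in samples:
              s = start
              for sym in string:
                  s = transitions[s][sym]
              correct += (labels.get(s, 0) == label)
          return correct / len(samples)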

  19. Complexity of the Quantum Adiabatic Algorithm

    NASA Technical Reports Server (NTRS)

    Hen, Itay

    2013-01-01

    The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms.

  20. Unfamiliar face matching: Pairs out-perform individuals and provide a route to training.

    PubMed

    Dowsett, Andrew J; Burton, A Mike

    2015-08-01

    Matching unfamiliar faces is known to be difficult. Here, we ask whether performance can be improved by asking viewers to work in pairs, a manipulation known to increase accuracy for low-level visual discrimination tasks. Across four experiments we consistently find that face matching accuracy is higher for pairs of viewers than for individuals. This 'pairs advantage' is generally driven by adopting the response of the higher scoring partner. However, when the task becomes difficult, both partners' performance is improved by working in a pair. In two experiments, we find evidence that working in a pair can lead to subsequent improvements in individual performance, specifically for viewers whose accuracy is initially low. The pairs' technique therefore offers the opportunity for substantial improvements in face matching performance, along with an added training benefit. PMID:25393594

  1. When is holography consistent?

    NASA Astrophysics Data System (ADS)

    McInnes, Brett; Ong, Yen Chin

    2015-09-01

    Holographic duality relates two radically different kinds of theory: one with gravity, one without. The very existence of such an equivalence imposes strong consistency conditions which are, in the nature of the case, hard to satisfy. Recently a particularly deep condition of this kind, relating the minimum of a probe brane action to a gravitational bulk action (in a Euclidean formulation), has been recognized; and the question arises as to the circumstances under which it, and its Lorentzian counterpart, is satisfied. We discuss the fact that there are physically interesting situations in which one or both versions might, in principle, not be satisfied. These arise in two distinct circumstances: first, when the bulk is not an Einstein manifold and, second, in the presence of angular momentum. Focusing on the application of holography to the quark-gluon plasma (of the various forms arising in the early Universe and in heavy-ion collisions), we find that these potential violations never actually occur. This suggests that the consistency condition is a "law of physics" expressing a particular aspect of holography.

  2. Consistent Quantum Theory

    NASA Astrophysics Data System (ADS)

    Griffiths, Robert B.

    2001-11-01

    Quantum mechanics is one of the most fundamental yet difficult subjects in physics. Nonrelativistic quantum theory is presented here in a clear and systematic fashion, integrating Born's probabilistic interpretation with Schrödinger dynamics. Basic quantum principles are illustrated with simple examples requiring no mathematics beyond linear algebra and elementary probability theory. The quantum measurement process is consistently analyzed using fundamental quantum principles without referring to measurement. These same principles are used to resolve several of the paradoxes that have long perplexed physicists, including the double slit and Schrödinger's cat. The consistent histories formalism used here was first introduced by the author, and extended by M. Gell-Mann, J. Hartle and R. Omnès. Essential for researchers yet accessible to advanced undergraduate students in physics, chemistry, mathematics, and computer science, this book is supplementary to standard textbooks. It will also be of interest to physicists and philosophers working on the foundations of quantum mechanics. A comprehensive account written by one of the main figures in the field; paperback edition of a successful work on the philosophy of quantum mechanics.

  3. Amphipols Outperform Dodecylmaltoside Micelles in Stabilizing Membrane Protein Structure in the Gas Phase

    PubMed Central

    2014-01-01

    Noncovalent mass spectrometry (MS) is emerging as an invaluable technique to probe the structure, interactions, and dynamics of membrane proteins (MPs). However, maintaining native-like MP conformations in the gas phase using detergent solubilized proteins is often challenging and may limit structural analysis. Amphipols, such as the well characterized A8-35, are alternative reagents able to maintain the solubility of MPs in detergent-free solution. In this work, the ability of A8-35 to retain the structural integrity of MPs for interrogation by electrospray ionization-ion mobility spectrometry-mass spectrometry (ESI-IMS-MS) is compared systematically with the commonly used detergent dodecylmaltoside. MPs from the two major structural classes were selected for analysis, including two β-barrel outer MPs, PagP and OmpT (20.2 and 33.5 kDa, respectively), and two α-helical proteins, Mhp1 and GalP (54.6 and 51.7 kDa, respectively). Evaluation of the rotationally averaged collision cross sections of the observed ions revealed that the native structures of detergent solubilized MPs were not always retained in the gas phase, with both collapsed and unfolded species being detected. In contrast, ESI-IMS-MS analysis of the amphipol solubilized MPs studied resulted in charge state distributions consistent with less gas phase induced unfolding, and the presence of lowly charged ions which exhibit collision cross sections comparable with those calculated from high resolution structural data. The data demonstrate that A8-35 can be more effective than dodecylmaltoside at maintaining native MP structure and interactions in the gas phase, permitting noncovalent ESI-IMS-MS analysis of MPs from the two major structural classes, while gas phase dissociation from dodecylmaltoside micelles leads to significant gas phase unfolding, especially for the α-helical MPs studied. PMID:25495802

  4. Consistent quantum measurements

    NASA Astrophysics Data System (ADS)

    Griffiths, Robert B.

    2015-11-01

    In response to recent criticisms by Okon and Sudarsky, various aspects of the consistent histories (CH) resolution of the quantum measurement problem(s) are discussed using a simple Stern-Gerlach device, and compared with the alternative approaches to the measurement problem provided by spontaneous localization (GRW), Bohmian mechanics, many worlds, and standard (textbook) quantum mechanics. Among these CH is unique in solving the second measurement problem: inferring from the measurement outcome a property of the measured system at a time before the measurement took place, as is done routinely by experimental physicists. The main respect in which CH differs from other quantum interpretations is in allowing multiple stochastic descriptions of a given measurement situation, from which one (or more) can be selected on the basis of its utility. This requires abandoning a principle (termed unicity), central to classical physics, that at any instant of time there is only a single correct description of the world.

  5. YAMPA: Yet Another Matching Pursuit Algorithm for compressive sensing

    NASA Astrophysics Data System (ADS)

    Lodhi, Muhammad A.; Voronin, Sergey; Bajwa, Waheed U.

    2016-05-01

    State-of-the-art sparse recovery methods often rely on the restricted isometry property for their theoretical guarantees. However, they cannot explicitly incorporate metrics such as restricted isometry constants within their recovery procedures due to the computational intractability of calculating such metrics. This paper formulates an iterative algorithm, termed yet another matching pursuit algorithm (YAMPA), for recovery of sparse signals from compressive measurements. YAMPA differs from other pursuit algorithms in that: (i) it adapts to the measurement matrix using a threshold that is explicitly dependent on two computable coherence metrics of the matrix, and (ii) it does not require knowledge of the signal sparsity. Performance comparisons of YAMPA against other matching pursuit and approximate message passing algorithms are made for several types of measurement matrices. These results show that while state-of-the-art approximate message passing algorithms outperform other algorithms (including YAMPA) in the case of well-conditioned random matrices, they completely break down in the case of ill-conditioned measurement matrices. On the other hand, YAMPA and comparable pursuit algorithms not only result in reasonable performance for well-conditioned matrices, but their performance also degrades gracefully for ill-conditioned matrices. The paper also shows that YAMPA uniformly outperforms other pursuit algorithms for the case of thresholding parameters chosen in a clairvoyant fashion. Further, when combined with a simple and fast technique for selecting thresholding parameters in the case of ill-conditioned matrices, YAMPA outperforms other pursuit algorithms in the regime of low undersampling, although some of these algorithms can outperform YAMPA in the regime of high undersampling in this setting.
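
    For readers unfamiliar with the pursuit framework, the following Python sketch shows a generic orthogonal matching pursuit loop with a residual-correlation stopping threshold. It is only a baseline illustration: YAMPA's actual threshold, which depends on two coherence metrics of the measurement matrix, is not reproduced here.

      import numpy as np

      def omp(A, y, tau=1e-3, max_iter=None):
          """Recover a sparse x with y ~= A @ x; stop when the largest residual
          correlation drops below tau (an illustrative rule, not YAMPA's)."""
          n = A.shape[1]
          max_iter = max_iter or n
          x = np.zeros(n)
          support, residual = [], y.copy()
          for _ in range(max_iter):
              corr = A.T @ residual
              j = int(np.argmax(np.abs(corr)))
              if np.abs(corr[j]) < tau:
                  break
              support.append(j)                                  # grow the support
              x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ x_s                 # re-fit and update residual
          if support:
              x[support] = x_s
          return x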

  6. Serial Generalized Ensemble Simulations of Biomolecules with Self-Consistent Determination of Weights.

    PubMed

    Chelli, Riccardo; Signorini, Giorgio F

    2012-03-13

    Serial generalized ensemble simulations, such as simulated tempering, enhance phase space sampling through non-Boltzmann weighting protocols. The most critical aspect of these methods with respect to the popular replica exchange schemes is the difficulty in determining the weight factors which enter the criterion for accepting replica transitions between different ensembles. Recently, a method, called BAR-SGE, was proposed for estimating optimal weight factors by resorting to a self-consistent procedure applied during the simulation (J. Chem. Theory Comput.2010, 6, 1935-1950). Calculations on model systems have shown that BAR-SGE outperforms other approaches proposed for determining optimal weights in serial generalized ensemble simulations. However, extensive tests on real systems and on convergence features with respect to the replica exchange method are lacking. Here, we report on a thorough analysis of BAR-SGE by performing molecular dynamics simulations of a solvated alanine dipeptide, a system often used as a benchmark to test new computational methodologies, and comparing results to the replica exchange method. To this aim, we have supplemented the ORAC program, a FORTRAN suite for molecular dynamics simulations (J. Comput. Chem.2010, 31, 1106-1116), with several variants of the BAR-SGE technique. An illustration of the specific BAR-SGE algorithms implemented in the ORAC program is also provided. PMID:26593345
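
    The weight factors enter the method at the ensemble-transition step, which is compact enough to sketch (the BAR-SGE estimation of the weights themselves is not reproduced, and the neighbor-proposal scheme is an assumption):

      import math, random

      def attempt_ensemble_move(E, m, betas, g):
          """E: current potential energy; m: index of the current ensemble;
          betas: inverse temperatures; g: weight factors, one per ensemble."""
          n = m + random.choice([-1, 1])           # propose a neighboring ensemble
          if not 0 <= n < len(betas):
              return m
          log_acc = -(betas[n] - betas[m]) * E + (g[n] - g[m])
          if math.log(random.random()) < min(0.0, log_acc):
              return n                             # transition accepted
          return m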

  7. A Novel Activated-Charcoal-Doped Multiwalled Carbon Nanotube Hybrid for Quasi-Solid-State Dye-Sensitized Solar Cell Outperforming Pt Electrode.

    PubMed

    Arbab, Alvira Ayoub; Sun, Kyung Chul; Sahito, Iftikhar Ali; Qadir, Muhammad Bilal; Choi, Yun Seon; Jeong, Sung Hoon

    2016-03-23

    Highly conductive mesoporous carbon structures based on multiwalled carbon nanotubes (MWCNTs) and activated charcoal (AC) were synthesized by an enzymatic dispersion method. The synthesized carbon configuration consists of synchronized structures of highly conductive MWCNT and porous activated charcoal morphology. The proposed carbon structure was used as counter electrode (CE) for quasi-solid-state dye-sensitized solar cells (DSSCs). The AC-doped MWCNT hybrid showed much enhanced electrocatalytic activity (ECA) toward polymer gel electrolyte and revealed a charge transfer resistance (RCT) of 0.60 Ω, demonstrating a fast electron transport mechanism. The exceptional electrocatalytic activity and high conductivity of the AC-doped MWCNT hybrid CE are associated with its synchronized features of high surface area and electronic conductivity, which produces higher interfacial reaction with the quasi-solid electrolyte. Morphological studies confirm the forms of amorphous and conductive 3D carbon structure with high density of CNT colloid. The excessive oxygen surface groups and defect-rich structure can entrap an excessive volume of quasi-solid electrolyte and locate multiple sites for iodide/triiodide catalytic reaction. The resultant D719 DSSC composed of this novel hybrid CE fabricated with polymer gel electrolyte demonstrated an efficiency of 10.05% with a high fill factor (83%), outperforming the Pt electrode. Such facile synthesis of CE together with low cost and sustainability supports the proposed DSSCs' structure to stand out as an efficient next-generation photovoltaic device. PMID:26911208

  8. Bayesian methods outperform parsimony but at the expense of precision in the estimation of phylogeny from discrete morphological data.

    PubMed

    O'Reilly, Joseph E; Puttick, Mark N; Parry, Luke; Tanner, Alastair R; Tarver, James E; Fleming, James; Pisani, Davide; Donoghue, Philip C J

    2016-04-01

    Different analytical methods can yield competing interpretations of evolutionary history and, currently, there is no definitive method for phylogenetic reconstruction using morphological data. Parsimony has been the primary method for analysing morphological data, but there has been a resurgence of interest in the likelihood-based Mk-model. Here, we test the performance of the Bayesian implementation of the Mk-model relative to both equal and implied-weight implementations of parsimony. Using simulated morphological data, we demonstrate that the Mk-model outperforms equal-weights parsimony in terms of topological accuracy, and implied-weights performs the most poorly. However, the Mk-model produces phylogenies that have less resolution than parsimony methods. This difference in the accuracy and precision of parsimony and Bayesian approaches to topology estimation needs to be considered when selecting a method for phylogeny reconstruction. PMID:27095266

  9. Bayesian methods outperform parsimony but at the expense of precision in the estimation of phylogeny from discrete morphological data

    PubMed Central

    Puttick, Mark N.; Parry, Luke; Tanner, Alastair R.; Tarver, James E.; Fleming, James

    2016-01-01

    Different analytical methods can yield competing interpretations of evolutionary history and, currently, there is no definitive method for phylogenetic reconstruction using morphological data. Parsimony has been the primary method for analysing morphological data, but there has been a resurgence of interest in the likelihood-based Mk-model. Here, we test the performance of the Bayesian implementation of the Mk-model relative to both equal and implied-weight implementations of parsimony. Using simulated morphological data, we demonstrate that the Mk-model outperforms equal-weights parsimony in terms of topological accuracy, and implied-weights performs the most poorly. However, the Mk-model produces phylogenies that have less resolution than parsimony methods. This difference in the accuracy and precision of parsimony and Bayesian approaches to topology estimation needs to be considered when selecting a method for phylogeny reconstruction. PMID:27095266

  10. How resilient are resilience scales? The Big Five scales outperform resilience scales in predicting adjustment in adolescents.

    PubMed

    Waaktaar, Trine; Torgersen, Svenn

    2010-04-01

    This study's aim was to determine whether resilience scales could predict adjustment over and above that predicted by the five-factor model (FFM). A sample of 1,345 adolescents completed paper-and-pencil scales on FFM personality (Hierarchical Personality Inventory for Children), resilience (Ego-Resiliency Scale [ER89] by Block & Kremen, the Resilience Scale [RS] by Wagnild & Young) and adaptive behaviors (California Healthy Kids Survey, UCLA Loneliness Scale and three measures of school adaptation). The results showed that the FFM scales accounted for the highest proportion of variance in disturbance. For adaptation, the resilience scales contributed as much as the FFM. In no case did the resilience scales outperform the FFM by increasing the explained variance. The results challenge the validity of the resilience concept as an indicator of human adaptation and avoidance of disturbance, although the concept may have heuristic value in combining favorable aspects of a person's personality endowment. PMID:19961558

  11. Physiological outperformance at the morphologically-transformed edge of the cyanobacteriosponge Terpios hoshinota (Suberitidae: Hadromerida) when confronting opponent corals.

    PubMed

    Wang, Jih-Terng; Hsu, Chia-Min; Kuo, Chao-Yang; Meng, Pei-Jie; Kao, Shuh-Ji; Chen, Chaolun Allen

    2015-01-01

    Terpios hoshinota, an encrusting cyanosponge, is known as a strong substrate competitor of reef-building corals that kills encountered coral by overgrowth. Terpios outbreaks cause significant declines in living coral cover in Indo-Pacific coral reefs, with the damage usually lasting for decades. Recent studies show that there are morphological transformations at a sponge's growth front when confronting corals. Whether these morphological transformations at coral contacts are involved with physiological outperformance (e.g., higher metabolic activity or nutritional status) over other portions of Terpios remains equivocal. In this study, we compared the indicators of photosynthetic capability and nitrogen status of a sponge-cyanobacteria association at proximal, middle, and distal portions of opponent corals. Terpios tissues in contact with corals displayed significant increases in photosynthetic oxygen production (ca. 61%), the δ13C value (ca. 4%), free proteinogenic amino acid content (ca. 85%), and Gln/Glu ratio (ca. 115%) compared to middle and distal parts of the sponge. In contrast, the maximum quantum yield (Fv/Fm), which is the indicator usually used to represent the integrity of photosystem II, of cyanobacteria photosynthesis was low (0.256~0.319) and showed an inverse trend of higher values in the distal portion of the sponge that might be due to high and variable levels of cyanobacterial phycocyanin. The inconsistent results between photosynthetic oxygen production and Fv/Fm values indicated that maximum quantum yields might not be a suitable indicator to represent the photosynthetic function of the Terpios-cyanobacteria association. Our data conclusively suggest that Terpios hoshinota competes with opponent corals not only by the morphological transformation of the sponge-cyanobacteria association but also by physiological outperformance in accumulating resources for the battle. PMID:26110525

  12. Physiological Outperformance at the Morphologically-Transformed Edge of the Cyanobacteriosponge Terpios hoshinota (Suberitidae: Hadromerida) when Confronting Opponent Corals

    PubMed Central

    Wang, Jih-Terng; Hsu, Chia-Min; Kuo, Chao-Yang; Meng, Pei-Jie; Kao, Shuh-Ji; Chen, Chaolun Allen

    2015-01-01

    Terpios hoshinota, an encrusting cyanosponge, is known as a strong substrate competitor of reef-building corals that kills encountered coral by overgrowth. Terpios outbreaks cause significant declines in living coral cover in Indo-Pacific coral reefs, with the damage usually lasting for decades. Recent studies show that there are morphological transformations at a sponge’s growth front when confronting corals. Whether these morphological transformations at coral contacts are involved with physiological outperformance (e.g., higher metabolic activity or nutritional status) over other portions of Terpios remains equivocal. In this study, we compared the indicators of photosynthetic capability and nitrogen status of a sponge-cyanobacteria association at proximal, middle, and distal portions of opponent corals. Terpios tissues in contact with corals displayed significant increases in photosynthetic oxygen production (ca. 61%), the δ13C value (ca. 4%), free proteinogenic amino acid content (ca. 85%), and Gln/Glu ratio (ca. 115%) compared to middle and distal parts of the sponge. In contrast, the maximum quantum yield (Fv/Fm), which is the indicator usually used to represent the integrity of photosystem II, of cyanobacteria photosynthesis was low (0.256~0.319) and showed an inverse trend of higher values in the distal portion of the sponge that might be due to high and variable levels of cyanobacterial phycocyanin. The inconsistent results between photosynthetic oxygen production and Fv/Fm values indicated that maximum quantum yields might not be a suitable indicator to represent the photosynthetic function of the Terpios-cyanobacteria association. Our data conclusively suggest that Terpios hoshinota competes with opponent corals not only by the morphological transformation of the sponge-cyanobacteria association but also by physiological outperformance in accumulating resources for the battle. PMID:26110525

  13. Sorting on STAR. [CDC computer algorithm timing comparison

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
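
    For reference, a serial Python rendering of Batcher's odd-even merge sort is given below (a sketch assuming the input length is a power of two). Its compare-exchange pattern is fixed and data-independent, which is what lets it exploit vector operations despite the N(log N)-squared comparison count.

      def batcher_sort(a):
          def compare_swap(i, j):
              if a[i] > a[j]:
                  a[i], a[j] = a[j], a[i]

          def merge(lo, n, step):
              m = step * 2
              if m < n:
                  merge(lo, n, m)                  # even subsequence
                  merge(lo + step, n, m)           # odd subsequence
                  for i in range(lo + step, lo + n - step, m):
                      compare_swap(i, i + step)
              else:
                  compare_swap(lo, lo + step)

          def sort(lo, n):
              if n > 1:
                  sort(lo, n // 2)
                  sort(lo + n // 2, n // 2)
                  merge(lo, n, 1)

          sort(0, len(a))
          return a

      # e.g. batcher_sort([5, 3, 8, 1, 7, 2, 6, 4]) -> [1, 2, 3, 4, 5, 6, 7, 8]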

  14. Mutation-Based Artificial Fish Swarm Algorithm for Bound Constrained Global Optimization

    NASA Astrophysics Data System (ADS)

    Rocha, Ana Maria A. C.; Fernandes, Edite M. G. P.

    2011-09-01

    The herein presented mutation-based artificial fish swarm (AFS) algorithm includes mutation operators to prevent the algorithm from falling into local solutions, to diversify the search, and to accelerate convergence to the global optimum. Three mutation strategies are introduced into the AFS algorithm to define the trial points that emerge from random, leaping and searching behaviors. Computational results show that the new algorithm outperforms other well-known global stochastic solution methods.
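
    A minimal sketch of how a mutation operator can be injected into such a swarm search is given below (illustrative only; the scale, bound handling, and greedy acceptance are assumptions, not the paper's three strategies): a trial point produced by any behavior is Gaussian-mutated, and the mutant is kept only if it improves the objective.

      import numpy as np

      def mutate_trial(trial, objective, bounds, scale=0.1, rng=np.random.default_rng()):
          """trial: candidate point from a random/leaping/searching behavior;
          bounds: (lo, hi) arrays describing the box constraints."""
          lo, hi = bounds
          mutant = trial + rng.normal(0.0, scale * (hi - lo), size=trial.shape)
          mutant = np.clip(mutant, lo, hi)          # respect the bound constraints
          return mutant if objective(mutant) < objective(trial) else trial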

  15. Why envy outperforms admiration.

    PubMed

    van de Ven, Niels; Zeelenberg, Marcel; Pieters, Rik

    2011-06-01

    Four studies tested the hypothesis that the emotion of benign envy, but not the emotions of admiration or malicious envy, motivates people to improve themselves. Studies 1 to 3 found that only benign envy was related to the motivation to study more (Study 1) and to actual performance on the Remote Associates Task (which measures intelligence and creativity; Studies 2 and 3). Study 4 found that an upward social comparison triggered benign envy and subsequent better performance only when people thought self-improvement was attainable. When participants thought self-improvement was hard, an upward social comparison led to more admiration and no motivation to do better. Implications of these findings for theories of social emotions such as envy, social comparisons, and for understanding the influence of role models are discussed. PMID:21383070

  16. MEDUSAHEAD OUTPERFORMS SQUIRRELTAIL

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Understanding the ecological processes fostering invasion and dominance by medusahead is central to its management. The objectives of this study were 1) to quantify and compare interference between medusahead and squirreltail under different concentrations of soil N and P and 2) to compare growth r...

  17. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
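
    A minimal bit-string genetic algorithm in the spirit sketched above is shown below (the parameter values, tournament selection, one-point crossover and bit-flip mutation are illustrative choices, not the project's tool):

      import random

      def genetic_algorithm(fitness, n_bits=20, pop_size=50, generations=100,
                            p_cross=0.9, p_mut=0.01):
          pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
          best = max(pop, key=fitness)
          for _ in range(generations):
              def select():                          # tournament selection of size 2
                  a, b = random.sample(pop, 2)
                  return a if fitness(a) >= fitness(b) else b
              children = []
              while len(children) < pop_size:
                  p1, p2 = select(), select()
                  if random.random() < p_cross:      # one-point crossover
                      cut = random.randrange(1, n_bits)
                      p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                  children += [[1 - g if random.random() < p_mut else g for g in c]
                               for c in (p1, p2)]    # bit-flip mutation
              pop = children[:pop_size]
              best = max(pop + [best], key=fitness)
          return best

      # e.g. maximizing the number of ones in the bit string: genetic_algorithm(sum)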

  18. Evaluation of dynamically dimensioned search algorithm for optimizing SWAT by altering sampling distributions and searching range

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The primary advantage of the Dynamically Dimensioned Search algorithm (DDS) is that it outperforms many other optimization techniques in both convergence speed and the ability to search for parameter sets that satisfy statistical guidelines while requiring only one algorithm parameter (perturbation f...
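
    A compact sketch of the DDS search loop is given below, following the commonly published formulation (the decaying perturbation probability 1 - ln(i)/ln(max_iter), Gaussian steps scaled by a single factor r, and simple bound clipping are assumptions made for illustration, not a SWAT-calibrated setup):

      import numpy as np

      def dds_minimize(objective, lo, hi, max_iter=1000, r=0.2, rng=np.random.default_rng()):
          """lo, hi: arrays giving the parameter bounds; r: the single DDS parameter."""
          best = lo + rng.random(lo.size) * (hi - lo)
          best_f = objective(best)
          for i in range(1, max_iter + 1):
              p = 1.0 - np.log(i) / np.log(max_iter)   # perturb fewer dimensions over time
              mask = rng.random(lo.size) < p
              if not mask.any():
                  mask[rng.integers(lo.size)] = True   # always perturb at least one dimension
              trial = best.copy()
              step = rng.normal(0.0, r * (hi - lo))
              trial[mask] += step[mask]
              trial = np.clip(trial, lo, hi)           # simplified bound handling
              f = objective(trial)
              if f < best_f:                           # greedy acceptance
                  best, best_f = trial, f
          return best, best_f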

  19. A SAT Based Effective Algorithm for the Directed Hamiltonian Cycle Problem

    NASA Astrophysics Data System (ADS)

    Jäger, Gerold; Zhang, Weixiong

    The Hamiltonian cycle problem (HCP) is an important combinatorial problem with applications in many areas. While thorough theoretical and experimental analyses have been made on the HCP in undirected graphs, little is known for the HCP in directed graphs (DHCP). The contribution of this work is an effective algorithm for the DHCP. Our algorithm explores and exploits the close relationship between the DHCP and the Assignment Problem (AP) and utilizes a technique based on Boolean satisfiability (SAT). By combining effective algorithms for the AP and SAT, our algorithm significantly outperforms previous exact DHCP algorithms including an algorithm based on the award-winning Concorde TSP algorithm.

  20. Production scheduling and rescheduling with genetic algorithms.

    PubMed

    Bierwirth, C; Mattfeld, D C

    1999-01-01

    A general model for job shop scheduling is described which applies to static, dynamic and non-deterministic production environments. Next, a Genetic Algorithm is presented which solves the job shop scheduling problem. This algorithm is tested in a dynamic environment under different workload situations. Thereby, a highly efficient decoding procedure is proposed which strongly improves the quality of schedules. Finally, this technique is tested for scheduling and rescheduling in a non-deterministic environment. It is shown by experiment that conventional methods of production control are clearly outperformed at reasonable run-time costs. PMID:10199993

  1. A novel surface defect inspection algorithm for magnetic tile

    NASA Astrophysics Data System (ADS)

    Xie, Luofeng; Lin, Lijun; Yin, Ming; Meng, Lintao; Yin, Guofu

    2016-07-01

    In this paper, we propose a defect extraction method for magnetic tile images based on the shearlet transform. The shearlet transform is a method of multi-scale geometric analysis. Compared with similar methods, the shearlet transform offers higher directional sensitivity, which is useful for accurately extracting geometric characteristics from data. In general, a magnetic tile image captured by a CCD camera mainly consists of a target area and background. Our strategy for extracting the surface defects of magnetic tile comprises two steps: image preprocessing and defect extraction. Both steps are critical. After preprocessing the image, we extract the target area. Due to the low contrast in the magnetic tile image, we apply the discrete shearlet transform to enhance the contrast between the defect area and the normal area. Next, we apply a threshold method to generate a binary image. To validate our algorithm, we compare our experimental results with the Otsu method, the curvelet transform and the nonsubsampled contourlet transform. Results show that our algorithm outperforms the other methods considered and can very effectively extract defects.
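
    The two-stage pipeline can be rendered schematically as follows (a sketch only: the shearlet-based enhancement is abstracted behind a placeholder rescaling, and treating dark regions as defects is an assumption):

      import numpy as np
      from skimage import filters

      def enhance_contrast(region):
          # placeholder for the discrete shearlet enhancement described in the paper;
          # a simple intensity rescaling stands in for it here
          r = region.astype(float)
          return (r - r.min()) / (np.ptp(r) + 1e-9)

      def extract_defects(image, target_mask):
          """image: grayscale tile image; target_mask: boolean mask of the tile area."""
          enhanced = enhance_contrast(image)
          thresh = filters.threshold_otsu(enhanced[target_mask])  # threshold inside the tile only
          defects = (enhanced < thresh) & target_mask              # assume defects are darker
          return defects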

  2. Consistent Data Distribution Over Optical Links

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.

    1988-01-01

    Fiber optics combined with IDE's provide consistent data communication between fault-tolerant computers. Data-transmission-checking system designed to provide consistent and reliable data communications for fault-tolerant and highly reliable computers. New technique performs variant of algorithm for fault-tolerant computers and uses fiber optics and independent decision elements (IDE's) to require fewer processors and fewer transmissions of messages. Enables fault-tolerant computers operating at different levels of redundancy to communicate with each other over triply redundant bus. Level of redundancy limited only by maximum number of wavelengths active on bus.

  3. A systematic comparison of genome-scale clustering algorithms

    PubMed Central

    2012-01-01

    Background A wealth of clustering algorithms has been applied to gene co-expression experiments. These algorithms cover a broad range of approaches, from conventional techniques such as k-means and hierarchical clustering, to graphical approaches such as k-clique communities, weighted gene co-expression networks (WGCNA) and paraclique. Comparison of these methods to evaluate their relative effectiveness provides guidance to algorithm selection, development and implementation. Most prior work on comparative clustering evaluation has focused on parametric methods. Graph theoretical methods are recent additions to the tool set for the global analysis and decomposition of microarray co-expression matrices that have not generally been included in earlier methodological comparisons. In the present study, a variety of parametric and graph theoretical clustering algorithms are compared using well-characterized transcriptomic data at a genome scale from Saccharomyces cerevisiae. Methods For each clustering method under study, a variety of parameters were tested. Jaccard similarity was used to measure each cluster's agreement with every GO and KEGG annotation set, and the highest Jaccard score was assigned to the cluster. Clusters were grouped into small, medium, and large bins, and the Jaccard score of the top five scoring clusters in each bin were averaged and reported as the best average top 5 (BAT5) score for the particular method. Results Clusters produced by each method were evaluated based upon the positive match to known pathways. This produces a readily interpretable ranking of the relative effectiveness of clustering on the genes. Methods were also tested to determine whether they were able to identify clusters consistent with those identified by other clustering methods. Conclusions Validation of clusters against known gene classifications demonstrate that for this data, graph-based techniques outperform conventional clustering approaches, suggesting that further
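
    The cluster-scoring scheme lends itself to a short sketch (the bin boundaries below are illustrative; gene identifiers and annotation sets are assumed to be plain Python collections):

      def jaccard(a, b):
          a, b = set(a), set(b)
          return len(a & b) / len(a | b) if a | b else 0.0

      def bat5_scores(clusters, annotation_sets, bins=((0, 10), (10, 100), (100, 10**9))):
          """Assign each cluster its best Jaccard score over all annotation sets,
          bin clusters by size, and average the top five scores per bin (BAT5)."""
          best = [(len(c), max(jaccard(c, s) for s in annotation_sets)) for c in clusters]
          out = {}
          for lo, hi in bins:
              scores = sorted((j for n, j in best if lo <= n < hi), reverse=True)[:5]
              out[(lo, hi)] = sum(scores) / len(scores) if scores else 0.0
          return out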

  4. Recent ATR and fusion algorithm improvements for multiband sonar imagery

    NASA Astrophysics Data System (ADS)

    Aridgides, Tom; Fernández, Manuel

    2009-05-01

    An improved automatic target recognition processing string has been developed. The overall processing string consists of pre-processing, subimage adaptive clutter filtering, normalization, detection, data regularization, feature extraction, optimal subset feature selection, feature orthogonalization and classification processing blocks. The objects that are classified by the 3 distinct ATR strings are fused using the classification confidence values and their expansions as features, and using "summing" or log-likelihood-ratio-test (LLRT) based fusion rules. The utility of the overall processing strings and their fusion was demonstrated with new high-resolution three-frequency band sonar imagery. The ATR processing strings were individually tuned to the corresponding three-frequency band data, making use of the new data regularization improvement; this improvement entails computing the input data mean, clipping the data to a multiple of its mean and scaling it prior to feature extraction, and it resulted in a 3:1 reduction in false alarms. Two significant fusion algorithm improvements were made. First, a nonlinear exponential Box-Cox expansion (consisting of raising data to a to-be-determined power) feature LLRT fusion algorithm was developed. Second, a repeated application of a subset Box-Cox feature selection / feature orthogonalization / LLRT fusion block was utilized. It was shown that cascaded Box-Cox feature LLRT fusion of the ATR processing strings outperforms baseline "summing" and single-stage Box-Cox feature LLRT algorithms, yielding significant improvements over the best single ATR processing string results, and providing the capability to correctly call the majority of targets while maintaining a very low false alarm rate.
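
    The two fusion ingredients named above can be illustrated with a small sketch: a Box-Cox power expansion of classifier confidence features followed by a Gaussian log-likelihood-ratio-test fusion across the three band-specific ATR strings. The power value, the Gaussian class-conditional models and the toy confidences are assumptions; the paper's cascaded subset-selection stages are not reproduced.

```python
import numpy as np
from scipy.stats import norm

def boxcox_expand(x, lam):
    """Box-Cox power expansion of a positive confidence feature."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

def llrt_fuse(features, target_models, clutter_models):
    """Sum per-feature Gaussian log-likelihood ratios (naive independence)."""
    llr = 0.0
    for f, (mt, st), (mc, sc) in zip(features, target_models, clutter_models):
        llr += norm.logpdf(f, mt, st) - norm.logpdf(f, mc, sc)
    return llr

if __name__ == "__main__":
    # confidences of the three single-band ATR strings for one object (toy values)
    conf = np.array([0.82, 0.64, 0.71])
    lam = 0.5                              # assumed; would be tuned in practice
    feats = boxcox_expand(conf, lam)
    target = [(0.6, 0.3)] * 3              # toy class-conditional (mean, std) pairs
    clutter = [(-0.4, 0.3)] * 3
    score = llrt_fuse(feats, target, clutter)
    print("declare target" if score > 0 else "declare clutter", score)
```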

  5. Development of a new metal artifact reduction algorithm by using an edge preserving method for CBCT imaging

    NASA Astrophysics Data System (ADS)

    Kim, Juhye; Nam, Haewon; Lee, Rena

    2015-07-01

    In CT (computed tomography) images, metal materials such as tooth supplements or surgical clips can cause metal artifacts and degrade image quality. In severe cases, this may lead to misdiagnosis. In this research, we developed a new MAR (metal artifact reduction) algorithm by using an edge preserving filter and the MATLAB program (Mathworks, version R2012a). The proposed algorithm consists of 6 steps: image reconstruction from projection data, metal segmentation, forward projection, interpolation, application of an edge-preserving smoothing filter, and final image reconstruction. For an evaluation of the proposed algorithm, we obtained both numerical simulation data and data for a Rando phantom. In the numerical simulation data, four metal regions were added into the Shepp Logan phantom for metal artifacts. The projection data of the metal-inserted Rando phantom were obtained by using a prototype CBCT scanner manufactured by the medical engineering and medical physics (MEMP) laboratory research group in medical science at Ewha Womans University. The proposed algorithm was then applied, and the results were compared with the original image (with metal artifact, without correction) and with a corrected image based on linear interpolation. Both visual and quantitative evaluations were done. Compared with the original image with metal artifacts and with the image corrected by using linear interpolation, both the numerical and the experimental phantom data demonstrated that the proposed algorithm reduced the metal artifact. In conclusion, the evaluation in this research showed that the proposed algorithm outperformed the interpolation based MAR algorithm. If an optimization and a stability evaluation of the proposed algorithm can be performed, the developed algorithm is expected to be an effective tool for eliminating metal artifacts even in commercial CT systems.
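
    For orientation, the following sketch implements the interpolation-based MAR baseline that the paper improves on (mirroring steps 1-4 and 6 of the list above, with the edge-preserving smoothing step omitted), using scikit-image's radon/iradon routines; the phantom, the metal threshold and the plain linear interpolation are assumptions.

```python
import numpy as np
from skimage.transform import radon, iradon

def interpolation_mar(image, metal_threshold=0.9, theta=None):
    """Classic interpolation-based MAR baseline (not the paper's full method).

    1) take the uncorrected image, 2) segment metal by a simple threshold,
    3) forward-project the metal mask, 4) replace the metal trace in the
    sinogram by linear interpolation, 5) reconstruct again.
    """
    if theta is None:
        theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
    sino = radon(image, theta=theta)
    metal = image > metal_threshold
    metal_trace = radon(metal.astype(float), theta=theta) > 0
    corrected = sino.copy()
    for j in range(sino.shape[1]):                 # interpolate each view
        trace = metal_trace[:, j]
        if trace.any() and not trace.all():
            idx = np.arange(sino.shape[0])
            corrected[trace, j] = np.interp(idx[trace], idx[~trace],
                                            sino[~trace, j])
    return iradon(corrected, theta=theta)

if __name__ == "__main__":
    phantom = np.zeros((128, 128))
    phantom[32:96, 32:96] = 0.3                    # soft-tissue block
    phantom[60:68, 60:68] = 2.0                    # synthetic metal insert
    recon = interpolation_mar(phantom)
    print("corrected image range:", float(recon.min()), float(recon.max()))
```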

  6. Ensemble algorithms in reinforcement learning.

    PubMed

    Wiering, Marco A; van Hasselt, Hado

    2008-08-01

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms. PMID:18632380
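
    Two of the combination rules named above are easy to sketch: majority voting over the greedy actions and Boltzmann multiplication of the individual action probabilities. The toy action values below stand in for the preferences produced by the different RL algorithms.

```python
import numpy as np

def boltzmann(values, temperature=1.0):
    """Boltzmann (softmax) action probabilities from action values."""
    z = np.exp(np.asarray(values) / temperature)
    return z / z.sum()

def majority_voting(policies):
    """Each algorithm votes for its greedy action; ties go to the lowest index."""
    votes = np.zeros(len(policies[0]))
    for p in policies:
        votes[int(np.argmax(p))] += 1
    return votes / votes.sum()

def boltzmann_multiplication(policies):
    """Multiply the individual action probabilities, then renormalize."""
    prod = np.prod(np.vstack(policies), axis=0)
    return prod / prod.sum()

if __name__ == "__main__":
    # toy action values for 4 actions from three RL algorithms (e.g. Q, Sarsa, AC)
    q_values = [np.array([1.0, 2.0, 0.5, 0.1]),
                np.array([0.8, 1.5, 1.4, 0.2]),
                np.array([0.2, 0.9, 1.8, 0.3])]
    policies = [boltzmann(q) for q in q_values]
    print("majority voting         :", majority_voting(policies))
    print("Boltzmann multiplication:", boltzmann_multiplication(policies))
```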

  7. Efficient training algorithms for a class of shunting inhibitory convolutional neural networks.

    PubMed

    Tivive, Fok Hing Chi; Bouzerdoum, Abdesselam

    2005-05-01

    This article presents some efficient training algorithms, based on first-order, second-order, and conjugate gradient optimization methods, for a class of convolutional neural networks (CoNNs), known as shunting inhibitory convolution neural networks. Furthermore, a new hybrid method is proposed, which is derived from the principles of Quickprop, Rprop, SuperSAB, and least squares (LS). Experimental results show that the new hybrid method can perform as well as the Levenberg-Marquardt (LM) algorithm, but at a much lower computational cost and less memory storage. For comparison's sake, the visual pattern recognition task of face/nonface discrimination is chosen as a classification problem to evaluate the performance of the training algorithms. Sixteen training algorithms are implemented for the three different variants of the proposed CoNN architecture: binary-, Toeplitz- and fully connected architectures. All implemented algorithms can train the three network architectures successfully, but their convergence speeds vary markedly. In particular, the combination of LS with the new hybrid method and LS with the LM method achieve the best convergence rates in terms of number of training epochs. In addition, the classification accuracies of all three architectures are assessed using ten-fold cross validation. The results show that the binary- and Toeplitz-connected architectures slightly outperform the fully connected architecture: the lowest error rates across all training algorithms are 1.95% for Toeplitz-connected, 2.10% for the binary-connected, and 2.20% for the fully connected network. In general, the modified Broyden-Fletcher-Goldfarb-Shanno (BFGS) methods, the three variants of LM algorithm, and the new hybrid/LS method perform consistently well, achieving error rates of less than 3% averaged across all three architectures. PMID:15940985

  8. Maximal sum of metabolic exchange fluxes outperforms biomass yield as a predictor of growth rate of microorganisms.

    PubMed

    Zarecki, Raphy; Oberhardt, Matthew A; Yizhak, Keren; Wagner, Allon; Shtifman Segal, Ella; Freilich, Shiri; Henry, Christopher S; Gophna, Uri; Ruppin, Eytan

    2014-01-01

    Growth rate has long been considered one of the most valuable phenotypes that can be measured in cells. Aside from being highly accessible and informative in laboratory cultures, maximal growth rate is often a prime determinant of cellular fitness, and predicting phenotypes that underlie fitness is key to both understanding and manipulating life. Despite this, current methods for predicting microbial fitness typically focus on yields [e.g., predictions of biomass yield using GEnome-scale metabolic Models (GEMs)] or notably require many empirical kinetic constants or substrate uptake rates, which render these methods ineffective in cases where fitness derives most directly from growth rate. Here we present a new method for predicting cellular growth rate, termed SUMEX, which does not require any empirical variables apart from a metabolic network (i.e., a GEM) and the growth medium. SUMEX is calculated by maximizing the SUM of molar EXchange fluxes (hence SUMEX) in a genome-scale metabolic model. SUMEX successfully predicts relative microbial growth rates across species, environments, and genetic conditions, outperforming traditional cellular objectives (most notably, the convention assuming biomass maximization). The success of SUMEX suggests that the ability of a cell to catabolize substrates and produce a strong proton gradient enables fast cell growth. Easily applicable heuristics for predicting growth rate, such as what we demonstrate with SUMEX, may contribute to numerous medical and biotechnological goals, ranging from the engineering of faster-growing industrial strains to the modeling of mixed ecological communities and the inhibition of cancer growth. PMID:24866123
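
    The SUMEX objective can be written as a small linear program: maximize the sum of exchange fluxes subject to steady-state mass balance and flux bounds. The sketch below does this with scipy on a toy two-metabolite network; a real application would use a genome-scale model, and the network, bounds and reaction names here are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (an assumption, not a genome-scale model):
#   EX_A : exchange of substrate A (negative flux = uptake)
#   V1   : A --> 2 B (catabolic conversion producing more exchange moles out than in)
#   EX_B : exchange of product B (positive flux = secretion)
# Columns = [EX_A, V1, EX_B], rows = internal metabolites [A, B].
S = np.array([[-1.0, -1.0,  0.0],
              [ 0.0,  2.0, -1.0]])
exchange_cols = [0, 2]
bounds = [(-10.0, 1000.0),   # uptake of A limited to 10 units
          (0.0, 1000.0),
          (0.0, 1000.0)]

# SUMEX-style objective: maximize the sum of exchange fluxes (linprog minimizes).
c = np.zeros(S.shape[1])
c[exchange_cols] = -1.0

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
print("SUMEX value:", -res.fun)            # expected 10 = -10 (uptake) + 20 (secretion)
print("fluxes [EX_A, V1, EX_B]:", res.x)
```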

  9. Kidney Injury Molecule-1 Outperforms Traditional Biomarkers of Kidney Injury in Multi-site Preclinical Biomarker Qualification Studies

    PubMed Central

    Vaidya, Vishal S.; Ozer, Josef S.; Frank, Dieterle; Collings, Fitz B.; Ramirez, Victoria; Troth, Sean; Muniappa, Nagaraja; Thudium, Douglas; Gerhold, David; Holder, Daniel J.; Bobadilla, Norma A.; Marrer, Estelle; Perentes, Elias; Cordier, André; Vonderscher, Jacky; Maurer, Gérard; Goering, Peter L.; Sistare, Frank D.; Bonventre, Joseph V.

    2010-01-01

    Kidney toxicity accounts for a significant percentage of morbidity and drug candidate failure. Serum creatinine (SCr) and blood urea nitrogen (BUN) have been used to monitor kidney dysfunction for over a century but these markers are insensitive and non-specific. In multi-site preclinical rat toxicology studies the diagnostic performance of urinary kidney injury molecule-1 (Kim-1) was compared to traditional biomarkers as predictors of kidney tubular histopathologic changes, currently considered the “gold standard” of nephrotoxicity. In multiple models of kidney injury, urinary Kim-1 significantly outperformed SCr and BUN. The area under the receiver operating characteristic curve for Kim-1 was between 0.91 and 0.99 as compared to 0.79 to 0.9 for BUN and 0.73 to 0.85 for SCr. Thus urinary Kim-1 is the first injury biomarker of kidney toxicity qualified by the FDA and EMEA and is expected to significantly improve kidney safety monitoring. PMID:20458318
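
    The comparison above rests on areas under ROC curves; as a minimal illustration of that metric, the sketch below computes AUC for a strongly and a weakly separating biomarker on synthetic data (the numbers are illustrative only and unrelated to the study's measurements).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 200
injury = rng.integers(0, 2, n)             # 1 = histopathologic tubular injury

# Synthetic biomarker levels: the "sensitive" marker separates the groups
# more strongly than the "traditional" one (purely illustrative numbers).
sensitive_marker = rng.normal(loc=1.0 + 2.5 * injury, scale=1.0)
traditional_marker = rng.normal(loc=1.0 + 0.8 * injury, scale=1.0)

print("AUC, sensitive marker  :", round(roc_auc_score(injury, sensitive_marker), 3))
print("AUC, traditional marker:", round(roc_auc_score(injury, traditional_marker), 3))
```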

  10. Site-specific in situ growth of an interferon-polymer conjugate that outperforms PEGASYS in cancer therapy.

    PubMed

    Hu, Jin; Wang, Guilin; Zhao, Wenguo; Liu, Xinyu; Zhang, Libin; Gao, Weiping

    2016-07-01

    Conjugating poly(ethylene glycol) (PEG), PEGylation, to therapeutic proteins is widely used as a means to improve their pharmacokinetics and therapeutic potential. One prime example is PEGylated interferon-alpha (PEGASYS). However, PEGylation usually leads to a heterogeneous mixture of positional isomers with reduced bioactivity and low yield. Herein, we report site-specific in situ growth (SIG) of a PEG-like polymer, poly(oligo(ethylene glycol) methyl ether methacrylate) (POEGMA), from the C-terminus of interferon-alpha to form a site-specific (C-terminal) and stoichiometric (1:1) POEGMA conjugate of interferon-alpha in high yield. The POEGMA conjugate showed significantly improved pharmacokinetics, tumor accumulation and anticancer efficacy as compared to interferon-alpha. Notably, the POEGMA conjugate possessed a 7.2-fold higher in vitro antiproliferative bioactivity than PEGASYS. More importantly, in a murine cancer model, the POEGMA conjugate completely inhibited tumor growth and eradicated tumors in 75% of mice without appreciable systemic toxicity, whereas at the same dose, no mice treated with PEGASYS survived for over 58 days. The superiority of a site-specific POEGMA conjugate prepared by SIG over PEGASYS, the current gold standard for interferon-alpha delivery, suggests that SIG is of interest for the development of next-generation protein therapeutics. PMID:27152679

  11. Current composite-feature classification methods do not outperform simple single-genes classifiers in breast cancer prognosis

    PubMed Central

    Staiger, Christine; Cadot, Sidney; Györffy, Balázs; Wessels, Lodewyk F. A.; Klau, Gunnar W.

    2013-01-01

    Integrating gene expression data with secondary data such as pathway or protein-protein interaction data has been proposed as a promising approach for improved outcome prediction of cancer patients. Methods employing this approach usually aggregate the expression of genes into new composite features, while the secondary data guide this aggregation. Previous studies were limited to few data sets with a small number of patients. Moreover, each study used different data and evaluation procedures. This makes it difficult to objectively assess the gain in classification performance. Here we introduce the Amsterdam Classification Evaluation Suite (ACES). ACES is a Python package to objectively evaluate classification and feature-selection methods and contains methods for pooling and normalizing Affymetrix microarrays from different studies. It is simple to use and therefore facilitates the comparison of new approaches to best-in-class approaches. In addition to the methods described in our earlier study (Staiger et al., 2012), we have included two prominent prognostic gene signatures specific for breast cancer outcome, one more composite feature selection method and two network-based gene ranking methods. Employing the evaluation pipeline we show that current composite-feature classification methods do not outperform simple single-genes classifiers in predicting outcome in breast cancer. Furthermore, we find that the stability of features across different data sets is likewise no higher for composite features. Most strikingly, we observe that prediction performances are not affected when extracting features from randomized PPI networks. PMID:24391662

  12. Surface Consistent Finite Frequency Phase Corrections

    NASA Astrophysics Data System (ADS)

    Kimman, W. P.

    2016-04-01

    Static time-delay corrections are frequency independent and ignore velocity variations away from the assumed vertical ray-path through the subsurface. There is therefore a clear potential for improvement if the finite frequency nature of wave propagation can be properly accounted for. Such a method is presented here based on the Born approximation, the assumption of surface consistency, and the misfit of instantaneous phase. The concept of instantaneous phase lends itself very well for sweep-like signals, hence these are the focus of this study. Analytical sensitivity kernels are derived that accurately predict frequency dependent phase shifts due to P-wave anomalies in the near surface. They are quick to compute and robust near the source and receivers. An additional correction is presented that re-introduces the non-linear relation between model perturbation and phase delay, which becomes relevant for stronger velocity anomalies. The phase shift as a function of frequency is a slowly varying signal; its computation therefore doesn't require fine sampling even for broadband sweeps. The kernels reveal interesting features of the sensitivity of seismic arrivals to the near surface: small anomalies can have a relatively large impact resulting from the medium field term that is dominant near the source and receivers. Furthermore, even simple velocity anomalies can produce a distinct frequency dependent phase behaviour. Unlike statics, the predicted phase corrections are smooth in space. Verification with spectral element simulations shows an excellent match for the predicted phase shifts over the entire seismic frequency band. Applying the phase shift to the reference sweep corrects for wavelet distortion, making the technique akin to surface consistent deconvolution, even though no division in the spectral domain is involved. As long as multiple scattering is mild, surface consistent finite frequency phase corrections outperform traditional statics for moderately large

  13. Surface consistent finite frequency phase corrections

    NASA Astrophysics Data System (ADS)

    Kimman, W. P.

    2016-07-01

    Static time-delay corrections are frequency independent and ignore velocity variations away from the assumed vertical ray path through the subsurface. There is therefore a clear potential for improvement if the finite frequency nature of wave propagation can be properly accounted for. Such a method is presented here based on the Born approximation, the assumption of surface consistency and the misfit of instantaneous phase. The concept of instantaneous phase lends itself very well for sweep-like signals, hence these are the focus of this study. Analytical sensitivity kernels are derived that accurately predict frequency-dependent phase shifts due to P-wave anomalies in the near surface. They are quick to compute and robust near the source and receivers. An additional correction is presented that re-introduces the nonlinear relation between model perturbation and phase delay, which becomes relevant for stronger velocity anomalies. The phase shift as a function of frequency is a slowly varying signal; its computation therefore does not require fine sampling even for broad-band sweeps. The kernels reveal interesting features of the sensitivity of seismic arrivals to the near surface: small anomalies can have a relatively large impact resulting from the medium field term that is dominant near the source and receivers. Furthermore, even simple velocity anomalies can produce a distinct frequency-dependent phase behaviour. Unlike statics, the predicted phase corrections are smooth in space. Verification with spectral element simulations shows an excellent match for the predicted phase shifts over the entire seismic frequency band. Applying the phase shift to the reference sweep corrects for wavelet distortion, making the technique akin to surface consistent deconvolution, even though no division in the spectral domain is involved. As long as multiple scattering is mild, surface consistent finite frequency phase corrections outperform traditional statics for moderately large
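
    The misfit of instantaneous phase used here is commonly computed from the analytic signal; the sketch below recovers a known 20 ms delay between two synthetic linear sweeps from their instantaneous-phase difference. The sweep parameters are toy values, and the calculation only illustrates the phase/delay relation, not the Born sensitivity kernels.

```python
import numpy as np
from scipy.signal import chirp, hilbert

fs = 1000.0                                     # Hz, toy sampling rate
t = np.arange(0.0, 4.0, 1.0 / fs)
ref = chirp(t, f0=10.0, t1=4.0, f1=80.0)        # reference linear sweep
obs = chirp(t - 0.02, f0=10.0, t1=4.0, f1=80.0) # arrival delayed by 20 ms

def inst_phase(x):
    """Unwrapped instantaneous phase of the analytic signal."""
    return np.unwrap(np.angle(hilbert(x)))

phase_misfit = inst_phase(ref) - inst_phase(obs)              # radians per sample
inst_freq = np.gradient(inst_phase(ref)) * fs / (2 * np.pi)   # Hz

# Away from the edges, the phase misfit divided by 2*pi*f recovers the delay.
mid = slice(500, -500)
delay_est = np.mean(phase_misfit[mid] / (2 * np.pi * inst_freq[mid]))
print(f"estimated delay: {delay_est * 1e3:.1f} ms (true: 20.0 ms)")
```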

  14. Consistent detection of global predicates

    NASA Technical Reports Server (NTRS)

    Cooper, Robert; Marzullo, Keith

    1991-01-01

    A fundamental problem in debugging and monitoring is detecting whether the state of a system satisfies some predicate. If the system is distributed, then the resulting uncertainty in the state of the system makes such detection, in general, ill-defined. Three algorithms are presented for detecting global predicates in a well-defined way. These algorithms do so by interpreting predicates with respect to the communication that has occurred in the system.
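
    Consistent detection is typically grounded in the happened-before relation; as a minimal illustration (a standard vector-clock construction, not one of the three algorithms of the report), the sketch below checks whether a candidate global cut is consistent, i.e., contains no event whose causal predecessor on another process is missing.

```python
from itertools import product

def consistent(cut, vclocks):
    """A cut (one frontier event index per process) is consistent iff no
    frontier event 'knows about' more events on some process j than the cut
    itself includes (vector-clock entry vs. number of included events)."""
    n = len(cut)
    for i in range(n):
        vc = vclocks[i][cut[i]]
        for j in range(n):
            if vc[j] > cut[j] + 1:
                return False
    return True

if __name__ == "__main__":
    # Two processes, three events each.  vclocks[p][k] is the vector clock of
    # the (k+1)-th event on process p (toy values; p1's 2nd event receives a
    # message sent after p0's 2nd event).
    vclocks = [
        [[1, 0], [2, 0], [3, 0]],
        [[0, 1], [2, 2], [2, 3]],
    ]
    for cut in product(range(3), repeat=2):
        print(cut, "consistent" if consistent(cut, vclocks) else "inconsistent")
```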

  15. Consistency-based rectification of nonrigid registrations

    PubMed Central

    Gass, Tobias; Székely, Gábor; Goksel, Orcun

    2015-01-01

    We present a technique to rectify nonrigid registrations by improving their group-wise consistency, which is a widely used unsupervised measure to assess pair-wise registration quality. While pair-wise registration methods cannot guarantee any group-wise consistency, group-wise approaches typically enforce perfect consistency by registering all images to a common reference. However, errors in individual registrations to the reference then propagate, distorting the mean and accumulating in the pair-wise registrations inferred via the reference. Furthermore, the assumption that perfect correspondences exist is not always true, e.g., for interpatient registration. The proposed consistency-based registration rectification (CBRR) method addresses these issues by minimizing the group-wise inconsistency of all pair-wise registrations using a regularized least-squares algorithm. The regularization controls the adherence to the original registration, which is additionally weighted by the local postregistration similarity. This allows CBRR to adaptively improve consistency while locally preserving accurate pair-wise registrations. We show that the resulting registrations are not only more consistent, but also have lower average transformation error when compared to known transformations in simulated data. On clinical data, we show improvements of up to 50% target registration error in breathing motion estimation from four-dimensional MRI and improvements in atlas-based segmentation quality of up to 65% in terms of mean surface distance in three-dimensional (3-D) CT. Such improvement was observed consistently using different registration algorithms, dimensionality (two-dimensional/3-D), and modalities (MRI/CT). PMID:26158083

  16. A novel algorithm combining finite state method and genetic algorithm for solving crude oil scheduling problem.

    PubMed

    Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun

    2014-01-01

    A hybrid optimization algorithm combining the finite state method (FSM) and the genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and to compensate for the deficiencies of the individual methods. In the proposed algorithm, the finite state method makes up for the GA's weak local search ability. The heuristic returned by the FSM can guide the GA towards good solutions. The idea behind this is that we can generate promising substructures or partial solutions by using the FSM. Furthermore, the FSM can guarantee that the entire solution space is uniformly covered. Therefore, the combination of the two algorithms has better global performance than the existing GA or FSM operated individually. Finally, a real-life crude oil scheduling problem from the literature is used for the simulations. The experimental results validate that the proposed method outperforms the state-of-the-art GA method. PMID:24772031

  17. Lianas always outperform tree seedlings regardless of soil nutrients: results from a long-term fertilization experiment.

    PubMed

    Pasquini, Sarah C; Wright, S Joseph; Santiago, Louis S

    2015-07-01

    always outperform trees, in terms of photosynthetic processes and under contrasting rates of resource supply of macronutrients, will allow lianas to increase in abundance if disturbance and tree turnover rates are increasing in Neotropical forests as has been suggested. PMID:26378309

  18. Invasive Acer negundo outperforms native species in non-limiting resource environments due to its higher phenotypic plasticity

    PubMed Central

    2011-01-01

    Background To identify the determinants of invasiveness, comparisons of traits of invasive and native species are commonly performed. Invasiveness is generally linked to higher values of reproductive, physiological and growth-related traits of the invasives relative to the natives in the introduced range. Phenotypic plasticity of these traits has also been cited to increase the success of invasive species but has been little studied in invasive tree species. In a greenhouse experiment, we compared ecophysiological traits between a species invasive in Europe, Acer negundo, and early- and late-successional co-occurring native species, under different light, nutrient availability and disturbance regimes. We also compared species of the same species groups in situ, in riparian forests. Results Under non-limiting resources, A. negundo seedlings showed higher growth rates than the native species. However, A. negundo displayed equivalent or lower photosynthetic capacities and nitrogen content per unit leaf area compared to the native species; these findings were observed both on the seedlings in the greenhouse experiment and on adult trees in situ. These physiological traits were mostly conservative across the different light, nutrient and disturbance environments. Overall, under non-limiting light and nutrient conditions, specific leaf area and total leaf area of A. negundo were substantially larger. The invasive species presented a higher plasticity in allocation to foliage and therefore in growth with increasing nutrient and light availability relative to the native species. Conclusions The higher level of plasticity of the invasive species in foliage allocation in response to light and nutrient availability induced better growth in non-limiting resource environments. These results shed further light on the invasiveness of A. negundo and suggest that such behaviour could explain the ability of A. negundo to outperform native tree species, contributes to its spread

  19. Managed Bumblebees Outperform Honeybees in Increasing Peach Fruit Set in China: Different Limiting Processes with Different Pollinators

    PubMed Central

    Williams, Paul H.; Vaissière, Bernard E.; Zhou, Zhiyong; Gai, Qinbao; Dong, Jie; An, Jiandong

    2015-01-01

    Peach Prunus persica (L.) Batsch is self-compatible and largely self-fertile, but under greenhouse conditions pollinators must be introduced to achieve good fruit set and quality. Because little work has been done to assess the effectiveness of different pollinators on peach trees under greenhouse conditions, we studied ‘Okubo’ peach in greenhouse tunnels near Beijing between 2012 and 2014. We measured pollen deposition, pollen-tube growth rates, ovary development, and initial fruit set after the flowers were visited by either of two managed pollinators: bumblebees, Bombus patagiatus Nylander, and honeybees, Apis mellifera L. The results show that B. patagiatus is more effective than A. mellifera as a pollinator of peach in greenhouses because of differences in two processes. First, B. patagiatus deposits more pollen grains on peach stigmas than A. mellifera, both during a single visit and during a whole day of open pollination. Second, there are differences in the fertilization performance of the pollen deposited. Half of the flowers visited by B. patagiatus are fertilized 9–11 days after bee visits, while for flowers visited by A. mellifera, half are fertilized 13–15 days after bee visits. Consequently, fruit development is also accelerated by bumblebees, showing that the different pollinators have not only different pollination efficiency, but also influence the subsequent time course of fertilization and fruit set. Flowers visited by B. patagiatus show faster ovary growth and ultimately these flowers produce more fruit. Our work shows that pollinators may influence fruit production beyond the amount of pollen delivered. We show that managed indigenous bumblebees significantly outperform introduced honeybees in increasing peach initial fruit set under greenhouse conditions. PMID:25799170

  20. Structured bilaminar co-culture outperforms stem cells and disc cells in a simulated degenerate disc environment

    PubMed Central

    Allon, Aliza A.; Butcher, Kristin; Schneider, Richard A.; Lotz, Jeffrey C.

    2011-01-01

    Study Design This study explores the use of bilaminar coculture pellets of mesenchymal stem cells (MSC) and nucleus pulposus cells (NPC) as a cell-based therapy for intervertebral disc regeneration. The pellets were tested under conditions that mimic the degenerative disc. Objective Our goal is to optimize our cell-based therapy in vitro under conditions representative of the eventual diseased tissue. Summary of Background Data Harnessing the potential of stem cells is an important strategy for regenerative medicine. Our approach seeks to direct the behavior of stem cells by mimicking embryonic processes underlying cartilage and intervertebral disc development. Prior experiments have shown that bilaminar co-culture can help differentiate MSC and substantially improve new matrix deposition. Methods We have designed a novel spherical bilaminar cell pellet (BCP) where MSC are enclosed in a shell of NPC. There were three groups: MSC, NPC, and BCP. The pellets were tested under three different culture conditions: in a bioreactor that provides pressure and hypoxia (mimicking normal disc conditions), with inflammatory cytokines (IL-1β and TNF-α), and in a bioreactor with inflammation (mimicking painful disc conditions). Results When cultured in the bioreactor, the NPC pellets produced significantly more glycosaminoglycan (GAG)/cell than the other groups: 70-80% more than the BCP and MSC alone. When cultured in an inflammatory environment, the MSC and BCP groups produced 30-34% more GAG/cell than NPC (p<0.05). When the pellets were cultured in a bioreactor with inflammation, the BCP made 25% more GAG/cell than MSC and 57% more than NPC (p<0.05). Conclusion This study shows that BCP outperform controls in a simulated degenerated disc environment. Adapting inductive mechanisms from development to trigger differentiation and restore diseased tissue has many advantages. As opposed to strategies that require growth factor supplements or genetic manipulations, our method is self

  1. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  2. An Algorithm Combining for Objective Prediction with Subjective Forecast Information

    NASA Astrophysics Data System (ADS)

    Choi, JunTae; Kim, SooHyun

    2016-04-01

    As direct or post-processed output from numerical weather prediction (NWP) models has begun to show acceptable performance compared with the predictions of human forecasters, many national weather centers have become interested in automatic forecasting systems based on NWP products alone, without intervention from human forecasters. The Korea Meteorological Administration (KMA) is now developing an automatic forecasting system for dry variables. The forecasts are automatically generated from NWP predictions using a post-processing model (model output statistics, MOS). However, MOS cannot always produce acceptable predictions, and sometimes its predictions are rejected by human forecasters. In such cases, a human forecaster manually modifies the prediction, and the predictions at surrounding points should then be adjusted consistently, using some kind of smart tool that incorporates the forecaster's opinion. This study introduces an algorithm to revise MOS predictions by adding a forecaster's subjective forecast information at neighbouring points. A statistical relation between two forecast points - a neighbouring point and a dependent point - was derived for the difference between a MOS prediction and that of a human forecaster. If the MOS prediction at a neighbouring point is updated by a human forecaster, the value at a dependent point is modified using a statistical relationship based on linear regression, with parameters obtained from a one-year dataset of MOS predictions and official forecast data issued by KMA. The best sets of neighbouring points and dependent points are selected statistically. According to verification, the RMSE of temperature predictions produced by the new algorithm was slightly lower than that of the original MOS predictions, and close to the RMSE of subjective forecasts. For wind speed and relative humidity, the new algorithm outperformed human forecasters.
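
    A minimal sketch of the propagation step described above: fit a linear regression between historical correction differences at a neighbouring point and a dependent point, then use it to revise the dependent point when the forecaster edits the neighbour. The synthetic one-year history and the variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# One year of historical differences (official forecast minus MOS) at a
# neighbouring point and a dependent point (synthetic, correlated data).
diff_neighbour = rng.normal(0.0, 1.5, 365)
diff_dependent = 0.8 * diff_neighbour + rng.normal(0.0, 0.4, 365)

# Fit the linear relation between the two correction series.
slope, intercept = np.polyfit(diff_neighbour, diff_dependent, 1)

def revise_dependent(mos_dependent, mos_neighbour, forecaster_neighbour):
    """Propagate the forecaster's edit at the neighbouring point."""
    edit = forecaster_neighbour - mos_neighbour
    return mos_dependent + slope * edit + intercept

# Example: forecaster warms the neighbouring point by 2 degC relative to MOS.
print(revise_dependent(mos_dependent=15.0, mos_neighbour=14.0,
                       forecaster_neighbour=16.0))
```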

  3. A novel bit-quad-based Euler number computing algorithm.

    PubMed

    Yao, Bin; He, Lifeng; Kang, Shiying; Chao, Yuyan; Zhao, Xiao

    2015-01-01

    The Euler number of a binary image is an important topological property in computer vision and pattern recognition. This paper proposes a novel bit-quad-based Euler number computing algorithm. Based on graph theory and analysis of bit-quad patterns, our algorithm only needs to count two bit-quad patterns. Moreover, by use of the information obtained while processing the previous bit-quad, the average number of pixels to be checked for processing a bit-quad is only 1.75. Experimental results demonstrated that our method significantly outperforms conventional Euler number computing algorithms. PMID:26636023
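
    For reference, the classical bit-quad (Gray) counting that such algorithms build on can be sketched directly in a few lines; the formula below counts quads with one, three and two diagonal foreground pixels, whereas the paper's contribution (needing only two patterns and about 1.75 pixel checks per quad) is not reproduced here.

```python
import numpy as np

def euler_number(img, connectivity=8):
    """Euler number of a binary image via 2x2 bit-quad counts (Gray's method).

    Counts quads with exactly one (C1) or three (C3) foreground pixels and
    diagonal quads (CD).  E = (C1 - C3 + 2*CD)/4 for 4-connectivity and
    (C1 - C3 - 2*CD)/4 for 8-connectivity.  This is the classical,
    unoptimized counting; the paper's algorithm reduces the work per quad.
    """
    b = np.pad(np.asarray(img, dtype=np.uint8), 1)
    q = b[:-1, :-1] + b[:-1, 1:] + b[1:, :-1] + b[1:, 1:]        # pixels per quad
    diag = ((b[:-1, :-1] == b[1:, 1:]) & (b[:-1, 1:] == b[1:, :-1])
            & (b[:-1, :-1] != b[:-1, 1:]))                       # the two diagonal quads
    c1, c3, cd = (q == 1).sum(), (q == 3).sum(), diag.sum()
    sign = 2 if connectivity == 4 else -2
    return (int(c1) - int(c3) + sign * int(cd)) // 4

if __name__ == "__main__":
    ring = np.zeros((7, 7), dtype=np.uint8)
    ring[1:6, 1:6] = 1
    ring[3, 3] = 0                       # one object with one hole
    print(euler_number(ring, 8))         # expected 0
    diag_pair = np.array([[1, 0], [0, 1]], dtype=np.uint8)
    print(euler_number(diag_pair, 8), euler_number(diag_pair, 4))  # 1, 2
```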

  4. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.; Plaskacz, Edward J.

    1989-01-01

    The adaptation of a finite element program with explicit time integration to a massively parallel SIMD (single instruction multiple data) computer, the CONNECTION Machine is described. The adaptation required the development of a new algorithm, called the exchange algorithm, in which all nodal variables are allocated to the element with an exchange of nodal forces at each time step. The architectural and C* programming language features of the CONNECTION Machine are also summarized. Various alternate data structures and associated algorithms for nonlinear finite element analysis are discussed and compared. Results are presented which demonstrate that the CONNECTION Machine is capable of outperforming the CRAY XMP/14.

  5. An efficient cuckoo search algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Ong, Pauline; Zainuddin, Zarita

    2013-04-01

    The cuckoo search algorithm, which reproduces the breeding strategy of the best-known brood parasitic bird, the cuckoo, has demonstrated its superiority in obtaining the global solution for numerical optimization problems. However, the fixed-step approach involved in its exploration and exploitation behaviour might slow down the search process considerably. In this regard, an improved cuckoo search algorithm with adaptive step size adjustment is introduced and its feasibility on a variety of benchmarks is validated. The obtained results show that the proposed scheme outperforms the standard cuckoo search algorithm in terms of convergence characteristics while preserving the fascinating features of the original method.
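
    A compact sketch of cuckoo search with a step size that shrinks over the iterations, a simple stand-in for the adaptive step-size adjustment proposed in the paper; the nest count, discovery probability, Gaussian steps in place of Lévy flights, and the sphere test function are all assumptions.

```python
import numpy as np

def sphere(x):                     # toy benchmark to minimize
    return float(np.sum(x**2))

def cuckoo_search(f, dim=5, n_nests=15, pa=0.25, iters=300,
                  lower=-5.0, upper=5.0, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(lower, upper, (n_nests, dim))
    fitness = np.array([f(x) for x in nests])
    best = nests[fitness.argmin()].copy()
    for t in range(iters):
        # Adaptive step: shrinks as the search progresses (simple stand-in
        # for the adaptive step-size rule of the paper).
        step = 1.0 * (1.0 - t / iters) + 0.01
        for i in range(n_nests):
            new = nests[i] + step * rng.normal(size=dim) * (nests[i] - best)
            new = np.clip(new, lower, upper)
            fn = f(new)
            j = rng.integers(n_nests)          # compare against a random nest
            if fn < fitness[j]:
                nests[j], fitness[j] = new, fn
        # Abandon a fraction pa of the worst nests and rebuild them randomly.
        n_abandon = int(pa * n_nests)
        worst = np.argsort(fitness)[-n_abandon:]
        nests[worst] = rng.uniform(lower, upper, (n_abandon, dim))
        fitness[worst] = [f(x) for x in nests[worst]]
        best = nests[fitness.argmin()].copy()
    return best, float(fitness.min())

if __name__ == "__main__":
    x_best, f_best = cuckoo_search(sphere)
    print("best value:", f_best)
```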

  6. Learning-based superresolution algorithm using quantized pattern and bimodal postprocessing for text images

    NASA Astrophysics Data System (ADS)

    Lee, Hui Jung; Choi, Dong-Yoon; Song, Byung Cheol

    2015-11-01

    This paper proposes a learning-based superresolution algorithm using text characteristics for text images. The proposed algorithm consists of a learning stage and an inference stage. In the learning stage, a sufficient number of low-resolution (LR) to high-resolution (HR) block pairs are first extracted from various LR-HR image pairs that are composed of texts. Then, we classify those block pairs into 512 clusters and, for each cluster, calculate the optimal two-dimensional (2-D) finite impulse response (FIR) filter to synthesize a high-quality HR block from an LR block and store the block-adaptive 2-D FIR filters in a dictionary with their associated index. In the inference stage, we find the best-matched candidate to each input LR block from the dictionary and synthesize the HR block using the optimal 2-D FIR filter. Finally, an HR image is produced via proper postprocessing. Experimental results show that the proposed algorithm provides superior visual quality to that of previous works and outperforms previous methods in terms of computational complexity.

  7. Single-Cell Tracking with PET using a Novel Trajectory Reconstruction Algorithm

    PubMed Central

    Lee, Keum Sil; Kim, Tae Jin

    2015-01-01

    Virtually all biomedical applications of positron emission tomography (PET) use images to represent the distribution of a radiotracer. However, PET is increasingly used in cell tracking applications, for which the “imaging” paradigm may not be optimal. Here we investigate an alternative approach, which consists in reconstructing the time-varying position of individual radiolabeled cells directly from PET measurements. As a proof of concept, we formulate a new algorithm for reconstructing the trajectory of one single moving cell directly from list-mode PET data. We model the trajectory as a 3D B-spline function of the temporal variable and use non-linear optimization to minimize the mean-square distance between the trajectory and the recorded list-mode coincidence events. Using Monte Carlo simulations (GATE), we show that this new algorithm can track a single source moving within a small-animal PET system with <3 mm accuracy provided that the activity of the cell [Bq] is greater than four times its velocity [mm/s]. The algorithm outperforms conventional ML-EM as well as the “minimum distance” method used for positron emission particle tracking (PEPT). The new method was also successfully validated using experimentally acquired PET data. In conclusion, we demonstrated the feasibility of a new method for tracking a single moving cell directly from PET list-mode data, at the whole-body level, for physiologically relevant activities and velocities. PMID:25423651

  8. Efficient sequential and parallel algorithms for record linkage

    PubMed Central

    Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar

    2014-01-01

    Background and objective Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer either from long running times or from restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Methods Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Results Our sequential and parallel algorithms have been tested on a real dataset of 1 083 878 records and synthetic datasets ranging in size from 50 000 to 9 000 000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). Conclusions We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm. PMID:24154837
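
    Two of the ideas above, collapsing exact duplicates before any pairwise work and clustering via connected components of a similarity graph, can be sketched compactly; here a union-find plays the role of the graph and difflib's SequenceMatcher ratio stands in for the edit-distance similarity, so this is only a toy illustration of the structure, not the reported algorithm.

```python
from difflib import SequenceMatcher
from itertools import combinations

def find(parent, i):                       # union-find helpers
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def union(parent, a, b):
    parent[find(parent, a)] = find(parent, b)

def link_records(records, threshold=0.85):
    # 1) collapse exact duplicates before any pairwise comparison
    #    (the role radix sorting plays in the paper)
    uniques = sorted(set(records))
    # 2) link similar records and take connected components as clusters
    parent = list(range(len(uniques)))
    for i, j in combinations(range(len(uniques)), 2):
        if SequenceMatcher(None, uniques[i], uniques[j]).ratio() >= threshold:
            union(parent, i, j)
    clusters = {}
    for i, rec in enumerate(uniques):
        clusters.setdefault(find(parent, i), []).append(rec)
    return list(clusters.values())

if __name__ == "__main__":
    recs = ["john smith 1980-01-02", "john smith 1980-01-02",   # exact duplicate
            "jon smith 1980-01-02", "mary jones 1975-07-30"]
    for cluster in link_records(recs):
        print(cluster)
```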

  9. Prediction Errors in Learning Drug Response from Gene Expression Data – Influence of Labeling, Sample Size, and Machine Learning Algorithm

    PubMed Central

    Bayer, Immanuel; Groth, Philip; Schneckener, Sebastian

    2013-01-01

    Model-based prediction is dependent on many choices ranging from the sample collection and prediction endpoint to the choice of algorithm and its parameters. Here we studied the effects of such choices, exemplified by predicting sensitivity (as IC50) of cancer cell lines towards a variety of compounds. For this, we used three independent sample collections and applied several machine learning algorithms for predicting a variety of endpoints for drug response. We compared all possible models for combinations of sample collections, algorithm, drug, and labeling to an identically generated null model. The predictability of treatment effects varies among compounds, i.e., response could be predicted for some compounds but not for all. The choice of sample collection plays a major role towards lowering the prediction error, as does sample size. However, we found that no algorithm was able to consistently outperform the others, and there was no significant difference between regression and two- or three-class predictors in this experimental setting. These results indicate that response-modeling projects should direct efforts mainly towards sample collection and data quality, rather than method adjustment. PMID:23894636

  10. ProDomAs, protein domain assignment algorithm using center-based clustering and independent dominating set.

    PubMed

    Ansari, Elnaz Saberi; Eslahchi, Changiz; Pezeshk, Hamid; Sadeghi, Mehdi

    2014-09-01

    Decomposition of structural domains is an essential task in classifying protein structures, predicting protein function, and many other proteomics problems. As the number of known protein structures in PDB grows exponentially, the need for accurate automatic domain decomposition methods becomes more essential. In this article, we introduce a bottom-up algorithm for assigning protein domains using a graph theoretical approach. This algorithm is based on a center-based clustering approach. For constructing initial clusters, members of an independent dominating set for the graph representation of a protein are considered as the centers. A distance matrix is then defined for these clusters. To obtain final domains, these clusters are merged using the compactness principle of domains and a method similar to the neighbor-joining algorithm considering some thresholds. The thresholds are computed using a training set consisting of 50 protein chains. The algorithm is implemented using C++ language and is named ProDomAs. To assess the performance of ProDomAs, its results are compared with seven automatic methods, against five publicly available benchmarks. The results show that ProDomAs outperforms other methods applied on the mentioned benchmarks. The performance of ProDomAs is also evaluated against 6342 chains obtained from ASTRAL SCOP 1.71. ProDomAs is freely available at http://www.bioinf.cs.ipm.ir/software/prodomas. PMID:24596179

  11. Interactive retinal vessel centreline extraction and boundary delineation using anisotropic fast marching and intensities consistency.

    PubMed

    Da Chen; Cohen, Laurent D

    2015-08-01

    In this paper, we propose a new interactive retinal vessel extraction method with anisotropic fast marching (AFM), based on the observation that a single vessel tends to have locally consistent intensities. Our goal is to extract both the centrelines and boundaries between two given points. The proposed method consists of two stages: the first stage roughly finds the vessel centrelines using AFM and local intensity consistency, while the second stage refines the centrelines from the previous stage using a constrained Riemannian-metric-based AFM and simultaneously obtains the vessel boundaries. Experiments show that our method outperforms the classical minimal path method [1]. PMID:26737257

  12. Robustness of Tree Extraction Algorithms from LIDAR

    NASA Astrophysics Data System (ADS)

    Dumitru, M.; Strimbu, B. M.

    2015-12-01

    Forest inventory is entering a new era as unmanned aerial systems (UAS) have increased the precision of measurements while reducing field effort and the price of data acquisition. A large number of algorithms were developed to identify various forest attributes from UAS data. The objective of the present research is to assess the robustness of two types of tree identification algorithms when UAS data are combined with digital elevation models (DEM). The algorithms use photogrammetric point clouds as input, which are subsequently rasterized. The first type of algorithm associates a tree crown with an inverted watershed (subsequently referred to as watershed based), while the second type is based on the simultaneous representation of a tree crown as an individual entity and of its relation with neighboring crowns (subsequently referred to as simultaneous representation). A DJI UAS equipped with a Sony a5100 was used to acquire images over an area in central Louisiana. The images were processed with Pix4D, and a photogrammetric point cloud with 50 points/m2 was attained. The DEM was obtained from a flight executed in 2013, which also supplied a LIDAR point cloud with 30 points/m2. The algorithms were tested on two plantations with different species and crown class complexities: one homogeneous (i.e., a mature loblolly pine plantation), and one heterogeneous (i.e., an unmanaged uneven-aged stand with mixed pine-hardwood species). Tree identification on the photogrammetric point cloud revealed that the simultaneous representation algorithm outperforms the watershed algorithm, irrespective of stand complexity. The watershed algorithm exhibits robustness to its parameters, but its results were worse than those obtained with most parameter sets of the simultaneous representation algorithm. The simultaneous representation algorithm is a better alternative to the watershed algorithm even when its parameters are not accurately estimated. Similar results were obtained when the two algorithms were run on the LIDAR point cloud.

  13. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  14. Electricity Load Forecasting Using Support Vector Regression with Memetic Algorithms

    PubMed Central

    Hu, Zhongyi; Xiong, Tao

    2013-01-01

    Electricity load forecasting is an important issue that is widely explored and examined in the power systems operation literature and in the literature on commercial transactions in electricity markets as well. Among the existing forecasting models, support vector regression (SVR) has gained much attention. Considering that the performance of SVR depends highly on its parameters, this study proposed a firefly algorithm (FA) based memetic algorithm (FA-MA) to appropriately determine the parameters of the SVR forecasting model. In the proposed FA-MA algorithm, the FA is applied to explore the solution space, and pattern search is used to conduct individual learning and thus enhance the exploitation of the FA. Experimental results confirm that the proposed FA-MA based SVR model can not only yield more accurate forecasting results than the other four evolutionary-algorithm-based SVR models and three well-known forecasting models, but can also outperform the hybrid algorithms in the related existing literature. PMID:24459425

  15. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for dealing with three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS). In addition, several improvement strategies are adopted: a stochastic disturbance factor is incorporated into the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced by a random linear method; and, finally, the tabu search algorithm is improved by appending a mutation operator. Through the combination of a variety of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a multi-extremum, multi-parameter global optimization problem. This is the theoretical principle behind the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of any single algorithm and gives full play to the advantages of each. The method is validated on the standard benchmark sequences in current use, namely Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms the single algorithms in the accuracy of the computed protein sequence energy values, which proves it to be an effective way to predict the structure of proteins. PMID:25069136

  16. Fast ordering algorithm for exact histogram specification.

    PubMed

    Nikolova, Mila; Steidl, Gabriele

    2014-12-01

    This paper provides a fast algorithm to order the integer gray values in digital (quantized) images in a meaningful, strict way. It can be used in any exact histogram specification-based application. Our algorithm relies on an ordering procedure based on a specialized variational approach. This variational method was shown to be superior to all other state-of-the-art ordering algorithms in terms of faithful total strict ordering, but not in speed. Indeed, the relevant functionals are in general difficult to minimize because their gradient is nearly flat over vast regions. In this paper, we propose a simple and fast fixed point algorithm to minimize these functionals. The fast convergence of our algorithm results from known analytical properties of the model. Our algorithm is equivalent to an iterative nonlinear filtering. Furthermore, we show that a particular form of the variational model gives rise to much faster convergence than other alternative forms. We demonstrate that only a few iterations of this filter yield almost the same pixel ordering as the minimizer. Thus, we apply only a few iteration steps to obtain images whose pixels can be ordered in a strict and faithful way. Numerical experiments confirm that our algorithm outperforms by far its main competitors. PMID:25347881

  17. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O`Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  18. Consistent integration of geo-information

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.

    2014-12-01

    Probabilistically formulated inverse problems can be seen as an application of data integration. Two types of information are (almost) always available: 1) geophysical data, and 2) information about geology and geologically plausible structures. The inverse problem consists of integrating the information available from geophysical data and geological information. In recent years inversion algorithms have emerged that allow integration of such different information. However, such methods only provide useful results if the geological and geophysical information provided is consistent. With weakly informed prior models and/or sparse, uncertain geophysical data, consistency problems typically do not arise. However, as data coverage and quality increase and ever more complex and detailed prior information can be quantified (using, e.g., multiple-point statistics), the risk of consistency problems increases. Inconsistency between two independent sources of information about the same subsurface model means that one or both sources of information must be wrong. We demonstrate, using cross-hole GPR tomographic data, that such consistency problems exist and that they can dramatically affect inversion results. The problem is twofold: 1) one will typically underestimate the error associated with the geophysical data, and 2) multiple-point-based prior models often provide such detailed a priori information that it will not be possible to find a priori acceptable models that fit the data within measurement uncertainties. We demonstrate that if inversion is forced on inconsistent information, then the solution to the inverse problem may be earth models that neither fit the data within their uncertainty, nor represent geologically realistic features. In the worst case such models will show artefacts that appear well resolved, and that can have a severe effect on subsequent flow modeling. We will demonstrate how such inconsistencies can be

  19. Marginal Consistency: Upper-Bounding Partition Functions over Commutative Semirings.

    PubMed

    Werner, Tomás

    2015-07-01

    Many inference tasks in pattern recognition and artificial intelligence lead to partition functions in which addition and multiplication are abstract binary operations forming a commutative semiring. By generalizing max-sum diffusion (one of the convergent message passing algorithms for approximate MAP inference in graphical models), we propose an iterative algorithm to upper bound such partition functions over commutative semirings. The iteration of the algorithm is remarkably simple: change any two factors of the partition function such that their product remains the same and their overlapping marginals become equal. In many commutative semirings, repeating this iteration for different pairs of factors converges to a fixed point when the overlapping marginals of every pair of factors coincide. We call this state marginal consistency. During the iterations, an upper bound on the partition function monotonically decreases. This abstract algorithm unifies several existing algorithms, including max-sum diffusion and basic constraint propagation (or local consistency) algorithms in constraint programming. We further construct a hierarchy of marginal consistencies of increasingly higher levels and show that any such level can be enforced by adding identity factors of higher arity (order). Finally, we discuss instances of the framework for several semirings, including the distributive lattice and the max-sum and sum-product semirings. PMID:26352452

  20. Consistency, Markedness and Language Change: On the Notion 'Consistent Language.'

    ERIC Educational Resources Information Center

    Smith, N. V.

    1981-01-01

    Explores markedness of languages and language change in relation to their roles in the consistency of language. Concludes typology provides no explanations in itself, but rather through data which need explanations and form a testing ground for linguistic theories. (Author/BK)

  1. A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Thammano, Arit; Teekeng, Wannaporn

    2015-05-01

    The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. This proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) the mutation operation with tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms in the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving the combinatorial optimization problems. It outperforms all state-of-the-art algorithms on all benchmark problems in terms of the ability to achieve the optimal solution and the computational time.
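
    Only the selection and mutation operators are named above, so the sketch below illustrates plain fitness-proportionate (roulette-wheel) selection with a simple fuzzy-style membership weighting. The triangular membership and the makespan-based fitness in the usage comment are assumptions, not the paper's fuzzy roulette wheel formulation or its tabu-list mutation.

    import random

    def fuzzy_roulette_select(population, fitness, k=2):
        """Select k parents with probability proportional to an (assumed) fuzzy membership of fitness."""
        f_min, f_max = min(fitness), max(fitness)
        span = (f_max - f_min) or 1.0
        # assumed fuzzy membership: how strongly each individual belongs to the set "good solutions"
        membership = [(f - f_min) / span + 1e-6 for f in fitness]
        total = sum(membership)
        probs = [m / total for m in membership]
        return random.choices(population, weights=probs, k=k)

    # usage (makespan() is a hypothetical schedule-evaluation function):
    # parents = fuzzy_roulette_select(pop, [1.0 / makespan(s) for s in pop])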

  2. Generalized arc consistency for global cardinality constraint

    SciTech Connect

    Regin, J.C.

    1996-12-31

    A global cardinality constraint (gcc) is specified in terms of a set of variables X = (x_1, ..., x_p) which take their values in a subset of V = (v_1, ..., v_d). It constrains the number of times a value v_i ∈ V is assigned to a variable in X to lie in an interval [l_i, c_i]. Cardinality constraints have proved very useful in many real-life problems, such as scheduling, timetabling, or resource allocation. A gcc is more general than a constraint of difference, which requires each interval to be [0, 1]. In this paper, we present an efficient way of implementing generalized arc consistency for a gcc. The algorithm we propose is based on a new theorem of flow theory. Its space complexity is O(|X| × |V|) and its time complexity is O(|X|² × |V|). We also show how this algorithm can efficiently be combined with other filtering techniques.

  3. Exploration of new multivariate spectral calibration algorithms.

    SciTech Connect

    Van Benthem, Mark Hilary; Haaland, David Michael; Melgaard, David Kennett; Martin, Laura Elizabeth; Wehlburg, Christine Marie; Pell, Randy J.; Guenard, Robert D.

    2004-03-01

    A variety of multivariate calibration algorithms for quantitative spectral analyses were investigated and compared, and new algorithms were developed in the course of this Laboratory Directed Research and Development project. We were able to demonstrate the ability of the hybrid classical least squares/partial least squares (CLS/PLS) calibration algorithms to maintain calibrations in the presence of spectrometer drift and to transfer calibrations between spectrometers from the same or different manufacturers. These methods were found to be as good as or better in prediction ability than the commonly used partial least squares (PLS) method. We also present the theory for an entirely new class of algorithms labeled augmented classical least squares (ACLS) methods. New factor selection methods are developed and described for the ACLS algorithms. These factor selection methods are demonstrated using near-infrared spectra collected from a system of dilute aqueous solutions. The ACLS algorithm is also shown to provide improved ease of use and better prediction ability than PLS when transferring near-infrared calibrations between spectrometers from the same manufacturer. Finally, simulations incorporating either ideal or realistic errors in the spectra were used to compare the prediction abilities of the new ACLS algorithm with that of PLS. We found that in the presence of realistic errors with non-uniform spectral error variance across spectral channels or with spectral errors correlated between frequency channels, ACLS methods generally outperformed the more commonly used PLS method. These results demonstrate the need for realistic error structure in simulations when the prediction abilities of various algorithms are compared. The combination of equal or superior prediction ability and the ease of use of the ACLS algorithms make the new ACLS methods the preferred algorithms to use for multivariate spectral calibrations.
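
    As background for the classical-least-squares family referred to above, a brief numpy sketch of CLS calibration and prediction is given below, plus one possible way to augment the spectral matrix with residual shapes in the spirit of ACLS. The augmentation rule (top singular vectors of the calibration residual) and all matrices are illustrative assumptions, not the report's factor selection methods.

    import numpy as np

    def cls_calibrate(C, A):
        """Estimate component spectra K from known concentrations C and spectra A (model A ≈ C K)."""
        K, *_ = np.linalg.lstsq(C, A, rcond=None)
        return K

    def cls_predict(A, K):
        """Estimate concentrations for new spectra A given the spectra matrix K."""
        C_hat, *_ = np.linalg.lstsq(K.T, A.T, rcond=None)
        return C_hat.T

    def augment(K, A_cal, C_cal, n_extra=1):
        """ACLS-style idea (assumed form): append dominant residual shapes to K so that
        unmodeled spectral variation is absorbed by the extra rows rather than biasing
        the analyte concentration estimates."""
        residual = A_cal - C_cal @ K
        _, _, Vt = np.linalg.svd(residual, full_matrices=False)
        return np.vstack([K, Vt[:n_extra]])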

  4. A sparse reconstruction algorithm for ultrasonic images in nondestructive testing.

    PubMed

    Guarneri, Giovanni Alfredo; Pipa, Daniel Rodrigues; Neves Junior, Flávio; de Arruda, Lúcia Valéria Ramos; Zibetti, Marcelo Victor Wüst

    2015-01-01

    Ultrasound imaging systems (UIS) are essential tools in nondestructive testing (NDT). In general, the quality of images depends on two factors: system hardware features and image reconstruction algorithms. This paper presents a new image reconstruction algorithm for ultrasonic NDT. The algorithm reconstructs images from A-scan signals acquired by an ultrasonic imaging system with a monostatic transducer in pulse-echo configuration. It is based on regularized least squares using an l1 regularization norm. The method is tested to reconstruct an image of a point-like reflector, using both simulated and real data. The resolution of the reconstructed image is compared with four traditional ultrasonic imaging reconstruction algorithms: B-scan, SAFT, ω-k SAFT and regularized least squares (RLS). The method demonstrates significant resolution improvement when compared with B-scan (about 91% using real data). The proposed scheme also outperforms traditional algorithms in terms of signal-to-noise ratio (SNR). PMID:25905700
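
    For readers unfamiliar with l1-regularized least squares, a minimal ISTA sketch of the generic problem minimize 0.5*||y - Hx||² + λ||x||₁ is shown below. The forward operator H, the data y, and the parameter values are placeholder assumptions; the paper's UIS acquisition model and its specific solver are not reproduced.

    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(H, y, lam, n_iter=200):
        """Iterative shrinkage-thresholding for min 0.5*||y - Hx||^2 + lam*||x||_1."""
        x = np.zeros(H.shape[1])
        L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = H.T @ (H @ x - y)
            x = soft_threshold(x - grad / L, lam / L)
        return x

    # usage with a random test operator and a sparse ground truth:
    rng = np.random.default_rng(1)
    H = rng.standard_normal((64, 256))
    x_true = np.zeros(256); x_true[[10, 50, 200]] = [1.0, -0.5, 0.8]
    y = H @ x_true + 0.01 * rng.standard_normal(64)
    x_hat = ista(H, y, lam=0.05)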

  5. A Sparse Reconstruction Algorithm for Ultrasonic Images in Nondestructive Testing

    PubMed Central

    Guarneri, Giovanni Alfredo; Pipa, Daniel Rodrigues; Junior, Flávio Neves; de Arruda, Lúcia Valéria Ramos; Zibetti, Marcelo Victor Wüst

    2015-01-01

    Ultrasound imaging systems (UIS) are essential tools in nondestructive testing (NDT). In general, the quality of images depends on two factors: system hardware features and image reconstruction algorithms. This paper presents a new image reconstruction algorithm for ultrasonic NDT. The algorithm reconstructs images from A-scan signals acquired by an ultrasonic imaging system with a monostatic transducer in pulse-echo configuration. It is based on regularized least squares using an l1 regularization norm. The method is tested to reconstruct an image of a point-like reflector, using both simulated and real data. The resolution of the reconstructed image is compared with four traditional ultrasonic imaging reconstruction algorithms: B-scan, SAFT, ω-k SAFT and regularized least squares (RLS). The method demonstrates significant resolution improvement when compared with B-scan (about 91% using real data). The proposed scheme also outperforms traditional algorithms in terms of signal-to-noise ratio (SNR). PMID:25905700

  6. Averaging Internal Consistency Reliability Coefficients

    ERIC Educational Resources Information Center

    Feldt, Leonard S.; Charter, Richard A.

    2006-01-01

    Seven approaches to averaging reliability coefficients are presented. Each approach starts with a unique definition of the concept of "average," and no approach is more correct than the others. Six of the approaches are applicable to internal consistency coefficients. The seventh approach is specific to alternate-forms coefficients. Although the…

  7. Attitude Consistency Among American Youth.

    ERIC Educational Resources Information Center

    Mott, Frank L.; Mott, Susan H.

    Attitudes of youth (ages 14-21) toward fertility expectations and women's roles are examined for consistency (e.g., whether high career expectations are correlated with a desire for fewer children). Approximately 12,000 White, Black, and Hispanic youth rated their attitudes toward statements that a woman's place is in the home, employment of wives…

  8. Consistent Adjoint Driven Importance Sampling using Space, Energy and Angle

    SciTech Connect

    Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M

    2012-08-01

    For challenging radiation transport problems, hybrid methods combine the accuracy of Monte Carlo methods with the global information present in deterministic methods. One of the most successful hybrid methods is CADIS, Consistent Adjoint Driven Importance Sampling. This method uses a deterministic adjoint solution to construct a biased source distribution and consistent weight windows to optimize a specific tally in a Monte Carlo calculation. The method has been implemented into transport codes using just the spatial and energy information from the deterministic adjoint and has been used in many applications to compute tallies with much higher figures-of-merit than analog calculations. CADIS also outperforms user-supplied importance values, which usually take long periods of user time to develop. This work extends CADIS to develop weight windows that are a function of the position, energy, and direction of the Monte Carlo particle. Two types of consistent source biasing are presented: one method that biases the source in space and energy while preserving the original directional distribution, and one method that biases the source in space, energy, and direction. Seven simple example problems are presented which compare the use of the standard space/energy CADIS with the new space/energy/angle treatments.

  9. Consistency-based ellipse detection method for complicated images

    NASA Astrophysics Data System (ADS)

    Zhang, Lijun; Huang, Xuexiang; Feng, Weichun; Liang, Shuli; Hu, Tianjian

    2016-05-01

    Accurate ellipse detection in complicated images is a challenging problem due to corruptions from image clutter, noise, or occlusion of other objects. To cope with this problem, an edge-following-based ellipse detection method is proposed which promotes the performances of the subprocesses based on consistency. The ellipse detector models edge connectivity by line segments and exploits inconsistent endpoints of the line segments to split the edge contours into smooth arcs. The smooth arcs are further refined with a novel arc refinement method which iteratively improves the consistency degree of the smooth arc. A two-phase arc integration method is developed to group disconnected elliptical arcs belonging to the same ellipse, and two constraints based on consistency are defined to increase the effectiveness and speed of the merging process. Finally, an efficient ellipse validation method is proposed to evaluate the saliency of the elliptic hypotheses. Detailed evaluation on synthetic images shows that our method outperforms other state-of-the-art ellipse detection methods in terms of effectiveness and speed. Additionally, we test our detector on three challenging real-world datasets. The F-measure score and execution time of results demonstrate that our method is effective and fast in complicated images. Therefore, the proposed method is suitable for practical applications.

  10. Feature Selection via Modified Gravitational Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel

    2015-03-01

    Feature selection is the process of selecting a subset of relevant and most informative features, which efficiently represents the input data. We propose a feature selection algorithm based on the n-dimensional gravitational optimization algorithm (NGOA), which relies on the principle of gravitational fields. The objective function of the optimization algorithm is a non-linear function of variables, called masses, that are defined based on the extracted features. The forces between the masses, as well as their new locations, are calculated using the value of the objective function and the values of the masses. We extracted a variety of features by applying different wavelet transforms and statistical methods to FLAIR and T1-weighted MR brain images. There are two classes: normal and abnormal tissue. The extracted features are divided into groups of five features. The best feature in each group is selected using the n-dimensional gravitational optimization algorithm and a support vector machine classifier. The selected features from each group then form new groups of five features, and the process is repeated until the desired number of features is selected. The advantage of the NGOA algorithm is that the possibility of being drawn into a local optimal solution is very low. The experimental results show that our method outperforms some standard feature selection algorithms on both real data and simulated brain tumor data.

  11. Project resource reallocation algorithm

    NASA Technical Reports Server (NTRS)

    Myers, J. E.

    1981-01-01

    A methodology for adjusting baseline cost estimates according to project schedule changes is described. An algorithm which performs a linear expansion or contraction of the baseline project resource distribution in proportion to the project schedule expansion or contraction is presented. Input to the algorithm consists of the deck of cards (PACE input data) prepared for the baseline project schedule as well as a specification of the nature of the baseline schedule change. Output of the algorithm is a new deck of cards with all work breakdown structure block and element of cost estimates redistributed for the new project schedule. This new deck can be processed through PACE to produce a detailed cost estimate for the new schedule.

  12. A hybrid artificial bee colony algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Alqattan, Zakaria N.; Abdullah, Rosni

    2015-02-01

    The Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it competitive with other search algorithms in the area of optimization, such as the genetic algorithm (GA) and particle swarm optimization (PSO). However, the local search process of ABC and its bee-movement (solution improvement) equation still have some weaknesses. ABC is good at avoiding being trapped in local optima, but it spends much of its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a hybrid particle-movement ABC algorithm, called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to experimentally test the HPABC algorithm. The results illustrate that the HPABC algorithm can outperform the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).
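
    To make the contrast concrete, the sketch below shows the standard ABC neighbour-based improvement step next to a PSO-style particle-movement step of the kind such a hybrid can adapt. The abstract does not give HPABC's actual update equation, so the particle-movement form and its coefficients are illustrative assumptions only.

    import random

    def abc_update(x, x_k, j):
        """Standard ABC step: perturb dimension j of solution x toward/away from a random neighbour x_k."""
        phi = random.uniform(-1.0, 1.0)
        v = list(x)
        v[j] = x[j] + phi * (x[j] - x_k[j])
        return v

    def particle_move_update(x, personal_best, global_best, j, w=0.7, c1=1.5, c2=1.5):
        """PSO-inspired step (assumed form): pull dimension j toward the personal and global bests."""
        r1, r2 = random.random(), random.random()
        v = list(x)
        v[j] = x[j] + w * (c1 * r1 * (personal_best[j] - x[j]) + c2 * r2 * (global_best[j] - x[j]))
        return v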

  13. Consistent interpretations of quantum mechanics

    NASA Astrophysics Data System (ADS)

    Omnès, Roland

    1992-04-01

    Within the last decade, significant progress has been made towards a consistent and complete reformulation of the Copenhagen interpretation (an interpretation consisting in a formulation of the experimental aspects of physics in terms of the basic formalism; it is consistent if free from internal contradiction and complete if it provides precise predictions for all experiments). The main steps involved decoherence (the transition from linear superpositions of macroscopic states to a mixing), Griffiths histories describing the evolution of quantum properties, a convenient logical structure for dealing with histories, and also some progress in semiclassical physics, which was made possible by new methods. The main outcome is a theory of phenomena, viz., the classically meaningful properties of a macroscopic system. It shows in particular how and when determinism is valid. This theory can be used to give a deductive form to measurement theory, which now covers some cases that were initially devised as counterexamples against the Copenhagen interpretation. These theories are described, together with their applications to some key experiments and some of their consequences concerning epistemology.

  14. Kernel simplex growing algorithm for hyperspectral endmember extraction

    NASA Astrophysics Data System (ADS)

    Zhao, Liaoying; Zheng, Junpeng; Li, Xiaorun; Wang, Lijiao

    2014-01-01

    In order to effectively extract endmembers for hyperspectral imagery where the linear mixing model may not be appropriate due to multiple scattering effects, this paper extends the simplex growing algorithm (SGA) to its kernel version. A new simplex volume formula without dimension reduction is used in SGA to form a new simplex growing algorithm (NSGA). The original data are nonlinearly mapped into a high-dimensional space where the scattering effects can be ignored. To avoid determining the complex nonlinear mapping explicitly, a kernel function is used to extend the NSGA to the kernel NSGA (KNSGA). Experimental results on simulated and real data show that the proposed KNSGA approach outperforms SGA and NSGA.

  15. Performance comparison of six independent components analysis algorithms for fetal signal extraction from real fMCG data

    NASA Astrophysics Data System (ADS)

    Hild, Kenneth E.; Alleva, Giovanna; Nagarajan, Srikantan; Comani, Silvia

    2007-01-01

    In this study we compare the performance of six independent components analysis (ICA) algorithms on 16 real fetal magnetocardiographic (fMCG) datasets for the application of extracting the fetal cardiac signal. We also compare the extraction results for real data with the results previously obtained for synthetic data. The six ICA algorithms are FastICA, CubICA, JADE, Infomax, MRMI-SIG and TDSEP. The results obtained using real fMCG data indicate that the FastICA method consistently outperforms the others in regard to separation quality and that the performance of an ICA method that uses temporal information suffers in the presence of noise. These two results confirm the previous results obtained using synthetic fMCG data. There were also two notable differences between the studies based on real and synthetic data. The differences are that all six ICA algorithms are independent of gestational age and sensor dimensionality for synthetic data, but depend on gestational age and sensor dimensionality for real data. It is possible to explain these differences by assuming that the number of point sources needed to completely explain the data is larger than the dimensionality used in the ICA extraction.

  16. A novel algorithm for simultaneous SNP selection in high-dimensional genome-wide association studies

    PubMed Central

    2012-01-01

    Background Identification of causal SNPs in most genome-wide association studies relies on approaches that consider each SNP individually. However, there is a strong correlation structure among SNPs that needs to be taken into account. Hence, modern, computationally expensive regression methods that consider all markers simultaneously, and thus incorporate dependencies among SNPs, are increasingly employed for SNP selection. Results We develop a novel multivariate algorithm for large-scale SNP selection using CAR score regression, a promising new approach for prioritizing biomarkers. Specifically, we propose a computationally efficient procedure for shrinkage estimation of CAR scores from high-dimensional data. Subsequently, we conduct a comprehensive comparison study including five advanced regression approaches (boosting, lasso, NEG, MCP, and CAR score) and a univariate approach (marginal correlation) to determine their effectiveness in finding true causal SNPs. Conclusions Simultaneous SNP selection is a challenging task. We demonstrate that our CAR score-based algorithm consistently outperforms all competing approaches, both uni- and multivariate, in terms of correctly recovered causal SNPs and SNP ranking. An R package implementing the approach as well as R code to reproduce the complete study presented here is available from http://strimmerlab.org/software/care/. PMID:23113980
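
    For illustration, a small numpy sketch of shrinkage-based CAR scores is given below: the marginal correlations with the response are decorrelated by the inverse square root of a shrunken predictor correlation matrix, and SNPs can then be ranked by squared CAR score. The fixed shrinkage intensity lam is a placeholder assumption; the analytic shrinkage estimator used by the care package is not reproduced.

    import numpy as np

    def car_scores(X, y, lam=0.1):
        """CAR scores R^(-1/2) r_xy computed with a simple shrunken correlation matrix."""
        Xs = (X - X.mean(axis=0)) / X.std(axis=0)        # standardized predictors (SNPs)
        ys = (y - y.mean()) / y.std()                    # standardized response
        n = X.shape[0]
        R = (Xs.T @ Xs) / n                              # predictor correlation matrix
        R = (1.0 - lam) * R + lam * np.eye(R.shape[0])   # shrink toward the identity
        r_xy = (Xs.T @ ys) / n                           # marginal SNP-response correlations
        w, V = np.linalg.eigh(R)                         # R is symmetric positive definite
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T @ r_xy

    # SNP ranking: largest squared CAR score first
    # ranking = np.argsort(car_scores(X, y) ** 2)[::-1]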

  17. A Hybrid alldifferent-Tabu Search Algorithm for Solving Sudoku Puzzles.

    PubMed

    Soto, Ricardo; Crawford, Broderick; Galleguillos, Cristian; Paredes, Fernando; Norero, Enrique

    2015-01-01

    The Sudoku problem is a well-known logic-based puzzle of combinatorial number-placement. It consists in filling an n² × n² grid, composed of n columns, n rows, and n subgrids, each one containing distinct integers from 1 to n². Such a puzzle belongs to the NP-complete collection of problems, to which there exist diverse exact and approximate methods able to solve it. In this paper, we propose a new hybrid algorithm that smartly combines a classic tabu search procedure with the alldifferent global constraint from the constraint programming world. The alldifferent constraint is known to be efficient for domain filtering in the presence of constraints that must be pairwise different, which are exactly the kind of constraints that Sudokus involve. This ability clearly alleviates the work of the tabu search, resulting in a faster and more robust approach for solving Sudokus. We illustrate interesting experimental results where our proposed algorithm outperforms the best results previously reported by hybrids and approximate methods. PMID:26078751

  18. On a vector space representation in genetic algorithms for sensor scheduling in wireless sensor networks.

    PubMed

    Martins, F V C; Carrano, E G; Wanner, E F; Takahashi, R H C; Mateus, G R; Nakamura, F G

    2014-01-01

    Recent works raised the hypothesis that the assignment of a geometry to the decision variable space of a combinatorial problem could be useful both for providing meaningful descriptions of the fitness landscape and for supporting the systematic construction of evolutionary operators (the geometric operators) that make consistent use of the space's geometric properties in the search for problem optima. This paper introduces some new geometric operators that constitute the realization of searches along the combinatorial space versions of the geometric entities descent directions and subspaces. The new geometric operators are stated in the specific context of the wireless sensor network dynamic coverage and connectivity problem (WSN-DCCP). A genetic algorithm (GA) is developed for the WSN-DCCP using the proposed operators and compared with a formulation based on integer linear programming (ILP) which is solved with exact methods. That ILP formulation adopts a proxy objective function based on the minimization of energy consumption in the network, in order to approximate the objective of network lifetime maximization, and a greedy approach for dealing with the system's dynamics. To the authors' knowledge, the proposed GA is the first algorithm to outperform, in terms of network lifetime, the solutions synthesized by the ILP formulation, while also running in much smaller computational times for large instances. PMID:24102647

  19. Bands selection and classification of hyperspectral images based on hybrid kernels SVM by evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Yan-Yan; Li, Dong-Sheng

    2016-01-01

    Hyperspectral images (HSI) consist of many closely spaced bands carrying most of the object information. Due to their high dimensionality and data volume, however, it is hard to obtain satisfactory classification performance. In order to reduce the HSI data dimensionality in preparation for high classification accuracy, we propose to combine a band selection method based on artificial immune systems (AIS) with a hybrid-kernel support vector machine (SVM-HK) algorithm. After comparing different kernels for hyperspectral analysis, the approach mixes the radial basis function kernel (RBF-K) with the sigmoid kernel (Sig-K) and applies the optimized hybrid kernel in the SVM classifiers. The SVM-HK algorithm is then used to guide the band selection of an improved version of AIS. The AIS is composed of clonal selection and elite antibody mutation, including an evaluation process with an optional index factor (OIF). Classification experiments were performed on an HRS dataset of the San Diego Naval Base acquired by AVIRIS; the results show that the method efficiently removes band redundancy while outperforming the traditional SVM classifier.
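
    A minimal scikit-learn sketch of an SVM with a hybrid RBF + sigmoid kernel, in the spirit of the SVM-HK classifier described above, is shown below. The mixing weight and kernel parameters are placeholder assumptions rather than the paper's tuned values, and the AIS-based band selection step is not reproduced.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import rbf_kernel, sigmoid_kernel

    def hybrid_kernel(X, Y, alpha=0.7, gamma=0.1, coef0=1.0):
        """Convex combination of an RBF kernel and a sigmoid kernel (assumed weights)."""
        return alpha * rbf_kernel(X, Y, gamma=gamma) + (1 - alpha) * sigmoid_kernel(X, Y, gamma=gamma, coef0=coef0)

    # usage on placeholder data (rows = pixels, columns = selected bands):
    rng = np.random.default_rng(0)
    X_train, y_train = rng.random((100, 20)), rng.integers(0, 2, 100)
    clf = SVC(kernel=hybrid_kernel).fit(X_train, y_train)
    pred = clf.predict(rng.random((5, 20)))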

  20. The successively temporal error concealment algorithm using error-adaptive block matching principle

    NASA Astrophysics Data System (ADS)

    Lee, Yu-Hsuan; Wu, Tsai-Hsing; Chen, Chao-Chyun

    2014-09-01

    Generally, temporal error concealment (TEC) adopts the blocks around the corrupted block (CB) as the search pattern to find the best-match block in the previous frame. Once the CB is recovered, it is referred to as the recovered block (RB). Although the RB can serve as the search pattern to find the best-match block of another CB, the RB is not the same as its original block (OB). The error between the RB and its OB limits the performance of TEC. The successively temporal error concealment (STEC) algorithm is proposed to alleviate this error. The STEC procedure consists of tier-1 and tier-2. Tier-1 divides a corrupted macroblock into four corrupted 8 × 8 blocks and generates a recovering order for them. The corrupted 8 × 8 block in first place of the recovering order is recovered in tier-1, and the remaining 8 × 8 CBs are recovered in tier-2 along the recovering order. In tier-2, the error-adaptive block matching principle (EA-BMP) is proposed for using the RB as the search pattern to recover the remaining corrupted 8 × 8 blocks. The proposed STEC outperforms sophisticated TEC algorithms by at least 0.3 dB in average PSNR at a packet error rate of 20%.

  1. A Hybrid alldifferent-Tabu Search Algorithm for Solving Sudoku Puzzles

    PubMed Central

    Crawford, Broderick; Paredes, Fernando; Norero, Enrique

    2015-01-01

    The Sudoku problem is a well-known logic-based puzzle of combinatorial number-placement. It consists in filling an n² × n² grid, composed of n columns, n rows, and n subgrids, each one containing distinct integers from 1 to n². Such a puzzle belongs to the NP-complete collection of problems, to which there exist diverse exact and approximate methods able to solve it. In this paper, we propose a new hybrid algorithm that smartly combines a classic tabu search procedure with the alldifferent global constraint from the constraint programming world. The alldifferent constraint is known to be efficient for domain filtering in the presence of constraints that must be pairwise different, which are exactly the kind of constraints that Sudokus involve. This ability clearly alleviates the work of the tabu search, resulting in a faster and more robust approach for solving Sudokus. We illustrate interesting experimental results where our proposed algorithm outperforms the best results previously reported by hybrids and approximate methods. PMID:26078751

  2. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  3. A scalable and practical one-pass clustering algorithm for recommender system

    NASA Astrophysics Data System (ADS)

    Khalid, Asra; Ghazanfar, Mustansar Ali; Azam, Awais; Alahmari, Saad Ali

    2015-12-01

    KMeans clustering-based recommendation algorithms have been proposed to increase the scalability of recommender systems. One potential drawback of these algorithms is that they perform training offline and hence cannot accommodate incremental updates with the arrival of new data, making them unsuitable for dynamic environments. From this line of research, a new clustering algorithm called One-Pass is proposed, which is simple, fast, and accurate. We show empirically that the proposed algorithm outperforms K-Means in terms of recommendation and training time while maintaining a good level of accuracy.
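
    The abstract does not spell out the One-Pass update rule, so the sketch below only illustrates the general shape of a single-pass, threshold-based clusterer with incremental centroid updates; the distance threshold and the running-mean update are assumptions, not the authors' algorithm.

    import numpy as np

    def one_pass_cluster(points, threshold=1.0):
        """Assign each point, in one pass, to the nearest centroid within `threshold`,
        otherwise start a new cluster; centroids are updated as running means."""
        centroids, counts, labels = [], [], []
        for p in np.asarray(points, dtype=float):
            if centroids:
                dists = [np.linalg.norm(p - c) for c in centroids]
                j = int(np.argmin(dists))
            if not centroids or dists[j] > threshold:
                centroids.append(p.copy())        # new cluster seeded at this point
                counts.append(1)
                labels.append(len(centroids) - 1)
            else:
                counts[j] += 1                    # incremental (running-mean) centroid update
                centroids[j] += (p - centroids[j]) / counts[j]
                labels.append(j)
        return labels, centroids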

  4. On the consistency of MPS

    NASA Astrophysics Data System (ADS)

    Souto-Iglesias, Antonio; Macià, Fabricio; González, Leo M.; Cercos-Pita, Jose L.

    2013-03-01

    The consistency of the Moving Particle Semi-implicit (MPS) method in reproducing the gradient, divergence and Laplacian differential operators is discussed in the present paper. Its relation to the Smoothed Particle Hydrodynamics (SPH) method is rigorously established. The application of the MPS method to solve the Navier-Stokes equations using a fractional step approach is treated, unveiling inconsistency problems when solving the Poisson equation for the pressure. A new corrected MPS method incorporating boundary terms is proposed. Applications to one dimensional boundary value Dirichlet and mixed Neumann-Dirichlet problems and to two-dimensional free-surface flows are presented.

  5. Memory for Hand-Use Depends on Consistency of Handedness

    PubMed Central

    Edlin, James M.; Carris, Emily K.; Lyle, Keith B.

    2013-01-01

    Individuals who do not consistently use the same hand to perform unimanual tasks (inconsistent-handed) outperform consistent right- and left-handed individuals on tests of episodic memory. We explored whether the inconsistent-hander (ICH) memory advantage extends to memory for unimanual hand use itself. Are ICHs better able to remember which hand they used to perform actions? Opposing predictions are possible, stemming from the finding that some regions of the corpus callosum are larger in ICHs, especially those that connect motor areas. One hypothesis is that greater callosally mediated interhemispheric interaction produces ICHs’ superior retrieval of episodic memories, and this may extend to episodic memories for hand use. Alternatively, we also hypothesized that greater interhemispheric interaction could produce more bilateral activation in motor areas during the performance and retrieval of unimanual actions. This could interfere with ICHs’ ability to remember which hand they used. To test these competing predictions in the current study, consistent- and inconsistent-handers performed unimanual actions, half of which required manipulating objects and half of which did not. Each action was performed four times in one of five conditions that differed in the ratio of left to right hand use: always left (4:0), usually left (3:1), equal (2:2), usually right (1:3), or always right (0:4). We compared consistent- and inconsistent-handers on recall of the left:right ratio for each action. ICHs remembered how they performed actions better than consistent-handers, regardless of ratio. These findings provide another example of superior episodic retrieval in ICHs. We discuss how greater interaction might benefit memory for hand use. PMID:24027522

  6. MARGA: multispectral adaptive region growing algorithm for brain extraction on axial MRI.

    PubMed

    Roura, Eloy; Oliver, Arnau; Cabezas, Mariano; Vilanova, Joan C; Rovira, Alex; Ramió-Torrentà, Lluís; Lladó, Xavier

    2014-02-01

    Brain extraction, also known as skull stripping, is one of the most important preprocessing steps for many automatic brain image analyses. In this paper we present a new approach called the Multispectral Adaptive Region Growing Algorithm (MARGA) to perform the skull stripping process. MARGA is based on a region growing (RG) algorithm which uses the complementary information provided by conventional magnetic resonance images (MRI), such as T1-weighted and T2-weighted, to perform the brain segmentation. MARGA can be seen as an extension of the skull stripping method proposed by Park and Lee (2009) [1], enabling its use in both axial views and low-quality images. Following the same idea, we first obtain seed regions that are then spread using a 2D RG algorithm which behaves differently in specific zones of the brain. This adaptation allows the method to deal with the fact that middle MRI slices have better image contrast between the brain and non-brain regions than superior and inferior brain slices, where the contrast is smaller. MARGA is validated using three different databases: 10 simulated brains from the BrainWeb database; 2 data sets from the National Alliance for Medical Image Computing (NAMIC) database, the first one consisting of 10 normal brains and 10 brains of schizophrenic patients acquired with a 3T GE scanner, and the second one consisting of 5 brains from lupus patients acquired with a 3T Siemens scanner; and 10 brains of multiple sclerosis patients acquired with a 1.5T scanner. We have qualitatively and quantitatively compared MARGA with the well-known Brain Extraction Tool (BET), Brain Surface Extractor (BSE) and Statistical Parametric Mapping (SPM) approaches. The obtained results demonstrate the validity of MARGA, outperforming the results of those standard techniques. PMID:24380649

  7. Efficient Record Linkage Algorithms Using Complete Linkage Clustering

    PubMed Central

    Mamun, Abdullah-Al; Aseltine, Robert; Rajasekaran, Sanguthevar

    2016-01-01

    Datasets from different agencies often contain records for the same individuals. Linking these datasets to identify all the records belonging to the same individuals is a crucial and challenging problem, especially given the large volumes of data. A large number of available algorithms for record linkage are prone to either time inefficiency or low accuracy in finding matches and non-matches among the records. In this paper we propose efficient as well as reliable sequential and parallel algorithms for the record linkage problem employing hierarchical clustering methods. We employ complete linkage hierarchical clustering algorithms to address this problem. In addition to hierarchical clustering, we also use two other techniques: elimination of duplicate records and blocking. Our algorithms use sorting as a sub-routine to identify identical copies of records. We have tested our algorithms on datasets with millions of synthetic records. Experimental results show that our algorithms achieve nearly 100% accuracy. Parallel implementations achieve almost linear speedups. Time complexities of these algorithms do not exceed those of previous best-known algorithms. Our proposed algorithms outperform previous best-known algorithms in terms of accuracy while consuming reasonable run times. PMID:27124604
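
    The two supporting techniques named above, blocking and sort-based detection of identical records, are easy to illustrate; the sketch below does only that. The field names and the blocking key are hypothetical, and the complete-linkage clustering that the paper layers on top of these steps is not reproduced.

    from collections import defaultdict

    def blocks(records, key=lambda r: (r["last_name"][:1], r["birth_year"])):
        """Group records into blocks so that only records sharing a key are later compared."""
        grouped = defaultdict(list)
        for r in records:
            grouped[key(r)].append(r)
        return grouped

    def identical_groups(records):
        """Sort records and collect runs of exact duplicates (the sorting sub-routine)."""
        if not records:
            return []
        ordered = sorted(records, key=lambda r: tuple(sorted(r.items())))
        groups, current = [], [ordered[0]]
        for prev, cur in zip(ordered, ordered[1:]):
            if cur == prev:
                current.append(cur)
            else:
                groups.append(current)
                current = [cur]
        groups.append(current)
        return groups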

  8. Maintaining consistency in distributed systems

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs, and is controlled using mutual exclusion constructs, such as semaphores and monitors. Where data is persistent and/or sets of operation are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems - often, within the same application. This leads us to propose an integrated approach that permits applications that use virtual synchrony with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.

  9. Self-consistent klystron simulations

    SciTech Connect

    Carlsten, B.E.; Tallerico, P.J.

    1985-01-01

    A numerical analysis of large-signal klystron behavior based on general wave-particle interaction theory is presented. The computer code presented is tailored for the minimum amount of complexity needed in klystron simulation. The code includes self-consistent electron motion, space-charge fields, and intermediate and output fields. It also includes use of time periodicity to simplify the problem, accurate representation of the space-charge fields, accurate representation of the cavity standing-wave fields, and a sophisticated particle-pushing routine. In the paper, examples are given that show the effects of cavity detunings, of varying the magnetic field profile, of electron beam asymmetries from the gun, and of variations in external load impedance. 4 refs., 7 figs.

  10. Thermodynamically consistent continuum dislocation dynamics

    NASA Astrophysics Data System (ADS)

    Hochrainer, Thomas

    2016-03-01

    Dislocation based modeling of plasticity is one of the central challenges at the crossover of materials science and continuum mechanics. Developing a continuum theory of dislocations requires the solution of two long standing problems: (i) to represent dislocation kinematics in terms of a reasonable number of variables and (ii) to derive averaged descriptions of the dislocation dynamics (i.e. material laws) in terms of these variables. The kinematic problem (i) was recently solved through the introduction of continuum dislocation dynamics (CDD), which provides kinematically consistent evolution equations of dislocation alignment tensors, presuming a given average dislocation velocity (Hochrainer, T., 2015, Multipole expansion of continuum dislocations dynamics in terms of alignment tensors. Philos. Mag. 95 (12), 1321-1367). In the current paper we demonstrate how a free energy formulation may be used to solve the dynamic closure problem (ii) in CDD. We do so exemplarily for the lowest order CDD variant for curved dislocations in a single slip situation. In this case, a thermodynamically consistent average dislocation velocity is found to comprise five mesoscopic shear stress contributions. For a postulated free energy expression we identify among these stress contributions a back-stress term and a line-tension term, both of which have already been postulated for CDD. A new stress contribution occurs which is missing in earlier CDD models including the statistical continuum theory of straight parallel edge dislocations (Groma, I., Csikor, F.F., Zaiser, M., 2003. Spatial correlations and higher-order gradient terms in a continuum description of dislocation dynamics. Acta Mater. 51, 1271-1281). Furthermore, two entirely new stress contributions arise from the curvature of dislocations.

  11. Assessing Class-Wide Consistency and Randomness in Responses to True or False Questions Administered Online

    ERIC Educational Resources Information Center

    Pawl, Andrew; Teodorescu, Raluca E.; Peterson, Joseph D.

    2013-01-01

    We have developed simple data-mining algorithms to assess the consistency and the randomness of student responses to problems consisting of multiple true or false statements. In this paper we describe the algorithms and use them to analyze data from introductory physics courses. We investigate statements that emerge as outliers because the class…

  12. Depth consistency evaluation for error-pose detection

    NASA Astrophysics Data System (ADS)

    Jin, Sou-Young; Choi, Ho-Jin; Iraqi, Youssef

    2013-12-01

    With the development of depth sensors such as the Kinect, it is now possible to predict human body poses from a depth map without any manual labeling. The predicted poses can be used as meaningful features for many applications such as human action recognition. However, existing pose estimation algorithms are not perfect, which can seriously affect the performance of the applications that follow them. In this paper, we propose a novel method to detect erroneous poses. Human poses are captured by the Kinect SDK, which predicts body joints and connects them with straight lines to represent a pose. We observe that the depth gradient of pixels located on a body part is consistent when the body part is predicted correctly. With this observation, our algorithm examines the depth gradients of pixels on each body part. During the depth gradient processing, our algorithm also considers occlusions. Once a sudden change is detected in the depth values on a body part, we check whether the gradient is still consistent excluding the sudden-change region. We tested our algorithm on many human activities, and our experimental results show that it acceptably detects erroneous poses in real time.
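
    A small numpy sketch of the core idea, checking depth-gradient consistency along the line segment that represents one body part, is given below. The sampling density and the consistency threshold are illustrative assumptions, and the paper's occlusion handling is not reproduced.

    import numpy as np

    def sample_segment(p0, p1, n=50):
        """Integer pixel coordinates sampled along the segment from joint p0 to joint p1 (x, y order)."""
        t = np.linspace(0.0, 1.0, n)[:, None]
        pts = (1 - t) * np.asarray(p0, float) + t * np.asarray(p1, float)
        return pts.round().astype(int)

    def gradient_consistent(depth_map, joint_a, joint_b, max_std=15.0):
        """Flag a body part as suspicious when its depth gradient varies too much along the bone."""
        pts = sample_segment(joint_a, joint_b)
        depths = depth_map[pts[:, 1], pts[:, 0]]      # (x, y) joints -> row = y, column = x
        grad = np.diff(depths.astype(float))
        return float(np.std(grad)) <= max_std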

  13. Bayesian regression models outperform partial least squares methods for predicting milk components and technological properties using infrared spectral data.

    PubMed

    Ferragina, A; de los Campos, G; Vazquez, A I; Cecchinato, A; Bittante, G

    2015-11-01

    The aim of this study was to assess the performance of Bayesian models commonly used for genomic selection to predict "difficult-to-predict" dairy traits, such as milk fatty acid (FA) expressed as percentage of total fatty acids, and technological properties, such as fresh cheese yield and protein recovery, using Fourier-transform infrared (FTIR) spectral data. Our main hypothesis was that Bayesian models that can estimate shrinkage and perform variable selection may improve our ability to predict FA traits and technological traits above and beyond what can be achieved using the current calibration models (e.g., partial least squares, PLS). To this end, we assessed a series of Bayesian methods and compared their prediction performance with that of PLS. The comparison between models was done using the same sets of data (i.e., same samples, same variability, same spectral treatment) for each trait. Data consisted of 1,264 individual milk samples collected from Brown Swiss cows for which gas chromatographic FA composition, milk coagulation properties, and cheese-yield traits were available. For each sample, 2 spectra in the infrared region from 5,011 to 925 cm(-1) were available and averaged before data analysis. Three Bayesian models: Bayesian ridge regression (Bayes RR), Bayes A, and Bayes B, and 2 reference models: PLS and modified PLS (MPLS) procedures, were used to calibrate equations for each of the traits. The Bayesian models used were implemented in the R package BGLR (http://cran.r-project.org/web/packages/BGLR/index.html), whereas the PLS and MPLS were those implemented in the WinISI II software (Infrasoft International LLC, State College, PA). Prediction accuracy was estimated for each trait and model using 25 replicates of a training-testing validation procedure. Compared with PLS, which is currently the most widely used calibration method, MPLS and the 3 Bayesian methods showed significantly greater prediction accuracy. Accuracy increased in moving from

  14. Receiver Diversity Combining Using Evolutionary Algorithms in Rayleigh Fading Channel

    PubMed Central

    Akbari, Mohsen; Manesh, Mohsen Riahi

    2014-01-01

    In diversity combining at the receiver, the output signal-to-noise ratio (SNR) is often maximized by using maximal ratio combining (MRC), provided that the channel is perfectly estimated at the receiver. However, channel estimation is rarely perfect in practice, which degrades the system performance. In this paper, an imperialistic competitive algorithm (ICA) is proposed and compared with two other evolutionary algorithms, namely particle swarm optimization (PSO) and the genetic algorithm (GA), for diversity combining of signals travelling across imperfect channels. The proposed algorithm adjusts the combiner weights of the received signal components in such a way that maximizes the SNR and minimizes the bit error rate (BER). The results indicate that the proposed method eliminates the need for channel estimation and can outperform the conventional diversity combining methods. PMID:25045725
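
    To make the combining objective explicit, the sketch below computes the output SNR of a linear combiner and uses a naive random search over weights as a stand-in for the evolutionary algorithms (ICA, PSO, GA) compared in the paper; the search itself and the parameter values are illustrative assumptions, not the authors' method.

    import numpy as np

    def output_snr(w, h, noise_var=1.0):
        """Output SNR of a linear combiner with weights w for branch signals r_i = h_i*s + n_i."""
        return np.abs(np.vdot(w, h)) ** 2 / (noise_var * np.sum(np.abs(w) ** 2))

    def search_weights(h_est, n_iters=2000, seed=0):
        """Naive random search maximizing the SNR computed from a (possibly imperfect) channel estimate."""
        rng = np.random.default_rng(seed)
        best_w, best_snr = None, -np.inf
        for _ in range(n_iters):
            w = rng.standard_normal(len(h_est)) + 1j * rng.standard_normal(len(h_est))
            snr = output_snr(w, h_est)
            if snr > best_snr:
                best_w, best_snr = w, snr
        return best_w, best_snr

    # With perfect channel knowledge the MRC choice w = h is optimal; a search over the
    # weights is useful when they must be tuned under imperfect channel estimates.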

  15. Alternating minimization algorithm for speckle reduction with a shifting technique.

    PubMed

    Woo, Hyenkyun; Yun, Sangwoon

    2012-04-01

    Speckles (multiplicative noise) in synthetic aperture radar (SAR) make it difficult to interpret the observed image. Due to the edge-preserving feature of total variation (TV), variational models with TV regularization have attracted much interest in reducing speckles. Algorithms based on the augmented Lagrangian function have been proposed to efficiently solve speckle-reduction variational models with TV regularization. However, these algorithms require inner iterations or inverses involving the Laplacian operator at each iteration. In this paper, we adapt Tseng's alternating minimization algorithm with a shifting technique to efficiently remove the speckle without any inner iterations or inverses involving the Laplacian operator. The proposed method is very simple and highly parallelizable; therefore, it is very efficient to despeckle huge-size SAR images. Numerical results show that our proposed method outperforms the state-of-the-art algorithms for speckle-reduction variational models with a TV regularizer in terms of central-processing-unit time. PMID:22106149

  16. Linear antenna array optimization using flower pollination algorithm.

    PubMed

    Saxena, Prerna; Kothari, Ashwin

    2016-01-01

    The flower pollination algorithm (FPA) is a new nature-inspired evolutionary algorithm used to solve multi-objective optimization problems. The aim of this paper is to introduce FPA to the electromagnetics and antenna community for the optimization of linear antenna arrays. FPA is applied for the first time to linear arrays so as to obtain optimized antenna positions in order to achieve an array pattern with minimum side lobe level along with placement of deep nulls in desired directions. Various design examples are presented that illustrate the use of FPA for linear antenna array optimization, and subsequently the results are validated by benchmarking against results obtained using other state-of-the-art, nature-inspired evolutionary algorithms such as particle swarm optimization, ant colony optimization and cat swarm optimization. The results suggest that in most cases, FPA outperforms the other evolutionary algorithms and at times it yields a similar performance. PMID:27066339
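
    As a concrete picture of the fitness such an optimizer evaluates, the sketch below computes the array factor of a symmetric linear array (element positions in wavelengths) and its peak side-lobe level in dB. The uniform example positions and the main-lobe exclusion width are illustrative assumptions, not the paper's design cases.

    import numpy as np

    def array_factor(positions, u):
        """Array factor of a symmetric 2N-element linear array sampled at u = cos(theta)."""
        return 2.0 * np.sum(np.cos(2.0 * np.pi * np.outer(u, positions)), axis=1)

    def peak_sll_db(positions, main_lobe_width=0.1, n_samples=2001):
        """Peak side-lobe level in dB relative to the main beam, excluding |u| <= main_lobe_width."""
        u = np.linspace(-1.0, 1.0, n_samples)
        af = np.abs(array_factor(positions, u))
        af_db = 20.0 * np.log10(af / af.max() + 1e-12)
        return af_db[np.abs(u) > main_lobe_width].max()

    # Example: 10-element uniform array (positions of one symmetric half, in wavelengths).
    # An optimizer such as FPA would perturb these positions to minimize peak_sll_db.
    print(peak_sll_db(np.array([0.25, 0.75, 1.25, 1.75, 2.25])))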

  17. GRAVITATIONALLY CONSISTENT HALO CATALOGS AND MERGER TREES FOR PRECISION COSMOLOGY

    SciTech Connect

    Behroozi, Peter S.; Wechsler, Risa H.; Wu, Hao-Yi; Busha, Michael T.; Klypin, Anatoly A.; Primack, Joel R. E-mail: rwechsler@stanford.edu

    2013-01-20

    We present a new algorithm for generating merger trees and halo catalogs which explicitly ensures consistency of halo properties (mass, position, and velocity) across time steps. Our algorithm has demonstrated the ability to improve both the completeness (through detecting and inserting otherwise missing halos) and purity (through detecting and removing spurious objects) of both merger trees and halo catalogs. In addition, our method is able to robustly measure the self-consistency of halo finders; it is the first to directly measure the uncertainties in halo positions, halo velocities, and the halo mass function for a given halo finder based on consistency between snapshots in cosmological simulations. We use this algorithm to generate merger trees for two large simulations (Bolshoi and Consuelo) and evaluate two halo finders (ROCKSTAR and BDM). We find that both the ROCKSTAR and BDM halo finders track halos extremely well; in both, the number of halos which do not have physically consistent progenitors is at the 1%-2% level across all halo masses. Our code is publicly available at http://code.google.com/p/consistent-trees. Our trees and catalogs are publicly available at http://hipacc.ucsc.edu/Bolshoi/.

  18. A multilevel ant colony optimization algorithm for classical and isothermic DNA sequencing by hybridization with multiplicity information available.

    PubMed

    Kwarciak, Kamil; Radom, Marcin; Formanowicz, Piotr

    2016-04-01

    Classical sequencing by hybridization takes into account binary information about sequence composition: a given element from an oligonucleotide library is or is not a part of the target sequence. However, DNA chip technology has developed to the point where it is possible to obtain partial information about the multiplicity of each oligonucleotide the analyzed sequence consists of. Currently it is not possible to obtain exact data of this type, but even partial information should be very useful. Two realistic multiplicity information models are taken into consideration in this paper. The first one, called "one and many", assumes that it is possible to obtain information on whether a given oligonucleotide occurs in a reconstructed sequence once or more than once. According to the second model, called "one, two and many", one is able to learn from the biochemical experiment whether a given oligonucleotide is present in an analyzed sequence once, twice, or at least three times. An ant colony optimization algorithm has been implemented to verify the above models and to compare them with existing algorithms for sequencing by hybridization which utilize the additional information. The proposed algorithm solves the problem with any kind of hybridization errors. Computational experiment results confirm that using even partial information about multiplicity leads to increased quality of the reconstructed sequences. Moreover, they also show that the more precise model enables better solutions to be obtained and that the ant colony optimization algorithm outperforms the existing ones. Test data sets and the proposed ant colony optimization algorithm are available on: http://bioserver.cs.put.poznan.pl/download/ACO4mSBH.zip. PMID:26878124

  19. Self-consistent flattened isochrones

    NASA Astrophysics Data System (ADS)

    Binney, James

    2014-05-01

    We present a family of self-consistent axisymmetric stellar systems that have analytic distribution functions (DFs) of the form f(J), so they depend on three integrals of motion and have triaxial velocity ellipsoids. The models, which are generalizations of Hénon's isochrone sphere, have four dimensionless parameters, two determining the part of the DF that is even in Lz and two determining the odd part of the DF (which determines the azimuthal velocity distribution). Outside their cores, the velocity ellipsoids of all models tend to point to the model's centre, and we argue that this behaviour is generic, so near the symmetry axis of a flattened model, the long axis of the velocity ellipsoid is naturally aligned with the symmetry axis and not perpendicular to it as in many published dynamical models of well-studied galaxies. By varying one of the DF parameters, the intensity of rotation can be increased from zero up to a maximum value set by the requirement that the DF be non-negative. Since angle-action coordinates are easily computed for these models, they are ideally suited for perturbative treatments and stability analysis. They can also be used to choose initial conditions for an N-body model that starts in perfect equilibrium, and to model observations of early-type galaxies. The modelling technique introduced here is readily extended to different radial density profiles, more complex kinematics and multicomponent systems. A number of important technical issues surrounding the determination of the models' observable properties are explained in two appendices.

  20. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction.

    PubMed

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-11-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835

  1. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction

    PubMed Central

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835

  2. Adaptive image contrast enhancement algorithm for point-based rendering

    NASA Astrophysics Data System (ADS)

    Xu, Shaoping; Liu, Xiaoping P.

    2015-03-01

    Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware by utilizing traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform the traditional triangle-based rendering in speed when it is applied to highly complex soft tissue cutting models. Nevertheless, the PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms, and experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm can be utilized in (or compatible with) various variants of the conventional PBR algorithm.

  3. A new improved artificial bee colony algorithm for ship hull form optimization

    NASA Astrophysics Data System (ADS)

    Huang, Fuxin; Wang, Lijue; Yang, Chi

    2016-04-01

    The artificial bee colony (ABC) algorithm is a relatively new swarm intelligence-based optimization algorithm. Its simplicity of implementation, relatively few parameter settings and promising optimization capability make it widely used in different fields. However, it suffers from slow convergence due to its solution search equation. Here, a new solution search equation based on a combination of the elite solution pool and the block perturbation scheme is proposed to improve the performance of the algorithm. In addition, two different solution search equations are used by employed bees and onlooker bees to balance the exploration and exploitation of the algorithm. The developed algorithm is validated on a set of well-known numerical benchmark functions. It is then applied to optimize two ship hull forms for minimum resistance. The test results show that the proposed new improved ABC algorithm can outperform the ABC algorithm in most of the tested problems.
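
    To make the baseline explicit, the sketch below implements the standard ABC employed-bee search equation v_ij = x_ij + phi*(x_ij - x_kj) with greedy selection; this is the equation the paper replaces with an elite-pool/block-perturbation variant, which is not reproduced here. The sphere test function and all parameter values are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      def sphere(x):
          return float(np.sum(x ** 2))

      def employed_bee_step(pop, fitness, func):
          # One pass of the standard ABC employed-bee phase
          n, dim = pop.shape
          for i in range(n):
              k = rng.integers(n - 1)
              if k >= i:                    # pick a neighbour k != i
                  k += 1
              j = rng.integers(dim)         # perturb a single dimension
              phi = rng.uniform(-1.0, 1.0)
              cand = pop[i].copy()
              cand[j] = pop[i, j] + phi * (pop[i, j] - pop[k, j])
              f = func(cand)
              if f < fitness[i]:            # greedy selection
                  pop[i], fitness[i] = cand, f
          return pop, fitness

      pop = rng.uniform(-5, 5, size=(20, 4))
      fit = np.array([sphere(x) for x in pop])
      for _ in range(100):
          pop, fit = employed_bee_step(pop, fit, sphere)
      print(fit.min())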

  4. A novel community health worker tool outperforms WHO clinical staging for assessment of antiretroviral therapy eligibility in a resource-limited setting.

    PubMed

    Macpherson, Peter; Lalloo, David G; Thindwa, Deus; Webb, Emily L; Squire, S Bertel; Chipungu, Geoffrey A; Desmond, Nicola; Makombe, Simon D; Taegtmeyer, Miriam; Choko, Augustine T; Corbett, Elizabeth L

    2014-02-01

    The accuracy of a novel community health worker antiretroviral therapy eligibility assessment tool was examined in community members in Blantyre, Malawi. Nurses independently performed World Health Organization (WHO) staging and CD4 counts. One hundred ten (55.6%) of 198 HIV-positive participants had a CD4 count of <350 cells per cubic millimeter. The community health worker tool significantly outperformed WHO clinical staging in identifying CD4 count of <350 cells per cubic millimeter in terms of sensitivity (41% vs. 19%), positive predictive value (75% vs. 68%), negative predictive values (53% vs. 47%), and area under the receiver-operator curve (0.62 vs. 0.54; P = 0.017). Reliance on WHO staging is likely to result in missed and delayed antiretroviral therapy initiation. PMID:23846567

  5. A graph spectrum based geometric biclustering algorithm.

    PubMed

    Wang, Doris Z; Yan, Hong

    2013-01-21

    Biclustering is capable of performing simultaneous clustering on two dimensions of a data matrix and has many applications in pattern classification. For example, in microarray experiments, a subset of genes is co-expressed in a subset of conditions, and biclustering algorithms can be used to detect the coherent patterns in the data for further analysis of function. In this paper, we present a graph spectrum based geometric biclustering (GSGBC) algorithm. In the geometrical view, biclusters can be seen as different linear geometrical patterns in high dimensional spaces. Based on this, the modified Hough transform is used to find the Hough vector (HV) corresponding to sub-bicluster patterns in 2D spaces. A graph can be built regarding each HV as a node. The graph spectrum is utilized to identify the eigengroups in which the sub-biclusters are grouped naturally to produce larger biclusters. Through a comparative study, we find that the GSGBC achieves as good a result as GBC and outperforms other kinds of biclustering algorithms. Also, compared with the original geometrical biclustering algorithm, it reduces the computing time complexity significantly. We also show that biologically meaningful biclusters can be identified by our method from real microarray gene expression data. PMID:23079285

  6. Improving the algorithm of temporal relation propagation

    NASA Astrophysics Data System (ADS)

    Shen, Jifeng; Xu, Dan; Liu, Tongming

    2005-03-01

    In a military Multi Agent System, every agent needs to analyze the temporal relationships among tasks or combat behaviors, and it is very important to reflect the battlefield situation in time. The temporal relations among agents are usually very complex, and we model them with an interval algebra (IA) network. Therefore an efficient temporal reasoning algorithm is vital in a battle MAS model. The core of temporal reasoning is the path consistency algorithm, so an efficient path consistency algorithm is necessary. In this paper we used the Interval Matrix Calculus (IMC) method to represent the temporal relations, and optimized the path consistency algorithm by improving the efficiency of the propagation of temporal relations, based on Allen's path consistency algorithm.
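
    The propagation loop itself is easy to illustrate. The sketch below runs a naive path consistency pass, R_ij <- R_ij ∩ (R_ik ∘ R_kj), over the simple point algebra {<, =, >} instead of the full 13-relation interval algebra used in the paper, whose composition table is much larger; the example network and relation sets are assumptions for illustration only.

      from itertools import product

      FULL = frozenset("<=>")
      COMP = {('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): set("<=>"),
              ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
              ('>', '<'): set("<=>"), ('>', '='): {'>'}, ('>', '>'): {'>'}}

      def compose(r1, r2):
          out = set()
          for a, b in product(r1, r2):
              out |= COMP[(a, b)]
          return out

      def path_consistency(rel):
          # Naive PC-1 style propagation; rel[i][j] is a set of basic relations
          n = len(rel)
          changed = True
          while changed:
              changed = False
              for i in range(n):
                  for j in range(n):
                      for k in range(n):
                          new = rel[i][j] & compose(rel[i][k], rel[k][j])
                          if new != rel[i][j]:
                              rel[i][j] = new
                              changed = True
                              if not new:
                                  return rel, False   # inconsistent network
          return rel, True

      # Example: A < B and B < C; propagation should infer A < C
      n = 3
      rel = [[set(FULL) for _ in range(n)] for _ in range(n)]
      for i in range(n):
          rel[i][i] = {'='}
      rel[0][1], rel[1][0] = {'<'}, {'>'}
      rel[1][2], rel[2][1] = {'<'}, {'>'}
      rel, ok = path_consistency(rel)
      print(ok, rel[0][2])   # True, {'<'}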

  7. A multi-scale non-local means algorithm for image de-noising

    NASA Astrophysics Data System (ADS)

    Nercessian, Shahan; Panetta, Karen A.; Agaian, Sos S.

    2012-06-01

    A highly studied problem in image processing, and in the field of electrical engineering in general, is the recovery of a true signal from its noisy version. Images can be corrupted by noise during their acquisition or transmission stages. As noisy images are visually very poor in quality and complicate further processing stages of computer vision systems, it is imperative to develop algorithms which effectively remove noise in images. In practice, it is a difficult task to effectively remove the noise while simultaneously retaining the edge structures within the image. Accordingly, many de-noising algorithms have been proposed that attempt to intelligently smooth the image while still preserving its details. Recently, a non-local means (NLM) de-noising algorithm was introduced, which exploited the redundant nature of images to achieve image de-noising. The algorithm was shown to outperform current de-noising standards, including Gaussian filtering, anisotropic diffusion, total variation minimization, and multi-scale transform coefficient thresholding. However, the NLM algorithm was developed in the spatial domain and therefore does not leverage multi-scale transforms, which provide a framework in which signals can be better distinguished from noise. Accordingly, in this paper, a multi-scale NLM (MS-NLM) algorithm is proposed, which combines the advantages of the NLM algorithm and multi-scale image processing techniques. Experimental results via computer simulations illustrate that the MS-NLM algorithm outperforms the NLM, both visually and quantitatively.
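
    For reference, the sketch below is a naive single-scale NLM filter in the spatial domain, i.e. the baseline the MS-NLM extends into a multi-scale transform domain; patch size, search window and the filtering parameter h are illustrative choices, and the implementation is deliberately unoptimized.

      import numpy as np

      def nlm_denoise(img, patch=3, search=7, h=0.15):
          # Weighted average of similar patches within a local search window
          pr, sr = patch // 2, search // 2
          padded = np.pad(img, pr + sr, mode="reflect")
          out = np.zeros_like(img)
          H, W = img.shape
          for i in range(H):
              for j in range(W):
                  ci, cj = i + pr + sr, j + pr + sr
                  ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
                  weights, values = [], []
                  for di in range(-sr, sr + 1):
                      for dj in range(-sr, sr + 1):
                          ni, nj = ci + di, cj + dj
                          nb = padded[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                          d2 = np.mean((ref - nb) ** 2)
                          weights.append(np.exp(-d2 / h ** 2))
                          values.append(padded[ni, nj])
                  w = np.array(weights)
                  out[i, j] = np.dot(w, values) / w.sum()
          return out

      rng = np.random.default_rng(1)
      clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
      noisy = clean + 0.1 * rng.standard_normal(clean.shape)
      denoised = nlm_denoise(noisy)
      print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))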

  8. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm that maximizes coherence, built on existing SAR imaging algorithms. The basic idea of conventional SAR imaging is that the output signal attains the maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. The traditional imaging algorithm achieves the best focusing effect but introduces decoherence in the subsequent interferometric processing. In the algorithm proposed in this paper, the SAR echoes are focused with consistent imaging parameters. Although the SNR of the output signal is slightly reduced, coherence is largely preserved, and a high-quality interferogram is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446

  9. A Near-Optimal Distributed QoS Constrained Routing Algorithm for Multichannel Wireless Sensor Networks

    PubMed Central

    Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Yen, Hong-Hsu; Hsieh, Yu-Jen

    2013-01-01

    One of the important applications in Wireless Sensor Networks (WSNs) is video surveillance, which includes the tasks of video data processing and transmission. Processing and transmission of image and video data in WSNs has attracted a lot of attention in recent years; such networks are known as Wireless Visual Sensor Networks (WVSNs). WVSNs are distributed intelligent systems for collecting image or video data with unique performance, complexity, and quality of service challenges. WVSNs consist of a large number of battery-powered and resource constrained camera nodes. End-to-end delay is a very important Quality of Service (QoS) metric for video surveillance applications in WVSNs. How to meet the stringent delay QoS in resource constrained WVSNs is a challenging issue that requires novel distributed and collaborative routing strategies. This paper proposes a Near-Optimal Distributed QoS Constrained (NODQC) routing algorithm to achieve an end-to-end route with lower delay and higher throughput. A Lagrangian Relaxation (LR)-based routing metric that considers the “system perspective” and “user perspective” is proposed to determine the near-optimal routing paths that satisfy end-to-end delay constraints with high system throughput. The empirical results show that the NODQC routing algorithm outperforms others in terms of higher system throughput with lower average end-to-end delay and delay jitter. To the best of our knowledge, this is the first algorithm to show how to meet the delay QoS while achieving higher system throughput in stringently resource constrained WVSNs.

  10. Enhanced ATR algorithm for high resolution multi-band sonar imagery

    NASA Astrophysics Data System (ADS)

    Aridgides, Tom; Fernández, Manuel

    2008-04-01

    An improved automatic target recognition (ATR) processing string has been developed. The overall processing string consists of pre-processing, subimage adaptive clutter filtering (SACF), normalization, detection, data regularization, feature extraction, optimal subset feature selection, feature orthogonalization and classification processing blocks. A new improvement was made to the processing string, data regularization, which entails computing the input data mean, clipping the data to a multiple of its mean and scaling it, prior to feature extraction. The classified objects of 3 distinct strings are fused using the classification confidence values and their expansions as features, and using "summing" or log-likelihood-ratio-test (LLRT) based fusion rules. The utility of the overall processing strings and their fusion was demonstrated with new high-resolution three-frequency band sonar imagery. The ATR processing strings were individually tuned to the corresponding three-frequency band data, making use of the new processing improvement, data regularization, which resulted in a 3:1 reduction in false alarms. Two significant fusion algorithm improvements were made. First, a nonlinear 2nd order (Volterra) feature LLRT fusion algorithm was developed. Second, a repeated application of a subset Volterra feature selection / feature orthogonalization / LLRT fusion block was utilized. It was shown that cascaded Volterra feature LLRT fusion of the ATR processing strings outperforms baseline summing and single-stage Volterra feature LLRT algorithms, yielding significant improvements over the best single ATR processing string results, and providing the capability to correctly call the majority of targets while maintaining a very low false alarm rate.

  11. The Research of Solution to the Problems of Complex Task Scheduling Based on Self-adaptive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Li; He, Yongxiang; Xue, Haidong; Chen, Leichen

    Traditional genetic algorithms (GAs) suffer from premature convergence when dealing with scheduling problems. To adjust the crossover and mutation operators self-adaptively, this paper proposes a self-adaptive GA aimed at multitask scheduling optimization under limited resources. The experimental results show that the proposed algorithm outperforms the traditional GA in evolutionary ability when dealing with complex task scheduling optimization.
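
    The paper's exact adaptation rule is not reproduced in this abstract, so the sketch below uses the well-known fitness-dependent crossover/mutation probabilities of Srinivas and Patnaik as a representative example of self-adaptive operator control; all constants are illustrative.

      import numpy as np

      def adaptive_rates(f, f_avg, f_max, k1=1.0, k2=0.5, k3=1.0, k4=0.5):
          # Higher-fitness individuals get smaller pc/pm (they are protected);
          # below-average individuals get the full rates (more exploration).
          if f_max == f_avg:            # degenerate population
              return k3, k4
          if f >= f_avg:
              pc = k1 * (f_max - f) / (f_max - f_avg)
              pm = k2 * (f_max - f) / (f_max - f_avg)
          else:
              pc, pm = k3, k4
          return pc, pm

      fitness = np.array([0.2, 0.5, 0.9, 1.0])
      for f in fitness:
          print(f, adaptive_rates(f, fitness.mean(), fitness.max()))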

  12. Constraint satisfaction using a hybrid evolutionary hill-climbing algorithm that performs opportunistic arc and path revision

    SciTech Connect

    Bowen, J.; Dozier, G.

    1996-12-31

    This paper introduces a hybrid evolutionary hill-climbing algorithm that quickly solves Constraint Satisfaction Problems (CSPs). This hybrid uses opportunistic arc and path revision in an interleaved fashion to reduce the size of the search space and to realize when to quit if a CSP is based on an inconsistent constraint network. This hybrid outperforms a well known hill-climbing algorithm, the Iterative Descent Method, on a test suite of 750 randomly generated CSPs.

  13. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection.

    PubMed

    Doshi, Jimit; Erus, Guray; Ou, Yangming; Resnick, Susan M; Gur, Ruben C; Gur, Raquel E; Satterthwaite, Theodore D; Furth, Susan; Davatzikos, Christos

    2016-02-15

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328

  14. A novel bee swarm optimization algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Akbari, Reza; Mohammadi, Alireza; Ziarati, Koorush

    2010-10-01

    The optimization algorithms that are inspired by the intelligent behavior of honey bees are among the most recently introduced population based techniques. In this paper, a novel algorithm called bee swarm optimization, or BSO, and its two extensions for improving its performance are presented. The BSO is a population based optimization technique inspired by the foraging behavior of honey bees. The proposed approach provides different patterns which are used by the bees to adjust their flying trajectories. As the first extension, the BSO algorithm introduces different approaches such as repulsion factor and penalizing fitness (RP) to mitigate the stagnation problem. Second, to efficiently maintain the balance between exploration and exploitation, time-varying weights (TVW) are introduced into the BSO algorithm. The proposed algorithm (BSO) and its two extensions (BSO-RP and BSO-RPTVW) are compared with existing algorithms based on the intelligent behavior of honey bees on a set of well known numerical test functions. The experimental results show that the BSO algorithms are effective and robust, produce excellent results, and outperform the other algorithms investigated in this study.

  15. Three hypothesis algorithm with occlusion reasoning for multiple people tracking

    NASA Astrophysics Data System (ADS)

    Reta, Carolina; Altamirano, Leopoldo; Gonzalez, Jesus A.; Medina-Carnicer, Rafael

    2015-01-01

    This work proposes a detection-based tracking algorithm able to locate and keep the identity of multiple people, who may be occluded, in uncontrolled stationary environments. Our algorithm builds a tracking graph that models spatio-temporal relationships among attributes of interacting people to predict and resolve partial and total occlusions. When a total occlusion occurs, the algorithm generates various hypotheses about the location of the occluded person considering three cases: (a) the person keeps the same direction and speed, (b) the person follows the direction and speed of the occluder, and (c) the person remains motionless during occlusion. By analyzing the graph, our algorithm can detect trajectories produced by false alarms and estimate the location of missing or occluded people. Our algorithm performs acceptably under complex conditions, such as partial visibility of individuals getting inside or outside the scene, continuous interactions and occlusions among people, wrong or missing information on the detection of persons, as well as variation of the person's appearance due to illumination changes and background-clutter distracters. Our algorithm was evaluated on test sequences in the field of intelligent surveillance achieving an overall precision of 93%. Results show that our tracking algorithm outperforms even trajectory-based state-of-the-art algorithms.

  16. An Improved Physarum polycephalum Algorithm for the Shortest Path Problem

    PubMed Central

    Wang, Qing; Adamatzky, Andrew; Chan, Felix T. S.; Mahadevan, Sankaran

    2014-01-01

    Shortest path is among the classical problems of computer science. The problems are solved by hundreds of algorithms, silicon computing architectures and novel-substrate, unconventional computing devices. The acellular slime mould P. polycephalum is famous as a biological computing substrate due to its alleged ability to approximate the shortest path from its inoculation site to a source of nutrients. Several algorithms were designed based on properties of the slime mould. Many of the Physarum-inspired algorithms suffer from a low convergence speed. To accelerate the search for a solution and reduce the number of iterations, we combined an original model of a Physarum-inspired path solver with a new parameter, called energy. We undertook a series of computational experiments on approximating shortest paths in networks with different topologies, with the number of nodes varying from 15 to 2000. We found that the improved Physarum algorithm matches well with existing Physarum-inspired approaches yet outperforms them in the number of iterations executed and the total running time. We also compare our algorithm with other existing algorithms, including the ant colony optimization algorithm and the Dijkstra algorithm. PMID:24982960

  17. Generalized Pattern Search Algorithm for Peptide Structure Prediction

    PubMed Central

    Nicosia, Giuseppe; Stracquadanio, Giovanni

    2008-01-01

    Finding the near-native structure of a protein is one of the most important open problems in structural biology and biological physics. The problem becomes dramatically more difficult when a given protein has no regular secondary structure or it does not show a fold similar to structures already known. This situation occurs frequently when we need to predict the tertiary structure of small molecules, called peptides. In this research work, we propose a new ab initio algorithm, the generalized pattern search algorithm, based on the well-known class of Search-and-Poll algorithms. We performed an extensive set of simulations over a well-known set of 44 peptides to investigate the robustness and reliability of the proposed algorithm, and we compared the peptide conformation with a state-of-the-art algorithm for peptide structure prediction known as PEPstr. In particular, we tested the algorithm on the instances proposed by the originators of PEPstr, to validate the proposed algorithm; the experimental results confirm that the generalized pattern search algorithm outperforms PEPstr by 21.17% in terms of average root mean-square deviation, RMSD Cα. PMID:18487293
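
    The poll-and-contract loop at the heart of pattern search is easy to state. The sketch below is only a basic compass search on a toy 2-D function, not the paper's generalized pattern search (which adds a SEARCH step and operates on peptide conformations); the test function and parameters are assumptions.

      import numpy as np

      def compass_search(f, x0, step=1.0, tol=1e-6, max_evals=10_000):
          # Poll the 2n axis directions; move to the first improving point,
          # otherwise contract the step (mesh) size.
          x = np.asarray(x0, dtype=float)
          fx = f(x)
          evals = 0
          while step > tol and evals < max_evals:
              improved = False
              for d in range(x.size):
                  for sign in (+1.0, -1.0):
                      cand = x.copy()
                      cand[d] += sign * step
                      fc = f(cand)
                      evals += 1
                      if fc < fx:
                          x, fx, improved = cand, fc, True
                          break
                  if improved:
                      break
              if not improved:
                  step *= 0.5
          return x, fx

      quad = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2
      print(compass_search(quad, [0.0, 0.0]))   # approaches (3, -1)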

  18. A Hybrid Evolutionary Algorithm for Wheat Blending Problem

    PubMed Central

    Bonyadi, Mohammad Reza; Michalewicz, Zbigniew; Barone, Luigi

    2014-01-01

    This paper presents a hybrid evolutionary algorithm to deal with the wheat blending problem. The unique constraints of this problem make many existing algorithms fail: either they do not generate acceptable results or they are not able to complete optimization within the required time. The proposed algorithm starts with a filtering process that follows predefined rules to reduce the search space. Then the linear-relaxed version of the problem is solved using a standard linear programming algorithm. The result is used in conjunction with a solution generated by a heuristic method to generate an initial solution. After that, a hybrid of an evolutionary algorithm, a heuristic method, and a linear programming solver is used to improve the quality of the solution. A local search based posttuning method is also incorporated into the algorithm. The proposed algorithm has been tested on artificial test cases and also real data from past years. Results show that the algorithm is able to find quality results in all cases and outperforms the existing method in terms of both quality and speed. PMID:24707222

  19. An improved Physarum polycephalum algorithm for the shortest path problem.

    PubMed

    Zhang, Xiaoge; Wang, Qing; Adamatzky, Andrew; Chan, Felix T S; Mahadevan, Sankaran; Deng, Yong

    2014-01-01

    Shortest path is among the classical problems of computer science. The problems are solved by hundreds of algorithms, silicon computing architectures and novel-substrate, unconventional computing devices. The acellular slime mould P. polycephalum is famous as a biological computing substrate due to its alleged ability to approximate the shortest path from its inoculation site to a source of nutrients. Several algorithms were designed based on properties of the slime mould. Many of the Physarum-inspired algorithms suffer from a low convergence speed. To accelerate the search for a solution and reduce the number of iterations, we combined an original model of a Physarum-inspired path solver with a new parameter, called energy. We undertook a series of computational experiments on approximating shortest paths in networks with different topologies, with the number of nodes varying from 15 to 2000. We found that the improved Physarum algorithm matches well with existing Physarum-inspired approaches yet outperforms them in the number of iterations executed and the total running time. We also compare our algorithm with other existing algorithms, including the ant colony optimization algorithm and the Dijkstra algorithm. PMID:24982960

  20. Phonological and morphological consistency in the acquisition of vowel duration spelling in Dutch and German.

    PubMed

    Landerl, Karin; Reitsma, Pieter

    2005-12-01

    In Dutch, vowel duration spelling is phonologically consistent but morphologically inconsistent (e.g., paar-paren). In German, it is phonologically inconsistent but morphologically consistent (e.g., Paar-Paare). Contrasting the two orthographies allowed us to examine the role of phonological and morphological consistency in the acquisition of the same orthographic feature. Dutch and German children in Grades 2 to 4 spelled singular and plural word forms and in a second task identified the correct spelling of singular and plural forms of the same nonword. Dutch children were better in word spelling, but German children outperformed the Dutch children in nonword selection. Also, whereas German children performed on a similar level for singular and plural items, Dutch children showed a large discrepancy. The results indicate that children use phonological and morphological rules from an early age but that the developmental balance between the two sources of information is constrained by the specific orthography. PMID:15975590

  1. Surface-consistent matching filters for time-lapse processing

    NASA Astrophysics Data System (ADS)

    Al Mutlaq, Mahdi H.

    The problem of mismatch between repeated time-lapse seismic surveys remains a challenge, particularly for land acquisition. In this dissertation, we present a new algorithm, which is an extension of the surface-consistent model, and which minimizes the mismatch between surveys, hence improving repeatability. We introduce the concept of surface-consistent matching filters (SCMF) for processing time-lapse seismic data, where matching filters are convolutional filters that minimize the sum-squared error between two signals. Since, in the Fourier domain, a matching filter is the spectral ratio of the two signals, we extend the well known surface-consistent hypothesis such that the data term is a trace-by-trace spectral ratio of two datasets instead of only one (i.e. surface-consistent deconvolution). To avoid unstable division of spectra, we compute the spectral ratios in the time domain by first designing trace-sequential, least-squares matching filters, then Fourier transforming them. A subsequent least-squares solution then factors the trace-sequential matching filters into four operators: two surface-consistent (source and receiver), and two subsurface-consistent (offset and midpoint). We apply the algorithm to two datasets: a synthetic time-lapse model and field data from a CO2 monitoring site in Northern Alberta. In addition, two common time-lapse processing schemes (independent processing and simultaneous processing) are compared. We present a modification of the simultaneous processing scheme as a direct result of applying the new SCMF algorithm. The results of applying the SCMF together with the new modified simultaneous processing flow reveal the potential benefit of the method; however, some challenges remain, specifically in the presence of random noise.
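
    The trace-by-trace building block of the method is a least-squares matching (shaping) filter between two traces; the SCMF algorithm then Fourier-transforms such filters and factors them surface-consistently, which is not shown here. In the sketch below the trace length, filter length and synthetic wavelet are illustrative assumptions.

      import numpy as np

      def matching_filter(x, y, nfilt=7):
          # Least-squares filter f such that conv(x, f) (centred) approximates y
          n = len(x)
          lag = nfilt // 2
          A = np.zeros((n, nfilt))
          for k in range(nfilt):
              shift = k - lag
              if shift >= 0:
                  A[shift:, k] = x[:n - shift]
              else:
                  A[:n + shift, k] = x[-shift:]
          f, *_ = np.linalg.lstsq(A, y, rcond=None)
          return f

      rng = np.random.default_rng(2)
      base = rng.standard_normal(200)
      wavelet = np.array([0.2, 1.0, -0.3])
      monitor = np.convolve(base, wavelet, mode="same")
      f_hat = matching_filter(base, monitor)
      print(np.round(f_hat, 2))   # recovers the wavelet in the central taps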

  2. Algorithm for navigated ESS.

    PubMed

    Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L

    2013-12-01

    ENT navigation has given new opportunities in performing Endoscopic Sinus Surgery (ESS) and improving the surgical outcome of patients' treatment. ESS assisted by a navigation system could be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that the NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and thus not used on a daily basis. This paper presents an algorithm for use of a navigation system for basic ESS in the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system. They are as follows: 1) nasal vestibule unit, 2) OMC unit, 3) anterior ethmoid unit, 4) posterior ethmoid unit, and 5) sphenoid unit. Each unit has a shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining one of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically. PMID:24260766

  3. Service Discovery Framework Supported by EM Algorithm and Bayesian Classifier

    NASA Astrophysics Data System (ADS)

    Peng, Yanbin

    Service-oriented computing has become a mainstream research field nowadays. Meanwhile, machine learning is a promising AI technology which can enhance the performance of traditional algorithms. Therefore, aiming to solve the service discovery problem, this paper introduces a Bayesian classifier into a web service discovery framework, which can improve service querying speed. In this framework, the services in the service library become the training set of the Bayesian classifier, and a service query becomes a test sample. The service matchmaking process can then be executed within the related service class, which contains fewer services and thus saves time. Because the class of each service in the training set is unknown, the EM algorithm is used to estimate the prior probabilities and likelihood functions. Experimental results show that the method supported by the EM algorithm and Bayesian classifier outperforms other methods in time complexity.
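
    The abstract does not give the likelihood form, so the sketch below uses a two-component 1-D Gaussian mixture as a stand-in: EM estimates the class priors and likelihood parameters from unlabeled "services", and Bayes' rule then assigns a "query" to its most probable class so that matchmaking can be restricted to that class. All data and parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(3)

      def em_gmm_1d(x, n_iter=100):
          # EM for a 2-component Gaussian mixture (priors, means, std devs)
          mu = np.array([x.min(), x.max()])
          sigma = np.array([x.std(), x.std()])
          prior = np.array([0.5, 0.5])
          for _ in range(n_iter):
              lik = np.stack([prior[k] / (sigma[k] * np.sqrt(2 * np.pi)) *
                              np.exp(-(x - mu[k]) ** 2 / (2 * sigma[k] ** 2))
                              for k in range(2)])
              resp = lik / lik.sum(axis=0)          # E-step: responsibilities
              nk = resp.sum(axis=1)                 # M-step: update parameters
              prior = nk / len(x)
              mu = (resp @ x) / nk
              sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
          return prior, mu, sigma

      def classify(q, prior, mu, sigma):
          # Bayes rule: pick the class with the largest posterior
          post = [prior[k] * np.exp(-(q - mu[k]) ** 2 / (2 * sigma[k] ** 2)) / sigma[k]
                  for k in range(2)]
          return int(np.argmax(post))

      data = np.concatenate([rng.normal(0, 1, 300), rng.normal(6, 1, 200)])
      prior, mu, sigma = em_gmm_1d(data)
      print(np.round(prior, 2), np.round(mu, 2), classify(5.5, prior, mu, sigma))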

  4. New validation algorithm for data association in SLAM.

    PubMed

    Guerra, Edmundo; Munguia, Rodrigo; Bolea, Yolanda; Grau, Antoni

    2013-09-01

    In this work, a novel data validation algorithm for a single-camera SLAM system is introduced. A 6-degree-of-freedom monocular SLAM method based on the delayed inverse-depth (DI-D) feature initialization is used as a benchmark. This SLAM methodology has been improved with the introduction of the proposed data association batch validation technique, the highest order hypothesis compatibility test, HOHCT. This new algorithm is based on the evaluation of statistically compatible hypotheses, and a search algorithm designed to exploit the characteristics of the delayed inverse-depth technique. In order to show the capabilities of the proposed technique, experimental tests have been carried out and compared with classical methods. The proposed technique outperformed the classical approaches. PMID:23701896

  5. LAHS: A novel harmony search algorithm based on learning automata

    NASA Astrophysics Data System (ADS)

    Enayatifar, Rasul; Yousefi, Moslem; Abdullah, Abdul Hanan; Darus, Amer Nordin

    2013-12-01

    This study presents a learning automata-based harmony search (LAHS) for unconstrained optimization of continuous problems. The harmony search (HS) algorithm performance strongly depends on the fine tuning of its parameters, including the harmony consideration rate (HMCR), pitch adjustment rate (PAR) and bandwidth (bw). Inspired by the spur-in-time responses in the musical improvisation process, learning capabilities are employed in the HS to select these parameters based on spontaneous reactions. An extensive numerical investigation is conducted on several well-known test functions, and the results are compared with the HS algorithm and its prominent variants, including the improved harmony search (IHS), global-best harmony search (GHS) and self-adaptive global-best harmony search (SGHS). The numerical results indicate that the LAHS is more efficient in finding optimum solutions and outperforms the existing HS algorithm variants.
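
    One improvisation step of the basic HS is sketched below with fixed HMCR, PAR and bw; in LAHS these three parameters would instead be selected adaptively by learning automata, which is not reproduced here. The sphere objective, memory size and bounds are illustrative.

      import numpy as np

      rng = np.random.default_rng(4)

      def improvise(memory, bounds, hmcr=0.9, par=0.3, bw=0.05):
          # Each variable: harmony-memory consideration (prob. HMCR), possibly
          # pitch-adjusted (prob. PAR, bandwidth bw), otherwise random selection.
          hms, dim = memory.shape
          new = np.empty(dim)
          for j in range(dim):
              lo, hi = bounds[j]
              if rng.random() < hmcr:
                  new[j] = memory[rng.integers(hms), j]
                  if rng.random() < par:
                      new[j] += rng.uniform(-1, 1) * bw * (hi - lo)
              else:
                  new[j] = rng.uniform(lo, hi)
              new[j] = np.clip(new[j], lo, hi)
          return new

      sphere = lambda x: float(np.sum(x ** 2))
      bounds = [(-5.0, 5.0)] * 4
      memory = rng.uniform(-5, 5, size=(10, 4))
      fit = np.array([sphere(h) for h in memory])
      for _ in range(2000):
          cand = improvise(memory, bounds)
          worst = int(np.argmax(fit))
          if sphere(cand) < fit[worst]:      # replace the worst harmony
              memory[worst], fit[worst] = cand, sphere(cand)
      print(fit.min())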

  6. Study of genetic direct search algorithms for function optimization

    NASA Technical Reports Server (NTRS)

    Zeigler, B. P.

    1974-01-01

    The results are presented of a study to determine the performance of genetic direct search algorithms in solving function optimization problems arising in the optimal and adaptive control areas. The findings indicate that: (1) genetic algorithms can outperform standard algorithms in multimodal and/or noisy optimization situations, but suffer from lack of gradient exploitation facilities when gradient information can be utilized to guide the search. (2) For large populations, or low dimensional function spaces, mutation is a sufficient operator. However for small populations or high dimensional functions, crossover applied in about equal frequency with mutation is an optimum combination. (3) Complexity, in terms of storage space and running time, is significantly increased when population size is increased or the inversion operator, or the second level adaptation routine is added to the basic structure.

  7. The index-based subgraph matching algorithm (ISMA): fast subgraph enumeration in large networks using optimized search trees.

    PubMed

    Demeyer, Sofie; Michoel, Tom; Fostier, Jan; Audenaert, Pieter; Pickavet, Mario; Demeester, Piet

    2013-01-01

    Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are investigated. In order to achieve this, we developed a number of data structures and maximally exploited symmetry characteristics of the subgraph. We compared ISMA to a naive recursive tree-based algorithm and to a number of well-known subgraph matching algorithms. Our algorithm outperforms the other algorithms, especially on large networks and with large query subgraphs. An implementation of ISMA in Java is freely available at http://sourceforge.net/projects/isma/. PMID:23620730

  8. The Index-Based Subgraph Matching Algorithm (ISMA): Fast Subgraph Enumeration in Large Networks Using Optimized Search Trees

    PubMed Central

    Demeyer, Sofie; Michoel, Tom; Fostier, Jan; Audenaert, Pieter; Pickavet, Mario; Demeester, Piet

    2013-01-01

    Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are investigated. In order to achieve this, we developed a number of data structures and maximally exploited symmetry characteristics of the subgraph. We compared ISMA to a naive recursive tree-based algorithm and to a number of well-known subgraph matching algorithms. Our algorithm outperforms the other algorithms, especially on large networks and with large query subgraphs. An implementation of ISMA in Java is freely available at http://sourceforge.net/projects/isma/. PMID:23620730

  9. Adaptive optics image deconvolution based on a modified Richardson-Lucy algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Bo; Geng, Ze-xun; Yan, Xiao-dong; Yang, Yang; Sui, Xue-lian; Zhao, Zhen-lei

    2007-12-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, the correction is often only partial, and a deconvolution is required for reaching the diffraction limit. The Richardson-Lucy (R-L) algorithm is the technique most widely used for AO image deconvolution, but the standard R-L algorithm (SRLA) often suffers from speckling, wraparound artifacts, and noise. A Modified R-L Algorithm (MRLA) for AO image deconvolution is presented. This novel algorithm applies Magain's correct sampling approach and incorporates noise statistics into the standard R-L algorithm. An alternating iterative method is applied to estimate the PSF and the object in the novel algorithm. Comparative experiments on indoor data and AO images are carried out with the SRLA and the MRLA in this paper. Experimental results show that this novel MRLA outperforms the SRLA.
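
    For orientation, the sketch below is the standard R-L multiplicative update (the SRLA baseline) implemented with FFT-based circular convolution; the MRLA's correct-sampling scheme, noise statistics and alternating PSF/object estimation are not reproduced. The toy object and Gaussian PSF are illustrative.

      import numpy as np
      from numpy.fft import fft2, ifft2

      def psf_otf(psf, shape):
          # Zero-pad the PSF to the image shape and centre it for FFT convolution
          padded = np.zeros(shape)
          h, w = psf.shape
          padded[:h, :w] = psf
          return fft2(np.roll(padded, (-(h // 2), -(w // 2)), axis=(0, 1)))

      def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
          H = psf_otf(psf, image.shape)
          Hc = np.conj(H)
          est = np.full_like(image, image.mean())
          for _ in range(n_iter):
              blurred = np.real(ifft2(H * fft2(est)))
              ratio = image / np.maximum(blurred, eps)
              est *= np.real(ifft2(Hc * fft2(ratio)))   # multiplicative update
          return est

      obj = np.zeros((64, 64)); obj[30:34, 30:34] = 1.0
      yy, xx = np.mgrid[-7:8, -7:8]
      psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 2.0 ** 2)); psf /= psf.sum()
      blurred = np.real(ifft2(psf_otf(psf, obj.shape) * fft2(obj)))
      restored = richardson_lucy(blurred, psf)
      print(np.abs(blurred - obj).mean(), np.abs(restored - obj).mean())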

  10. Voronoi particle merging algorithm for PIC codes

    NASA Astrophysics Data System (ADS)

    Luu, Phuc T.; Tückmantel, T.; Pukhov, A.

    2016-05-01

    We present a new particle-merging algorithm for the particle-in-cell method. Based on the concept of the Voronoi diagram, the algorithm partitions the phase space into smaller subsets, which consist of only particles that are in close proximity in the phase space to each other. We show the performance of our algorithm in the case of the two-stream instability and the magnetic shower.

  11. Applications of genetic algorithms and neural networks to interatomic potentials

    NASA Astrophysics Data System (ADS)

    Hobday, Steven; Smith, Roger; BelBruno, Joe

    1999-06-01

    Applications of two modern artificial intelligence (AI) techniques, genetic algorithms (GA) and neural networks (NN), to computer simulations are reported. It is shown that GAs are very useful tools for determining the minimum energy structures of clusters of atoms described by interatomic potential functions and generally outperform other optimisation methods for this task. A number of applications are given, including covalent and close packed structures of single or multi-component atomic species. It is also shown that (many body) interatomic potential functions for multi-component systems can be derived by training a specially constructed NN on a variety of structural data.

  12. Adult Cleaner Wrasse Outperform Capuchin Monkeys, Chimpanzees and Orang-utans in a Complex Foraging Task Derived from Cleaner – Client Reef Fish Cooperation

    PubMed Central

    Proctor, Darby; Essler, Jennifer; Pinto, Ana I.; Wismer, Sharon; Stoinski, Tara; Brosnan, Sarah F.; Bshary, Redouan

    2012-01-01

    The insight that animals' cognitive abilities are linked to their evolutionary history, and hence their ecology, provides the framework for the comparative approach. Despite primates' renowned dietary complexity and social cognition, including cooperative abilities, we here demonstrate that cleaner wrasse outperform three primate species, capuchin monkeys, chimpanzees and orang-utans, in a foraging task involving a choice between two actions, both of which yield identical immediate rewards, but only one of which yields an additional delayed reward. The foraging task decisions involve partner choice in cleaners: they must service visiting client reef fish before resident clients to access both; otherwise the former switch to a different cleaner. Wild-caught adult, but not juvenile, cleaners learned to solve the task quickly and relearned the task when it was reversed. The majority of primates failed to perform above chance after 100 trials, which is in sharp contrast to previous studies showing that primates easily learn to choose an action that yields immediate double rewards compared to an alternative action. In conclusion, the adult cleaners' ability to choose a superior action with initially neutral consequences is likely due to repeated exposure in nature, which leads to specific learned optimal foraging decision rules. PMID:23185293

  13. PCA-LBG-based algorithms for VQ codebook generation

    NASA Astrophysics Data System (ADS)

    Tsai, Jinn-Tsong; Yang, Po-Yuan

    2015-04-01

    Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector of each group. The LBG algorithm then refines the codebook, starting from the initial codebook formed from the vectors selected by the PCA. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of variables that are not linearly correlated. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithm is expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results compared to existing methods reported in the literature.
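
    A sketch of the PCA-LBG-Centroid variant is given below: training vectors are grouped by their projection onto the first principal component, the group centroids form the initial codebook, and standard LBG (k-means style) iterations refine it. The random data, codebook size and iteration count are illustrative; the Median and Random variants differ only in how the group representative is picked.

      import numpy as np

      def pca_lbg_centroid(data, n_codes=8, n_iter=20):
          # First principal component of the centred training set
          centred = data - data.mean(axis=0)
          _, _, vt = np.linalg.svd(centred, full_matrices=False)
          proj = centred @ vt[0]
          # Split the projected values into n_codes groups; centroids seed the codebook
          groups = np.array_split(np.argsort(proj), n_codes)
          codebook = np.array([data[g].mean(axis=0) for g in groups])
          # LBG refinement: nearest-codeword assignment, then centroid update
          for _ in range(n_iter):
              d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
              assign = d.argmin(axis=1)
              for c in range(n_codes):
                  members = data[assign == c]
                  if len(members):
                      codebook[c] = members.mean(axis=0)
          return codebook

      rng = np.random.default_rng(6)
      vectors = rng.standard_normal((500, 4))
      print(pca_lbg_centroid(vectors).shape)   # (8, 4)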

  14. Another hybrid conjugate gradient algorithm for unconstrained optimization

    NASA Astrophysics Data System (ADS)

    Andrei, Neculai

    2008-02-01

    Another hybrid conjugate gradient algorithm is subject to analysis. The parameter $\beta_k$ is computed as a convex combination of $\beta_k^{HS}$ (Hestenes-Stiefel) and $\beta_k^{DY}$ (Dai-Yuan), i.e. $\beta_k^{C} = (1-\theta_k)\beta_k^{HS} + \theta_k\beta_k^{DY}$. The parameter $\theta_k$ in the convex combination is computed in such a way that the direction corresponding to the conjugate gradient algorithm is the Newton direction and the pair $(s_k, y_k)$ satisfies the quasi-Newton equation $\nabla^2 f(x_{k+1}) s_k = y_k$, where $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$. The algorithm uses the standard Wolfe line search conditions. Numerical comparisons with conjugate gradient algorithms show that this hybrid computational scheme outperforms the Hestenes-Stiefel and the Dai-Yuan conjugate gradient algorithms as well as the hybrid conjugate gradient algorithms of Dai and Yuan. A set of 750 unconstrained optimization problems is used, some of them from the CUTE library.
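
    Once $\theta_k$ is known, the combined parameter is straightforward to evaluate. The sketch below computes $\beta_k^{HS}$, $\beta_k^{DY}$ and their convex combination for given gradients and previous search direction; $\theta$ is simply passed in, because the paper's closed-form choice of $\theta_k$ (from the Newton-direction/secant condition) is not reproduced here, and the toy vectors are illustrative.

      import numpy as np

      def hybrid_beta(g_new, g_old, d_old, theta):
          # beta_C = (1 - theta) * beta_HS + theta * beta_DY
          y = g_new - g_old
          denom = float(d_old @ y)
          beta_hs = float(g_new @ y) / denom
          beta_dy = float(g_new @ g_new) / denom
          return (1.0 - theta) * beta_hs + theta * beta_dy

      g_old = np.array([1.0, -2.0, 0.5])
      g_new = np.array([0.4, -1.1, 0.2])
      d_old = -g_old                      # previous (steepest-descent-like) direction
      print(hybrid_beta(g_new, g_old, d_old, theta=0.3))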

  15. A Bayesian algorithm for detecting differentially expressed proteins and its application in breast cancer research

    PubMed Central

    Santra, Tapesh; Delatola, Eleni Ioanna

    2016-01-01

    Presence of considerable noise and missing data points make analysis of mass-spectrometry (MS) based proteomic data a challenging task. The missing values in MS data are caused by the inability of MS machines to reliably detect proteins whose abundances fall below the detection limit. We developed a Bayesian algorithm that exploits this knowledge and uses missing data points as a complementary source of information to the observed protein intensities in order to find differentially expressed proteins by analysing MS based proteomic data. We compared its accuracy with many other methods using several simulated datasets. It consistently outperformed other methods. We then used it to analyse proteomic screens of a breast cancer (BC) patient cohort. It revealed large differences between the proteomic landscapes of triple negative and Luminal A, which are the most and least aggressive types of BC. Unexpectedly, majority of these differences could be attributed to the direct transcriptional activity of only seven transcription factors some of which are known to be inactive in triple negative BC. We also identified two new proteins which significantly correlated with the survival of BC patients, and therefore may have potential diagnostic/prognostic values. PMID:27444576

  16. A Bayesian algorithm for detecting differentially expressed proteins and its application in breast cancer research

    NASA Astrophysics Data System (ADS)

    Santra, Tapesh; Delatola, Eleni Ioanna

    2016-07-01

    Presence of considerable noise and missing data points make analysis of mass-spectrometry (MS) based proteomic data a challenging task. The missing values in MS data are caused by the inability of MS machines to reliably detect proteins whose abundances fall below the detection limit. We developed a Bayesian algorithm that exploits this knowledge and uses missing data points as a complementary source of information to the observed protein intensities in order to find differentially expressed proteins by analysing MS based proteomic data. We compared its accuracy with many other methods using several simulated datasets. It consistently outperformed other methods. We then used it to analyse proteomic screens of a breast cancer (BC) patient cohort. It revealed large differences between the proteomic landscapes of triple negative and Luminal A, which are the most and least aggressive types of BC. Unexpectedly, majority of these differences could be attributed to the direct transcriptional activity of only seven transcription factors some of which are known to be inactive in triple negative BC. We also identified two new proteins which significantly correlated with the survival of BC patients, and therefore may have potential diagnostic/prognostic values.

  17. A Bayesian algorithm for detecting differentially expressed proteins and its application in breast cancer research.

    PubMed

    Santra, Tapesh; Delatola, Eleni Ioanna

    2016-01-01

    Presence of considerable noise and missing data points make analysis of mass-spectrometry (MS) based proteomic data a challenging task. The missing values in MS data are caused by the inability of MS machines to reliably detect proteins whose abundances fall below the detection limit. We developed a Bayesian algorithm that exploits this knowledge and uses missing data points as a complementary source of information to the observed protein intensities in order to find differentially expressed proteins by analysing MS based proteomic data. We compared its accuracy with many other methods using several simulated datasets. It consistently outperformed other methods. We then used it to analyse proteomic screens of a breast cancer (BC) patient cohort. It revealed large differences between the proteomic landscapes of triple negative and Luminal A, which are the most and least aggressive types of BC. Unexpectedly, majority of these differences could be attributed to the direct transcriptional activity of only seven transcription factors some of which are known to be inactive in triple negative BC. We also identified two new proteins which significantly correlated with the survival of BC patients, and therefore may have potential diagnostic/prognostic values. PMID:27444576

  18. Stability of Bareiss algorithm

    NASA Astrophysics Data System (ADS)

    Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.

    1991-12-01

    In this paper, we present a numerical stability analysis of Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare Bareiss algorithm with Levinson algorithm and conclude that the former has superior numerical properties.

  19. High-performance speech recognition using consistency modeling

    NASA Astrophysics Data System (ADS)

    Digilakis, Vassilios; Monaco, Peter; Murveit, Hy; Weintraub, Mitchel

    1994-03-01

    The goal of this project conducted by SRI International (SRI) is to develop consistency modeling technology. Consistency modeling aims to reduce the number of improper independence assumptions used in traditional speech-recognition algorithms so that the resulting speech-recognition hypotheses are more self-consistent and, therefore, more accurate. Consistency is achieved by conditioning HMM output distributions on state and observation histories, P(x | s, H). The technical objective of the project is to find the proper form of the probability distribution, P; the proper history vector, H; the proper feature vector, x; and to develop the infrastructure (e.g. efficient estimation and search techniques) so that consistency modeling can be effectively used. During the first year of this effort, SRI focused on developing the appropriate base technologies for consistency modeling. We developed genonic hidden Markov model (HMM) technology, our choice for P above, and Progressive Search technology for HMM systems, which allows us to develop and use complex HMM formulations in an efficient manner. Papers describing these two techniques are included in the appendix of this report and are briefly summarized below. This report also describes other accomplishments of Year 1, including the initial exploitation of discrete and continuous consistency modeling and the development of a scheme for efficiently computing Gaussian probabilities.

  20. [Psychometric properties of a scale: internal consistency].

    PubMed

    Campo-Arias, Adalberto; Oviedo, Heidi C

    2008-01-01

    Internal consistency reliability is the degree of correlation between a scale's items. Internal consistency is calculated by Kuder-Richardson's formula 20 for dichotomous choices and Cronbach's alpha for polytomous items. Internal consistency between 0.70 and 0.90 is acceptable. 5-25 participants are needed for each item when computing the internal consistency of a twenty-item scale. Internal consistency varies according to the population, so it should be reported every time the scale is used. PMID:19360231
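
    A minimal computation of Cronbach's alpha from an (n respondents x k items) score matrix is sketched below; for dichotomous 0/1 items essentially the same formula gives Kuder-Richardson's formula 20. The simulated item scores are illustrative.

      import numpy as np

      def cronbach_alpha(items):
          # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_var = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var / total_var)

      rng = np.random.default_rng(7)
      ability = rng.standard_normal(200)
      scores = np.column_stack([ability + 1.5 * rng.standard_normal(200)
                                for _ in range(10)])
      print(round(cronbach_alpha(scores), 2))   # roughly 0.8, in the 0.70-0.90 band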

  1. Quality and Consistency of the NASA Ocean Color Data Record

    NASA Technical Reports Server (NTRS)

    Franz, Bryan A.

    2012-01-01

    The NASA Ocean Biology Processing Group (OBPG) recently reprocessed the multimission ocean color time-series from SeaWiFS, MODIS-Aqua, and MODIS-Terra using common algorithms and improved instrument calibration knowledge. Here we present an analysis of the quality and consistency of the resulting ocean color retrievals, including spectral water-leaving reflectance, chlorophyll a concentration, and diffuse attenuation. Statistical analysis of satellite retrievals relative to in situ measurements will be presented for each sensor, as well as an assessment of consistency in the global time-series for the overlapping periods of the missions. Results will show that the satellite retrievals are in good agreement with in situ measurements, and that the sensor ocean color data records are highly consistent over the common mission lifespan for the global deep oceans, but with degraded agreement in higher productivity, higher complexity coastal regions.

  2. Linear Multigrid Techniques in Self-consistent Electronic Structure Calculations

    SciTech Connect

    Fattebert, J-L

    2000-05-23

    Ab initio DFT electronic structure calculations involve an iterative process to solve the Kohn-Sham equations for a Hamiltonian depending on the electronic density. We discretize these equations on a grid by finite differences. Trial eigenfunctions are improved at each step of the algorithm using multigrid techniques to efficiently reduce the error at all length scales, until self-consistency is achieved. In this paper we focus on an iterative eigensolver based on the idea of inexact inverse iteration, using multigrid as a preconditioner. We also discuss how this technique can be used for electrons described by general non-orthogonal wave functions, and how that leads to a linear scaling with the system size for the computational cost of the most expensive parts of the algorithm.

  3. Chinese Tallow Trees (Triadica sebifera) from the Invasive Range Outperform Those from the Native Range with an Active Soil Community or Phosphorus Fertilization

    PubMed Central

    Zhang, Ling; Zhang, Yaojun; Wang, Hong; Zou, Jianwen; Siemann, Evan

    2013-01-01

    Two mechanisms that have been proposed to explain success of invasive plants are unusual biotic interactions, such as enemy release or enhanced mutualisms, and increased resource availability. However, while these mechanisms are usually considered separately, both may be involved in successful invasions. Biotic interactions may be positive or negative and may interact with nutritional resources in determining invasion success. In addition, the effects of different nutrients on invasions may vary. Finally, genetic variation in traits between populations located in introduced versus native ranges may be important for biotic interactions and/or resource use. Here, we investigated the roles of soil biota, resource availability, and plant genetic variation using seedlings of Triadica sebifera in an experiment in the native range (China). We manipulated nitrogen (control or 4 g/m2), phosphorus (control or 0.5 g/m2), soil biota (untreated or sterilized field soil), and plant origin (4 populations from the invasive range, 4 populations from the native range) in a full factorial experiment. Phosphorus addition increased root, stem, and leaf masses. Leaf mass and height growth depended on population origin and soil sterilization. Invasive populations had higher leaf mass and growth rates than native populations did in fresh soil but they had lower, comparable leaf mass and growth rates in sterilized soil. Invasive populations had higher growth rates with phosphorus addition but native ones did not. Soil sterilization decreased specific leaf area in both native and exotic populations. Negative effects of soil sterilization suggest that soil pathogens may not be as important as soil mutualists for T. sebifera performance. Moreover, interactive effects of sterilization and origin suggest that invasive T. sebifera may have evolved more beneficial relationships with the soil biota. Overall, seedlings from the invasive range outperformed those from the native range, however, an

  4. Ultimate failure of the Lévy Foraging Hypothesis: Two-scale searching strategies outperform scale-free ones even when prey are scarce and cryptic.

    PubMed

    Benhamou, Simon; Collet, Julien

    2015-12-21

    The "Lévy Foraging Hypothesis" promotes Lévy walk (LW) as the best strategy to forage for patchily but unpredictably located prey. This strategy mixes extensive and intensive searching phases in a mostly cue-free way through strange, scale-free kinetics. It is however less efficient than a cue-driven two-scale Composite Brownian walk (CBW) when the resources encountered are systematically detected. Nevertheless, it could be assumed that the intrinsic capacity of LW to trigger cue-free intensive searching at random locations might be advantageous when resources are not only scarcely encountered but also so cryptic that the probability to detect those encountered during movement is low. Surprisingly, this situation, which should be quite common in natural environments, has almost never been studied. Only a few studies have considered "saltatory" foragers, which are fully "blind" while moving and thus detect prey only during scanning pauses, but none of them compared the efficiency of LW vs. CBW in this context or in less extreme contexts where the detection probability during movement is not null but very low. In a study based on computer simulations, we filled the bridge between the concepts of "pure continuous" and "pure saltatory" foraging by considering that the probability to detect resources encountered while moving may range from 0 to 1. We showed that regularly stopping to scan the environment can indeed improve efficiency, but only at very low detection probabilities. Furthermore, the LW is then systematically outperformed by a mixed cue-driven/internally-driven CBW. It is thus more likely that evolution tends to favour strategies that rely on environmental feedbacks rather than on strange kinetics. PMID:26463680

  5. Chinese tallow trees (Triadica sebifera) from the invasive range outperform those from the native range with an active soil community or phosphorus fertilization.

    PubMed

    Zhang, Ling; Zhang, Yaojun; Wang, Hong; Zou, Jianwen; Siemann, Evan

    2013-01-01

    Two mechanisms that have been proposed to explain success of invasive plants are unusual biotic interactions, such as enemy release or enhanced mutualisms, and increased resource availability. However, while these mechanisms are usually considered separately, both may be involved in successful invasions. Biotic interactions may be positive or negative and may interact with nutritional resources in determining invasion success. In addition, the effects of different nutrients on invasions may vary. Finally, genetic variation in traits between populations located in introduced versus native ranges may be important for biotic interactions and/or resource use. Here, we investigated the roles of soil biota, resource availability, and plant genetic variation using seedlings of Triadica sebifera in an experiment in the native range (China). We manipulated nitrogen (control or 4 g/m2), phosphorus (control or 0.5 g/m2), soil biota (untreated or sterilized field soil), and plant origin (4 populations from the invasive range, 4 populations from the native range) in a full factorial experiment. Phosphorus addition increased root, stem, and leaf masses. Leaf mass and height growth depended on population origin and soil sterilization. Invasive populations had higher leaf mass and growth rates than native populations did in fresh soil but they had lower, comparable leaf mass and growth rates in sterilized soil. Invasive populations had higher growth rates with phosphorus addition but native ones did not. Soil sterilization decreased specific leaf area in both native and exotic populations. Negative effects of soil sterilization suggest that soil pathogens may not be as important as soil mutualists for T. sebifera performance. Moreover, interactive effects of sterilization and origin suggest that invasive T. sebifera may have evolved more beneficial relationships with the soil biota. Overall, seedlings from the invasive range outperformed those from the native range, however

  6. Droplet digital polymerase chain reaction (PCR) outperforms real-time PCR in the detection of environmental DNA from an invasive fish species.

    PubMed

    Doi, Hideyuki; Takahara, Teruhiko; Minamoto, Toshifumi; Matsuhashi, Saeko; Uchii, Kimiko; Yamanaka, Hiroki

    2015-05-01

    Environmental DNA (eDNA) has been used to investigate species distributions in aquatic ecosystems. Most of these studies use real-time polymerase chain reaction (PCR) to detect eDNA in water; however, PCR amplification is often inhibited by the presence of organic and inorganic matter. In droplet digital PCR (ddPCR), the sample is partitioned into thousands of nanoliter droplets, and PCR inhibition may be reduced by the detection of the end-point of PCR amplification in each droplet, independent of the amplification efficiency. In addition, real-time PCR reagents can affect PCR amplification and consequently alter detection rates. We compared the effectiveness of ddPCR and real-time PCR using two different PCR reagents for the detection of eDNA from the invasive bluegill sunfish, Lepomis macrochirus, in ponds. We found that ddPCR had higher detection rates of bluegill eDNA in pond water than real-time PCR with either of the PCR reagents, especially at low DNA concentrations. Tests of the limits of DNA detection, performed by spiking bluegill DNA into DNA extracts from ponds containing natural inhibitors, showed that ddPCR had a higher detection rate than real-time PCR. Our results suggest that ddPCR is more resistant to the presence of PCR inhibitors in field samples than real-time PCR. Thus, ddPCR outperforms real-time PCR methods for detecting eDNA to document species distributions in natural habitats, especially in habitats with high concentrations of PCR inhibitors. PMID:25850372

  7. A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems

    PubMed Central

    Cao, Leilei; Xu, Lihong; Goodman, Erik D.

    2016-01-01

    A Guiding Evolutionary Algorithm (GEA) with a greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual serves as a guide that attracts offspring to its region of genotype space. Mutation is added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism is applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the other three typical global optimization algorithms with which it was compared. PMID:27293421
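
    The mechanisms named in the abstract (crossover with the current global best, a dynamic mutation probability, and probabilistic local search) can be sketched as below. This is a generic Python illustration on a simple sphere objective with assumed parameter schedules; it is not the authors' code or experimental setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sphere(x):
        return float(np.sum(x ** 2))

    def gea(obj, dim=10, pop_size=30, generations=200, bounds=(-5.0, 5.0)):
        lo, hi = bounds
        pop = rng.uniform(lo, hi, (pop_size, dim))
        fit = np.array([obj(p) for p in pop])
        best = pop[fit.argmin()].copy()
        for g in range(generations):
            p_mut = 0.2 * (1.0 - g / generations)      # dynamic mutation probability
            p_local = 0.1 + 0.4 * g / generations      # dynamic local-search probability
            for i in range(pop_size):
                # Crossover with the current global best instead of a random mate.
                alpha = rng.random(dim)
                child = alpha * best + (1.0 - alpha) * pop[i]
                # Mutation applied with a dynamic probability.
                mask = rng.random(dim) < p_mut
                child[mask] += rng.normal(0.0, 0.5, mask.sum())
                # Occasional local search: small perturbation kept only if it improves.
                if rng.random() < p_local:
                    trial = child + rng.normal(0.0, 0.05, dim)
                    if obj(trial) < obj(child):
                        child = trial
                child = np.clip(child, lo, hi)
                # Greedy replacement of the parent.
                f = obj(child)
                if f < fit[i]:
                    pop[i], fit[i] = child, f
            best = pop[fit.argmin()].copy()
        return best, fit.min()

    best, best_f = gea(sphere)
    print("best objective:", best_f)
    ```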

  8. New Enhanced Artificial Bee Colony (JA-ABC5) Algorithm with Application for Reactive Power Optimization

    PubMed Central

    2015-01-01

    The standard artificial bee colony (ABC) algorithm involves exploration and exploitation processes which need to be balanced for enhanced performance. This paper proposes a new modified ABC algorithm named JA-ABC5 to enhance convergence speed and improve the ability to reach the global optimum by balancing exploration and exploitation processes. New stages have been proposed at the earlier stages of the algorithm to increase the exploitation process. In addition, modified mutation equations have been introduced in the employed- and onlooker-bee phases to balance the two processes. The performance of JA-ABC5 has been analyzed on 27 commonly used benchmark functions and tested on the reactive power optimization problem. The results clearly show that the proposed algorithm outperforms the compared algorithms in terms of convergence speed and ability to reach the global optimum. PMID:25879054

  9. A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems.

    PubMed

    Cao, Leilei; Xu, Lihong; Goodman, Erik D

    2016-01-01

    A Guiding Evolutionary Algorithm (GEA) with a greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual serves as a guide that attracts offspring to its region of genotype space. Mutation is added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism is applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the other three typical global optimization algorithms with which it was compared. PMID:27293421

  10. Evaluating Machine Learning Regression Algorithms for Operational Retrieval of Biophysical Parameters: Opportunities for Sentinel

    NASA Astrophysics Data System (ADS)

    Verrelst, Jochem; Rivera, J. P.; Alonso, L.; Guanter, L.; Moreno, J.

    2012-04-01

    ESA’s upcoming satellites Sentinel-2 (S2) and Sentinel-3 (S3) aim to ensure continuity for Landsat 5/7, SPOT-5, SPOT-Vegetation and Envisat MERIS observations by providing superspectral images of high spatial and temporal resolution. S2 and S3 will deliver near real-time operational products with a high accuracy for land monitoring. This unprecedented data availability leads to an urgent need for developing robust and accurate retrieval methods. Machine learning regression algorithms could be powerful candidates for the estimation of biophysical parameters from satellite reflectance measurements because of their ability to perform adaptive, nonlinear data fitting. By using data from the ESA-led field campaign SPARC (Barrax, Spain), it was recently found [1] that Gaussian processes regression (GPR) outperformed competitive machine learning algorithms such as neural networks, support vector regression, and kernel ridge regression both in terms of accuracy and computational speed. For various Sentinel configurations (S2-10m, S2-20m, S2-60m and S3-300m) three important biophysical parameters were estimated: leaf chlorophyll content (Chl), leaf area index (LAI) and fractional vegetation cover (FVC). GPR was the only method that reached the 10% precision required by end users in the estimation of Chl. In view of implementing the regressor into operational monitoring applications, here the portability of locally trained GPR models to other images was evaluated. The associated confidence maps proved to be a good indicator for evaluating the robustness of the trained models. Consistent retrievals were obtained across the different images, particularly over agricultural sites. To make the method suitable for operational use, however, the poorer confidences over bare soil areas suggest that the training dataset should be expanded with inputs from various land cover types.
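
    A minimal sketch of the retrieval step, assuming scikit-learn is available: a GaussianProcessRegressor is trained on (reflectance, parameter) pairs and queried with return_std=True, where the predictive standard deviation plays the role of the confidence map mentioned above. The data here are synthetic stand-ins, not the SPARC campaign measurements.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(2)

    # Synthetic stand-in for band reflectances (X) and a biophysical parameter such as Chl (y).
    n_train, n_bands = 150, 10
    X = rng.uniform(0.0, 0.6, (n_train, n_bands))
    y = 80.0 * X[:, 3] - 40.0 * X[:, 7] + rng.normal(0.0, 1.0, n_train)

    kernel = 1.0 * RBF(length_scale=np.ones(n_bands)) + WhiteKernel(noise_level=1.0)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

    # "Image" pixels to retrieve; the predictive std acts as a per-pixel confidence map.
    X_new = rng.uniform(0.0, 0.6, (5, n_bands))
    chl_mean, chl_std = gpr.predict(X_new, return_std=True)
    for m, s in zip(chl_mean, chl_std):
        print(f"Chl estimate: {m:6.2f}  +/- {s:5.2f}")
    ```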

  11. ETD: an extended time delay algorithm for ventricular fibrillation detection.

    PubMed

    Kim, Jungyoon; Chu, Chao-Hsien

    2014-01-01

    Ventricular fibrillation (VF) is the most serious type of heart attack, which requires quick detection and first aid to improve patients' survival rates. To be most effective in using wearable devices for VF detection, it is vital that the detection algorithms be accurate, robust, reliable and computationally efficient. Previous studies and our experiments both indicate that the time-delay (TD) algorithm has a high reliability for separating sinus rhythm (SR) from VF and is resistant to variable factors, such as window size and filtering method. However, it fails to detect some VF cases. In this paper, we propose an extended time-delay (ETD) algorithm for VF detection and conduct experiments comparing the performance of ETD against five good VF detection algorithms, including TD, using the popular Creighton University (CU) database. Our study shows that (1) TD and ETD outperform the other four algorithms considered and (2) with the same sensitivity setting, ETD improves upon TD in three other quality measures by up to 7.64%, and in terms of aggregate accuracy, the ETD algorithm shows a 2.6% improvement in the area under the curve (AUC) compared to TD. PMID:25571480

  12. Pathway-Dependent Effectiveness of Network Algorithms for Gene Prioritization

    PubMed Central

    Shim, Jung Eun; Hwang, Sohyun; Lee, Insuk

    2015-01-01

    A network-based approach has proven useful for the identification of novel genes associated with complex phenotypes, including human diseases. Because network-based gene prioritization algorithms are based on propagating information of known phenotype-associated genes through networks, the pathway structure of each phenotype might significantly affect the effectiveness of algorithms. We systematically compared two popular network algorithms with distinct mechanisms – direct neighborhood which propagates information to only direct network neighbors, and network diffusion which diffuses information throughout the entire network – in prioritization of genes for worm and human phenotypes. Previous studies reported that network diffusion generally outperforms direct neighborhood for human diseases. Although prioritization power is generally measured for all ranked genes, only the top candidates are significant for subsequent functional analysis. We found that high prioritizing power of a network algorithm for all genes cannot guarantee successful prioritization of top ranked candidates for a given phenotype. Indeed, the majority of the phenotypes that were more efficiently prioritized by network diffusion showed higher prioritizing power for top candidates by direct neighborhood. We also found that connectivity among pathway genes for each phenotype largely determines which network algorithm is more effective, suggesting that the network algorithm used for each phenotype should be chosen with consideration of pathway gene connectivity. PMID:26091506
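
    The two propagation schemes can be contrasted on a toy adjacency matrix, as in the hedged sketch below: direct-neighborhood scoring sums edge weights from seed genes, while network diffusion is approximated by a random walk with restart. The graph, seed genes, and restart probability are illustrative assumptions, not data from the study.

    ```python
    import numpy as np

    # Toy undirected gene network (adjacency matrix) with two seed genes {0, 1}.
    A = np.array([
        [0, 1, 1, 0, 0, 0],
        [1, 0, 1, 1, 0, 0],
        [1, 1, 0, 0, 1, 0],
        [0, 1, 0, 0, 1, 1],
        [0, 0, 1, 1, 0, 1],
        [0, 0, 0, 1, 1, 0],
    ], dtype=float)
    seeds = np.array([1, 1, 0, 0, 0, 0], dtype=float)

    # Direct neighborhood: a candidate's score is the summed weight of its edges to seed genes.
    direct = A @ seeds

    # Network diffusion: random walk with restart, p <- (1-r) * W p + r * seed distribution.
    W = A / A.sum(axis=0)              # column-normalized transition matrix
    r = 0.3                            # restart probability (illustrative)
    p = seeds / seeds.sum()
    for _ in range(200):
        p = (1.0 - r) * W @ p + r * seeds / seeds.sum()

    print("direct-neighbor scores:", np.round(direct, 3))
    print("diffusion scores:      ", np.round(p, 3))
    ```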

  13. Algorithm refinement for the stochastic Burgers' equation

    SciTech Connect

    Bell, John B.; Foo, Jasmine; Garcia, Alejandro L. E-mail: algarcia@algarcia.org

    2007-04-10

    In this paper, we develop an algorithm refinement (AR) scheme for an excluded random walk model whose mean field behavior is given by the viscous Burgers' equation. AR hybrids use the adaptive mesh refinement framework to model a system using a molecular algorithm where desired while allowing a computationally faster continuum representation to be used in the remainder of the domain. The focus in this paper is the role of fluctuations on the dynamics. In particular, we demonstrate that it is necessary to include a stochastic forcing term in Burgers' equation to accurately capture the correct behavior of the system. The conclusion we draw from this study is that the fidelity of multiscale methods that couple disparate algorithms depends on the consistent modeling of fluctuations in each algorithm and on a coupling, such as algorithm refinement, that preserves this consistency.

  14. A Procedure for Estimating Intrasubject Behavior Consistency

    ERIC Educational Resources Information Center

    Hernandez, Jose M.; Rubio, Victor J.; Revuelta, Javier; Santacreu, Jose

    2006-01-01

    Trait psychology implicitly assumes consistency of the personal traits. Mischel, however, argued against the idea of a general consistency of human beings. The present article aims to design a statistical procedure based on an adaptation of the pi* statistic to measure the degree of intraindividual consistency independently of the measure used.…

  15. 15 CFR 930.57 - Consistency certifications.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT FEDERAL CONSISTENCY WITH APPROVED COASTAL MANAGEMENT PROGRAMS Consistency for Activities Requiring... consistent with the management program. At the same time, the applicant shall furnish to the State agency...

  16. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton’s method for their nonlinear solve.

  17. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30-60 in computing time and a factor of over 100 in matrix storage space.

  18. A total variation diminishing finite difference algorithm for sonic boom propagation models

    NASA Technical Reports Server (NTRS)

    Sparrow, Victor W.

    1993-01-01

    It is difficult to accurately model the rise phases of sonic boom waveforms with traditional finite difference algorithms because of finite difference phase dispersion. This paper introduces the concept of a total variation diminishing (TVD) finite difference method as a tool for accurately modeling the rise phases of sonic booms. A standard second order finite difference algorithm and its TVD modified counterpart are both applied to the one-way propagation of a square pulse. The TVD method clearly outperforms the non-TVD method, showing great potential as a new computational tool in the analysis of sonic boom propagation.
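
    The qualitative effect can be reproduced with a generic 1-D linear advection test (not the paper's sonic boom model): the sketch below advects a square pulse with an unlimited second-order (Lax-Wendroff-type) update and with a minmod-limited TVD variant, and reports the spurious overshoot of each. Grid size, CFL number, and limiter choice are assumptions made for illustration.

    ```python
    import numpy as np

    def advect(u0, nu, steps, limiter):
        """One-way linear advection u_t + a u_x = 0 (a > 0) with CFL number nu = a*dt/dx,
        periodic boundaries, and a flux-limited Lax-Wendroff update."""
        u = u0.copy()
        for _ in range(steps):
            du = np.roll(u, -1) - u                                  # u_{i+1} - u_i
            r = np.divide(u - np.roll(u, 1), du,
                          out=np.ones_like(u), where=du != 0)        # smoothness ratio
            flux = u + 0.5 * (1.0 - nu) * limiter(r) * du            # numerical flux / a
            u = u - nu * (flux - np.roll(flux, 1))
        return u

    minmod = lambda r: np.maximum(0.0, np.minimum(1.0, r))           # TVD limiter
    unlimited = lambda r: np.ones_like(r)                            # plain Lax-Wendroff

    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)                   # square pulse
    u_lw = advect(u0, nu=0.5, steps=100, limiter=unlimited)
    u_tvd = advect(u0, nu=0.5, steps=100, limiter=minmod)
    print("max overshoot, Lax-Wendroff: %.3f" % (u_lw.max() - 1.0))
    print("max overshoot, TVD (minmod): %.3f" % (u_tvd.max() - 1.0))
    ```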

  19. Fusing face-verification algorithms and humans.

    PubMed

    O'Toole, Alice J; Abdi, Hervé; Jiang, Fang; Phillips, P Jonathon

    2007-10-01

    It has been demonstrated recently that state-of-the-art face-recognition algorithms can surpass human accuracy at matching faces over changes in illumination. The ranking of algorithms and humans by accuracy, however, does not provide information about whether algorithms and humans perform the task comparably or whether algorithms and humans can be fused to improve performance. In this paper, we fused humans and algorithms using partial least squares regression (PLSR). In the first experiment, we applied PLSR to face-pair similarity scores generated by seven algorithms participating in the Face Recognition Grand Challenge. The PLSR produced an optimal weighting of the similarity scores, which we tested for generality with a jackknife procedure. Fusing the algorithms' similarity scores using the optimal weights produced a twofold reduction of error rate over the most accurate algorithm. Next, human-subject-generated similarity scores were added to the PLSR analysis. Fusing humans and algorithms increased the performance to near-perfect classification accuracy. These results are discussed in terms of maximizing face-verification accuracy with hybrid systems consisting of multiple algorithms and humans. PMID:17926698
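
    A hedged sketch of the fusion idea, assuming scikit-learn's PLSRegression: similarity scores from several scorers are stacked as features and regressed onto match/non-match labels. The scores below are synthetic random stand-ins, and the jackknife generality test from the paper is omitted.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(3)

    # Synthetic similarity scores for 500 face pairs: 7 "algorithms" plus 1 human rater.
    n_pairs = 500
    labels = rng.integers(0, 2, n_pairs)                  # 1 = same person, 0 = different
    signal = labels + rng.normal(0.0, 1.0, (8, n_pairs))  # each scorer sees a noisy version
    scores = signal.T                                     # shape (n_pairs, 8)

    train, test = np.arange(0, 400), np.arange(400, n_pairs)
    pls = PLSRegression(n_components=3).fit(scores[train], labels[train].astype(float))

    fused = pls.predict(scores[test]).ravel()
    accuracy = np.mean((fused > 0.5) == labels[test])
    print("fused verification accuracy: %.3f" % accuracy)
    ```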

  20. Growth algorithms for lattice heteropolymers at low temperatures

    NASA Astrophysics Data System (ADS)

    Hsu, Hsiao-Ping; Mehra, Vishal; Nadler, Walter; Grassberger, Peter

    2003-01-01

    Two improved versions of the pruned-enriched-Rosenbluth method (PERM) are proposed and tested on simple models of lattice heteropolymers. Both are found to outperform not only the previous version of PERM, but also all other stochastic algorithms which have been employed on this problem, except for the core directed chain growth method (CG) of Beutler and Dill. In nearly all test cases they are faster in finding low-energy states, and in many cases they found new lowest energy states missed in previous papers. The CG method is superior to our method in some cases, but less efficient in others. On the other hand, the CG method relies heavily on heuristics based on presumptions about the hydrophobic core and does not give thermodynamic properties, while the present method is a fully blind general purpose algorithm giving correct Boltzmann-Gibbs weights, and can be applied in principle to any stochastic sampling problem.

  1. Genetic Algorithms: A New Method for Neutron Beam Spectral Characterization

    SciTech Connect

    David W. Freeman

    2000-06-04

    A revolutionary new concept for solving the neutron spectrum unfolding problem using genetic algorithms (GAs) has recently been introduced. GAs are part of a new field of evolutionary solution techniques that mimic living systems with computer-simulated chromosome solutions that mate, mutate, and evolve to create improved solutions. The original motivation for the research was to improve spectral characterization of neutron beams associated with boron neutron capture therapy (BNCT). The GA unfolding technique has been successfully applied to problems with moderate energy resolution (up to 47 energy groups). Initial research indicates that the GA unfolding technique may well be superior to popular unfolding methods in common use. Research now under way at Kansas State University is focused on optimizing the unfolding algorithm and expanding its energy resolution to unfold detailed beam spectra based on multiple foil measurements. Indications are that the final code will significantly outperform current, state-of-the-art codes in use by the scientific community.

  2. A hierarchical algorithm for molecular similarity (H-FORMS).

    PubMed

    Ramirez-Manzanares, Alonso; Peña, Joaquin; Azpiroz, Jon M; Merino, Gabriel

    2015-07-15

    A new hierarchical method to determine molecular similarity is introduced. The goal of this method is to detect if a pair of molecules has the same structure by estimating a rigid transformation that aligns the molecules and a correspondence function that matches their atoms. The algorithm first detects similarity based on the global spatial structure. If this analysis is not sufficient, the algorithm computes novel local structural rotation-invariant descriptors for the atom neighborhood and uses this information to match atoms. Two strategies (deterministic and stochastic) for the matching-based alignment computation are tested. As a result, the atom matching based on local similarity indexes decreases the number of testing trials and significantly reduces the dimensionality of the Hungarian assignment problem. The experiments on well-known datasets show that our proposal outperforms state-of-the-art methods in terms of the required computational time and accuracy. PMID:26037060
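
    The final assignment stage can be illustrated with SciPy's Hungarian solver, as in the sketch below: given two roughly aligned conformations, pairwise distances form the cost matrix and linear_sum_assignment returns the atom correspondence. This illustrates only the assignment step, not the hierarchical descriptors or the alignment estimation; the coordinates are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    rng = np.random.default_rng(4)

    # Molecule A: random 3-D atom coordinates; molecule B: the same atoms, permuted and jittered.
    coords_a = rng.uniform(-3.0, 3.0, (12, 3))
    perm = rng.permutation(12)
    coords_b = coords_a[perm] + rng.normal(0.0, 0.05, (12, 3))

    cost = cdist(coords_a, coords_b)            # pairwise Euclidean distances as assignment costs
    row, col = linear_sum_assignment(cost)      # Hungarian (Kuhn-Munkres) solution

    # B's atom col[i] was generated from A's atom perm[col[i]]; a perfect match has perm[col] == row.
    print("all atoms matched to their true counterparts:", bool(np.all(perm[col] == row)))
    print("total matching cost: %.3f" % cost[row, col].sum())
    ```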

  3. A vertical handoff decision algorithm based on ARMA prediction model

    NASA Astrophysics Data System (ADS)

    Li, Ru; Shen, Jiao; Chen, Jun; Liu, Qiuhuan

    2011-12-01

    With the development of computer technology and the increasing demand for mobile communications, the next generation wireless networks will be composed of various wireless networks (e.g., WiMAX and WiFi). Vertical handoff is a key technology of next generation wireless networks. During the vertical handoff procedure, the handoff decision is a crucial issue for efficient mobility. Based on an autoregressive moving average (ARMA) prediction model, we propose a vertical handoff decision algorithm, which aims to improve the performance of vertical handoff and avoid unnecessary handoffs. Based on the current received signal strength (RSS) and the previous RSS values, the proposed approach adopts an ARMA model to predict the next RSS and then, according to the predicted RSS, determines whether to trigger the link-layer triggering event and complete the vertical handoff. The simulation results indicate that the proposed algorithm outperforms the threshold-based RSS scheme in handoff performance and the number of handoffs.
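
    A minimal sketch of the decision step, assuming statsmodels is available: an ARMA model (ARIMA with d = 0) is fit to the RSS history, the next RSS is forecast one step ahead, and a handoff trigger fires when the forecast drops below a threshold. The RSS trace, model order, and threshold are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(5)

    # Synthetic RSS trace (dBm): slow decay as the terminal leaves the current cell, plus noise.
    t = np.arange(120)
    rss = -60.0 - 0.25 * t + rng.normal(0.0, 1.5, t.size)

    THRESHOLD_DBM = -80.0            # illustrative handoff threshold

    # Fit an ARMA(2, 1) model (ARIMA with d = 0, constant + linear trend) and forecast one step.
    history = rss[:100]
    fit = ARIMA(history, order=(2, 0, 1), trend="ct").fit()
    next_rss = float(fit.forecast(steps=1)[0])

    print("predicted next RSS: %.1f dBm" % next_rss)
    if next_rss < THRESHOLD_DBM:
        print("trigger link-layer event and start vertical handoff")
    else:
        print("stay on the current network")
    ```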

  4. A vertical handoff decision algorithm based on ARMA prediction model

    NASA Astrophysics Data System (ADS)

    Li, Ru; Shen, Jiao; Chen, Jun; Liu, Qiuhuan

    2012-01-01

    With the development of computer technology and the increasing demand for mobile communications, the next generation wireless networks will be composed of various wireless networks (e.g., WiMAX and WiFi). Vertical handoff is a key technology of next generation wireless networks. During the vertical handoff procedure, the handoff decision is a crucial issue for efficient mobility. Based on an autoregressive moving average (ARMA) prediction model, we propose a vertical handoff decision algorithm, which aims to improve the performance of vertical handoff and avoid unnecessary handoffs. Based on the current received signal strength (RSS) and the previous RSS values, the proposed approach adopts an ARMA model to predict the next RSS and then, according to the predicted RSS, determines whether to trigger the link-layer triggering event and complete the vertical handoff. The simulation results indicate that the proposed algorithm outperforms the threshold-based RSS scheme in handoff performance and the number of handoffs.

  5. Memetic algorithms for ligand expulsion from protein cavities

    NASA Astrophysics Data System (ADS)

    Rydzewski, J.; Nowak, W.

    2015-09-01

    Ligand diffusion through a protein interior is a fundamental process governing biological signaling and enzymatic catalysis. The complex topology of channels in proteins often leads to difficulties in modeling ligand escape pathways by classical molecular dynamics simulations. In this paper, two novel memetic methods for searching the exit paths and cavity space exploration are proposed: Memory Enhanced Random Acceleration (MERA) Molecular Dynamics (MD) and Immune Algorithm (IA). In MERA, a pheromone concept is introduced to optimize an expulsion force. In IA, hybrid learning protocols are exploited to predict ligand exit paths. They are tested on three protein channels with increasing complexity: the M2 muscarinic G-protein-coupled receptor, the enzyme nitrile hydratase, and the heme-protein cytochrome P450cam. In these cases, the memetic methods outperform simulated annealing and random acceleration molecular dynamics. The proposed algorithms are general and appropriate for all problems in which accelerated transport of an object through a network of channels is studied.

  6. Memetic algorithms for ligand expulsion from protein cavities.

    PubMed

    Rydzewski, J; Nowak, W

    2015-09-28

    Ligand diffusion through a protein interior is a fundamental process governing biological signaling and enzymatic catalysis. The complex topology of channels in proteins often leads to difficulties in modeling ligand escape pathways by classical molecular dynamics simulations. In this paper, two novel memetic methods for searching the exit paths and cavity space exploration are proposed: Memory Enhanced Random Acceleration (MERA) Molecular Dynamics (MD) and Immune Algorithm (IA). In MERA, a pheromone concept is introduced to optimize an expulsion force. In IA, hybrid learning protocols are exploited to predict ligand exit paths. They are tested on three protein channels with increasing complexity: the M2 muscarinic G-protein-coupled receptor, the enzyme nitrile hydratase, and the heme-protein cytochrome P450cam. In these cases, the memetic methods outperform simulated annealing and random acceleration molecular dynamics. The proposed algorithms are general and appropriate for all problems in which accelerated transport of an object through a network of channels is studied. PMID:26428990

  7. A new distributed systems scheduling algorithm: a swarm intelligence approach

    NASA Astrophysics Data System (ADS)

    Haghi Kashani, Mostafa; Sarvizadeh, Raheleh; Jameii, Mahdi

    2011-12-01

    The scheduling problem in distributed systems is known to be NP-complete, and methods based on heuristic or metaheuristic search have been proposed to obtain optimal and suboptimal solutions. Task scheduling is a key factor for distributed systems to gain better performance. In this paper, an efficient method based on a memetic algorithm is developed to solve the distributed systems scheduling problem. To balance load efficiently, Artificial Bee Colony (ABC) has been applied as the local search in the proposed memetic algorithm. The proposed method has been compared to an existing memetic-based approach in which a Learning Automata method is used as the local search. The results demonstrate that the proposed method outperforms the above-mentioned method in terms of communication cost.

  8. Consistent 2-D phase unwrapping guided by a quality map

    SciTech Connect

    Flynn, T.J.

    1995-12-31

    The problem of 2-D phase unwrapping arises when a spatially varying quantity is measured modulo some period. One needs to reconstruct a smooth unwrapped phase, consistent with the original data, by adding a multiple of the period to each sample. Smoothness typically cannot be enforced over all of the scene, due to noise and localized jumps. An unwrapping algorithm may form a mask within which phase discontinuities are allowed. In interferometry a quality map is available, indicating the reliability of the measurements. In this case, the mask should be contained as much as possible in areas of low quality. This paper presents an algorithm for phase unwrapping in which the mask design is guided by the quality map. The mask is grown from the residues (as defined by Goldstein et al.) into areas where the quality is below a threshold. A connected component of the mask stops growing when its residue charge becomes balanced. The threshold is raised as necessary to allow growth. This stage terminates when all components are balanced. The mask is then thinned by removing points that are not needed to cover the residues correctly. The unwrapped phase is found by simple 1-D unwrapping along paths that avoid the mask. We present an example solution found by the algorithm and discuss possible modifications.
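
    The last stage, 1-D unwrapping along a path that avoids the mask, can be sketched as below: successive samples are shifted by integer multiples of the period so that each jump stays within half a period, and masked (unreliable) samples are skipped. Residue detection, mask growth, and thinning from the paper are not reproduced; the wrapped ramp is a synthetic test case.

    ```python
    import numpy as np

    def unwrap_along_path(wrapped, mask, period=2.0 * np.pi):
        """Unwrap a 1-D sequence of phase samples, skipping masked (unreliable) ones."""
        unwrapped = wrapped.astype(float).copy()
        last_good = 0
        for i in range(1, len(wrapped)):
            if mask[i]:
                unwrapped[i] = unwrapped[last_good]      # placeholder inside the mask
                continue
            diff = wrapped[i] - wrapped[last_good]
            # Choose the multiple of the period that makes the jump smallest.
            k = np.round(diff / period)
            unwrapped[i] = unwrapped[last_good] + (diff - k * period)
            last_good = i
        return unwrapped

    # True smooth phase ramp, wrapped into (-pi, pi], with two samples flagged as unreliable.
    true_phase = np.linspace(0.0, 6.0 * np.pi, 50)
    wrapped = np.angle(np.exp(1j * true_phase))
    mask = np.zeros(50, dtype=bool)
    mask[[20, 21]] = True

    recovered = unwrap_along_path(wrapped, mask)
    err = np.max(np.abs((recovered - true_phase)[~mask]))
    print("max unwrapping error away from the mask: %.2e rad" % err)
    ```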

  9. Comparison of cone beam artifacts reduction: two pass algorithm vs TV-based CS algorithm

    NASA Astrophysics Data System (ADS)

    Choi, Shinkook; Baek, Jongduk

    2015-03-01

    In cone beam computed tomography (CBCT), the severity of the cone beam artifacts increases as the cone angle increases. To reduce the cone beam artifacts, several modified FDK algorithms and compressed sensing based iterative algorithms have been proposed. In this paper, we used the two pass algorithm and the Gradient-Projection-Barzilai-Borwein (GPBB) algorithm to reduce the cone beam artifacts, and compared their performance using the structural similarity (SSIM) index. In the two pass algorithm, it is assumed that the cone beam artifacts are mainly caused by extreme-density (ED) objects, and therefore the algorithm reproduces the cone beam artifacts (i.e., an error image) produced by the ED objects and then subtracts them from the original image. The GPBB algorithm is a compressed sensing based iterative algorithm which minimizes an energy function by calculating the gradient projection with the step size determined by the Barzilai-Borwein formulation, so it can estimate missing data caused by the cone beam artifacts. To evaluate the performance of the two algorithms, we used test objects consisting of 7 ellipsoids separated along the z direction, and cone beam artifacts were generated using a 30-degree cone angle. Even though the FDK algorithm produced severe cone beam artifacts with a large cone angle, the two pass algorithm reduced the cone beam artifacts, with small residual errors caused by inaccuracy of the ED objects. In contrast, the GPBB algorithm completely removed the cone beam artifacts and restored the original shape of the objects.

  10. Algorithmic Animation in Education--Review of Academic Experience

    ERIC Educational Resources Information Center

    Esponda-Arguero, Margarita

    2008-01-01

    This article is a review of the pedagogical experience obtained with systems for algorithmic animation. Algorithms consist of a sequence of operations whose effect on data structures can be visualized using a computer. Students learn algorithms by stepping the animation through the different individual operations, possibly reversing their effect.…

  11. An Experimental Method for the Active Learning of Greedy Algorithms

    ERIC Educational Resources Information Center

    Velazquez-Iturbide, J. Angel

    2013-01-01

    Greedy algorithms constitute an apparently simple algorithm design technique, but its learning goals are not simple to achieve. We present a didactic method aimed at promoting active learning of greedy algorithms. The method is focused on the concept of selection function, and is based on explicit learning goals. It mainly consists of an…

  12. Algorithms Bridging Quantum Computation and Chemistry

    NASA Astrophysics Data System (ADS)

    McClean, Jarrod Ryan

    The design of new materials and chemicals derived entirely from computation has long been a goal of computational chemistry, and the governing equation whose solution would permit this dream is known. Unfortunately, the exact solution to this equation has been far too expensive and clever approximations fail in critical situations. Quantum computers offer a novel solution to this problem. In this work, we develop not only new algorithms to use quantum computers to study hard problems in chemistry, but also explore how such algorithms can help us to better understand and improve our traditional approaches. In particular, we first introduce a new method, the variational quantum eigensolver, which is designed to maximally utilize the quantum resources available in a device to solve chemical problems. We apply this method in a real quantum photonic device in the lab to study the dissociation of the helium hydride (HeH+) molecule. We also enhance this methodology with architecture specific optimizations on ion trap computers and show how linear-scaling techniques from traditional quantum chemistry can be used to improve the outlook of similar algorithms on quantum computers. We then show how studying quantum algorithms such as these can be used to understand and enhance the development of classical algorithms. In particular we use a tool from adiabatic quantum computation, Feynman's Clock, to develop a new discrete time variational principle and further establish a connection between real-time quantum dynamics and ground state eigenvalue problems. We use these tools to develop two novel parallel-in-time quantum algorithms that outperform competitive algorithms as well as offer new insights into the connection between the fermion sign problem of ground states and the dynamical sign problem of quantum dynamics. Finally we use insights gained in the study of quantum circuits to explore a general notion of sparsity in many-body quantum systems. In particular we use

  13. Algorithm for polarimetry data inversion, consistent with other measuring techniques in tokamak plasma

    NASA Astrophysics Data System (ADS)

    Kravtsov, Yu. A.; Chrzanowski, J.; Mazon, D.

    2011-06-01

    A new procedure for plasma polarimetry data inversion is suggested, which fits a two-parameter knowledge-based plasma model to the measured parameters (azimuthal and ellipticity angles) of the polarization ellipse. The knowledge-based model is supposed to use the magnetic field and electron density profiles obtained from magnetic measurements and LIDAR data on Thomson scattering. In contrast to traditional polarimetry, polarization evolution along the ray is determined on the basis of the angular variables technique (AVT). The paper contains a few examples of numerical solutions of these equations, which are applicable in conditions where the Faraday and Cotton-Mouton effects are simultaneously strong.

  14. Performance of a cavity-method-based algorithm for the prize-collecting Steiner tree problem on graphs

    NASA Astrophysics Data System (ADS)

    Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo

    2012-08-01

    We study the behavior of an algorithm derived from the cavity method for the prize-collecting Steiner tree (PCST) problem on graphs. The algorithm is based on the zero temperature limit of the cavity equations and as such is formally simple (a fixed point equation resolved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristics and the dhea solver, a branch and cut integer linear programming based approach. The comparison shows that the cavity algorithm outperforms the two algorithms in most large instances both in running time and quality of the solution. Finally, we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two postprocessing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.

  15. Student Effort, Consistency, and Online Performance

    ERIC Educational Resources Information Center

    Patron, Hilde; Lopez, Salvador

    2011-01-01

    This paper examines how student effort, consistency, motivation, and marginal learning, influence student grades in an online course. We use data from eleven Microeconomics courses taught online for a total of 212 students. Our findings show that consistency, or less time variation, is a statistically significant explanatory variable, whereas…

  16. 40 CFR 55.12 - Consistency updates.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Consistency updates. 55.12 Section 55.12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) OUTER CONTINENTAL SHELF AIR REGULATIONS § 55.12 Consistency updates. (a) The Administrator will...

  17. 40 CFR 55.12 - Consistency updates.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 6 2013-07-01 2013-07-01 false Consistency updates. 55.12 Section 55.12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) OUTER CONTINENTAL SHELF AIR REGULATIONS § 55.12 Consistency updates. (a) The Administrator will update this part as necessary to maintain...

  18. 15 CFR 930.96 - Consistency review.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 15 Commerce and Foreign Trade 3 2013-01-01 2013-01-01 false Consistency review. 930.96 Section 930... and Local Governments § 930.96 Consistency review. (a)(1) If the State agency does not object to the proposed activity, the Federal agency may grant the federal assistance to the applicant...

  19. 15 CFR 930.96 - Consistency review.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 15 Commerce and Foreign Trade 3 2011-01-01 2011-01-01 false Consistency review. 930.96 Section 930... and Local Governments § 930.96 Consistency review. (a)(1) If the State agency does not object to the proposed activity, the Federal agency may grant the federal assistance to the applicant...

  20. 15 CFR 930.96 - Consistency review.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 15 Commerce and Foreign Trade 3 2012-01-01 2012-01-01 false Consistency review. 930.96 Section 930... and Local Governments § 930.96 Consistency review. (a)(1) If the State agency does not object to the proposed activity, the Federal agency may grant the federal assistance to the applicant...

  1. 15 CFR 930.96 - Consistency review.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 15 Commerce and Foreign Trade 3 2014-01-01 2014-01-01 false Consistency review. 930.96 Section 930... and Local Governments § 930.96 Consistency review. (a)(1) If the State agency does not object to the proposed activity, the Federal agency may grant the federal assistance to the applicant...

  2. 15 CFR 930.96 - Consistency review.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false Consistency review. 930.96 Section 930.96 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT FEDERAL CONSISTENCY WITH APPROVED...

  3. Consistency and Enhancement Processes in Understanding Emotions

    ERIC Educational Resources Information Center

    Stets, Jan E.; Asencio, Emily K.

    2008-01-01

    Many theories in the sociology of emotions assume that emotions emerge from the cognitive consistency principle. Congruence among cognitions produces good feelings whereas incongruence produces bad feelings. A work situation is simulated in which managers give feedback to workers that is consistent or inconsistent with what the workers expect to…

  4. Sort-Mid tasks scheduling algorithm in grid computing.

    PubMed

    Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M

    2015-11-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. Many researchers have developed variant scheduling algorithms aimed at optimality, and these show good performance for task scheduling with respect to resource selection. However, using the full power of the resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The first step is to obtain the average value by sorting the list of completion times of each task. Then, the maximum average is obtained. Finally, the task with the maximum average is allocated to the machine with the minimum completion time. The allocated task is removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan. PMID:26644937
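
    One hedged reading of the steps described above is sketched below: compute each unassigned task's average completion time over all machines, pick the task with the largest average, assign it to the machine giving the minimum completion time, update that machine's ready time, and repeat. The expected-time-to-compute matrix is random, and details the abstract leaves open (tie breaking, the exact definition of the average) are resolved arbitrarily here.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    n_tasks, n_machines = 8, 3
    exec_time = rng.uniform(5.0, 50.0, (n_tasks, n_machines))   # ETC matrix (task x machine)

    ready = np.zeros(n_machines)                  # machine ready times
    unassigned = set(range(n_tasks))
    schedule = {}

    while unassigned:
        tasks = sorted(unassigned)
        completion = ready + exec_time[tasks]               # completion time of each task on each machine
        avg = completion.mean(axis=1)                       # average completion time per task
        pick = tasks[int(np.argmax(avg))]                   # task with the maximum average
        machine = int(np.argmin(ready + exec_time[pick]))   # machine with minimum completion time
        schedule[pick] = machine
        ready[machine] += exec_time[pick, machine]
        unassigned.remove(pick)

    makespan = ready.max()
    utilization = ready.sum() / (n_machines * makespan)
    print("schedule:", schedule)
    print("makespan: %.1f, utilization: %.2f" % (makespan, utilization))
    ```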

  5. A Novel Tracking Algorithm via Feature Points Matching

    PubMed Central

    Luo, Nan; Sun, Quansen; Chen, Qiang; Ji, Zexuan; Xia, Deshen

    2015-01-01

    Visual target tracking is a primary task in many computer vision applications and has been widely studied in recent years. Among all the tracking methods, the mean shift algorithm has attracted extraordinary interest and been well developed in the past decade due to its excellent performance. However, it is still challenging for color-histogram-based algorithms to deal with complex target tracking. Therefore, algorithms based on other distinguishing features are highly desirable. In this paper, we propose a novel target tracking algorithm based on mean shift theory, in which a new type of image feature is introduced and utilized to find the corresponding region between neighboring frames. The target histogram is created by clustering the features obtained in the extraction strategy. Then, the mean shift process is adopted to calculate the target location iteratively. Experimental results demonstrate that the proposed algorithm can deal with challenging tracking situations such as partial occlusion, illumination change, scale variations, object rotation and complex background clutter. Meanwhile, it outperforms several state-of-the-art methods. PMID:25617769

  6. A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis

    PubMed Central

    Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano

    2015-01-01

    As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m − 1)-dimensional manifold in the decision space under some mild conditions. However, how to utilize this regularity in designing multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution, and the centroid of the probability distribution is an (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper. PMID:25874246

  7. Stochastic inverse consistency in medical image registration.

    PubMed

    Yeung, Sai Kit; Shi, Pengcheng

    2005-01-01

    An essential goal in medical image registration is that the forward and reverse mapping matrices be inverses of each other, i.e., inverse consistency. Conventional approaches enforce consistency in a deterministic fashion, incorporating a sub-objective cost function to impose a source-destination symmetry property during the registration process. Assuming that the initial forward and reverse matching matrices have been computed and used as the inputs to our system, this paper presents a stochastic framework that yields perfect inverse consistency while simultaneously considering the errors underlying the registration matrices and the imperfection of the consistency constraint. An iterative generalized total least squares (GTLS) strategy has been developed so that inverse consistency is optimally imposed. PMID:16685959

  8. Managing consistency in collaborative design environments

    NASA Astrophysics Data System (ADS)

    Miao, Chunyan; Yang, Zhonghua; Goh, Angela; Sun, Chengzheng; Sattar, Abdul

    1999-08-01

    In today's global economy, there is a significant paradigm shift to collaborative engineering design environments. One of the key issues in the collaborative setting is the consistency model, which governs how to coordinate the activities of collaborators to ensure that they do not make inconsistent changes or updates to the shared objects. In this paper, we present a new consistency model which requires that all update operations be executed in causal order (causality) and that all participants have the same view of the operations on the shared objects (view synchrony). A simple multicast-based protocol to implement the consistency model is presented. By employing vector time and token mechanisms, the protocol brings the shared objects from one consistent state to another, thus providing collaborators with a consistent view of the shared objects. A CORBA-based ongoing prototype implementation is outlined. Some related work is also discussed.
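
    The vector-time mechanism mentioned above can be sketched with a small vector-clock class: each site stamps its updates with its clock, and a receiver applies a remote update only once every causally preceding update has been applied. View synchrony, tokens, and the multicast transport are omitted; the example below is a generic illustration, not the paper's protocol.

    ```python
    class Site:
        """One collaborator replica ordering shared-object updates by causality (vector time)."""

        def __init__(self, site_id, n_sites):
            self.id = site_id
            self.clock = [0] * n_sites
            self.log = []          # updates applied, in causal order

        def local_update(self, op):
            self.clock[self.id] += 1
            msg = (self.id, list(self.clock), op)
            self.log.append(op)
            return msg             # would be multicast to the other sites

        def can_apply(self, msg):
            sender, ts, _ = msg
            # Causally ready: next update from the sender, and nothing from other sites is missing.
            return (ts[sender] == self.clock[sender] + 1 and
                    all(ts[k] <= self.clock[k] for k in range(len(ts)) if k != sender))

        def apply_remote(self, msg):
            sender, ts, op = msg
            assert self.can_apply(msg), "update would violate causal order"
            self.clock[sender] = ts[sender]
            self.log.append(op)

    # Two sites editing a shared object: B must apply A's first update before the second.
    a, b = Site(0, 2), Site(1, 2)
    m1 = a.local_update("insert 'x' at 0")
    m2 = a.local_update("delete char 0")
    print(b.can_apply(m2))   # False: m1 is causally prior and not yet applied
    b.apply_remote(m1)
    b.apply_remote(m2)
    print(b.log)
    ```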

  9. Abstract models for the synthesis of optimization algorithms.

    NASA Technical Reports Server (NTRS)

    Meyer, G. G. L.; Polak, E.

    1971-01-01

    Systematic approach to the problem of synthesis of optimization algorithms. Abstract models for algorithms are developed which guide the inventive process toward 'conceptual' algorithms which may consist of operations that are inadmissible in a practical method. Once the abstract models are established a set of methods for converting 'conceptual' algorithms falling into the class defined by the abstract models into 'implementable' iterative procedures is presented.

  10. Genetic Algorithm Tuned Fuzzy Logic for Gliding Return Trajectories

    NASA Technical Reports Server (NTRS)

    Burchett, Bradley T.

    2003-01-01

    The problem of designing and flying a trajectory for successful recovery of a reusable launch vehicle is tackled using fuzzy logic control with genetic algorithm optimization. The plant is approximated by a simplified three degree of freedom non-linear model. A baseline trajectory design and guidance algorithm consisting of several Mamdani type fuzzy controllers is tuned using a simple genetic algorithm. Preliminary results show that the performance of the overall system improves with genetic algorithm tuning.

  11. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tesselating interconnection of identical processing elements. This dissertation investigates the problem of proving correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  12. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll alpha concentration (Chl alpha) and gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.

  13. Evaluation of Electroencephalography Source Localization Algorithms with Multiple Cortical Sources

    PubMed Central

    Bradley, Allison; Yao, Jun; Dewald, Jules; Richter, Claus-Peter

    2016-01-01

    Background Source localization algorithms often show multiple active cortical areas as the source of electroencephalography (EEG). Yet, there is little data quantifying the accuracy of these results. In this paper, the performance of current source density source localization algorithms for the detection of multiple cortical sources of EEG data has been characterized. Methods EEG data were generated by simulating multiple cortical sources (2–4) with the same strength or two sources with relative strength ratios of 1:1 to 4:1, and adding noise. These data were used to reconstruct the cortical sources using current source density (CSD) algorithms: sLORETA, MNLS, and LORETA using a p-norm with p equal to 1, 1.5 and 2. Precision (percentage of the reconstructed activity corresponding to simulated activity) and Recall (percentage of the simulated sources reconstructed) of each of the CSD algorithms were calculated. Results While sLORETA has the best performance when only one source is present, when two or more sources are present LORETA with p equal to 1.5 performs better. When the relative strength of one of the sources is decreased, all algorithms have more difficulty reconstructing that source. However, LORETA 1.5 continues to outperform other algorithms. If only the strongest source is of interest sLORETA is recommended, while LORETA with p equal to 1.5 is recommended if two or more of the cortical sources are of interest. These results provide guidance for choosing a CSD algorithm to locate multiple cortical sources of EEG and for interpreting the results of these algorithms. PMID:26809000
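
    The two scores defined above can be made concrete with a toy computation over discretized source indices, where overlap is plain set membership; a real evaluation would apply a spatial tolerance on the cortical mesh.

    ```python
    # Toy discretized cortical source indices.
    simulated     = {12, 47, 103}            # sources placed in the simulation
    reconstructed = {12, 47, 250, 251}       # sources reported by a CSD algorithm

    true_hits = reconstructed & simulated
    precision = len(true_hits) / len(reconstructed)   # reconstructed activity that is real
    recall    = len(true_hits) / len(simulated)       # simulated sources that were found

    print(f"precision = {precision:.2f}, recall = {recall:.2f}")
    ```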

  14. DELTA: A Distal Enhancer Locating Tool Based on AdaBoost Algorithm and Shape Features of Chromatin Modifications

    PubMed Central

    Lu, Yiming; Qu, Wubin; Shan, Guangyu; Zhang, Chenggang

    2015-01-01

    Accurate identification of DNA regulatory elements becomes an urgent need in the post-genomic era. Recent genome-wide chromatin states mapping efforts revealed that DNA elements are associated with characteristic chromatin modification signatures, based on which several approaches have been developed to predict transcriptional enhancers. However, their practical application is limited by incomplete extraction of chromatin features and model inconsistency for predicting enhancers across different cell types. To address these issues, we define a set of non-redundant shape features of histone modifications, which shows high consistency across cell types and can greatly reduce the dimensionality of feature vectors. Integrating shape features with a machine-learning algorithm AdaBoost, we developed an enhancer predicting method, DELTA (Distal Enhancer Locating Tool based on AdaBoost). We show that DELTA significantly outperforms current enhancer prediction methods in prediction accuracy on different datasets and can predict enhancers in one cell type using models trained in other cell types without loss of accuracy. Overall, our study presents a novel framework for accurately identifying enhancers from epigenetic data across multiple cell types. PMID:26091399
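
    A hedged sketch of the classification stage, assuming scikit-learn: an AdaBoostClassifier is trained on per-region shape-feature vectors labeled enhancer/non-enhancer. The features below are synthetic stand-ins with an injected signal, not DELTA's actual histone-modification shape features or training data.

    ```python
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)

    # Synthetic "shape features" of histone-modification profiles around 1000 candidate regions
    # (e.g. peak height, width, bimodality for a few marks), with a weak class signal injected.
    n_regions, n_features = 1000, 12
    X = rng.normal(0.0, 1.0, (n_regions, n_features))
    y = (X[:, 0] + 0.8 * X[:, 3] - 0.5 * X[:, 7] + rng.normal(0.0, 1.0, n_regions) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("held-out enhancer classification accuracy: %.3f" % clf.score(X_te, y_te))
    ```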

  15. Binary Bees Algorithm - bioinspiration from the foraging mechanism of honeybees to optimize a multiobjective multidimensional assignment problem

    NASA Astrophysics Data System (ADS)

    Xu, Shuo; Ji, Ze; Truong Pham, Duc; Yu, Fan

    2011-11-01

    The simultaneous mission assignment and home allocation for hospital service robots studied here is a Multidimensional Assignment Problem (MAP) with multiple objectives and multiple constraints. A population-based metaheuristic, the Binary Bees Algorithm (BBA), is proposed to optimize this NP-hard problem. Inspired by the foraging mechanism of honeybees, the BBA's most important feature is an explicit functional partitioning between global search and local search for exploration and exploitation, respectively. Its key parts consist of adaptive global search, three-step elitism selection (constraint handling, non-dominated solutions selection, and diversity preservation), and elites-centred local search within a Hamming neighbourhood. Two comparative experiments were conducted to investigate its single objective optimization, optimization effectiveness (indexed by the S-metric and C-metric) and optimization efficiency (indexed by computational burden and CPU time) in detail. The BBA outperformed its competitors in almost all the quantitative indices. Hence, the overall scheme, and particularly the search-history-adapted global search strategy, was validated.

  16. Semi-flocking algorithm for motion control of mobile sensors in large-scale surveillance systems.

    PubMed

    Semnani, Samaneh Hosseini; Basir, Otman A

    2015-01-01

    The ability of sensors to self-organize is an important asset in surveillance sensor networks. Self-organization implies self-control at the sensor level and coordination at the network level. Biologically inspired approaches have recently gained significant attention as a tool to address the issue of sensor control and coordination in sensor networks. These approaches are exemplified by two well-known algorithms, namely, the Flocking algorithm and the Anti-Flocking algorithm. Generally speaking, although these two biologically inspired algorithms have demonstrated promising performance, they expose deficiencies when it comes to their ability to maintain simultaneous robust dynamic area coverage and target coverage. These two coverage performance objectives are inherently conflicting. This paper presents Semi-Flocking, a biologically inspired algorithm that benefits from key characteristics of both the Flocking and Anti-Flocking algorithms. The Semi-Flocking algorithm approaches the problem by assigning a small flock of sensors to each target, while at the same time leaving some sensors free to explore the environment. This allows the algorithm to strike a balance between robust area coverage and target coverage. Such balance is facilitated via flock-sensor coordination. The performance of the proposed Semi-Flocking algorithm is examined and compared with the other two flocking-based algorithms, once using randomly moving targets and once using a standard walking pedestrian dataset. The results of both experiments show that the Semi-Flocking algorithm outperforms both the Flocking algorithm and the Anti-Flocking algorithm with respect to the area coverage and target coverage objectives. Furthermore, the results show that the proposed algorithm demonstrates shorter target detection time and fewer undetected targets than the other two flocking-based algorithms. PMID:25014985

  17. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  18. Joint demosaicking and zooming using moderate spectral correlation and consistent edge map

    NASA Astrophysics Data System (ADS)

    Zhou, Dengwen; Dong, Weiming; Chen, Wengang

    2014-07-01

    The recently published joint demosaicking and zooming algorithms for single-sensor digital cameras all overfit the popular Kodak test images, which have been found to have higher spectral correlation than typical color images. Their performance perhaps significantly degrades on other datasets, such as the McMaster test images, which have weak spectral correlation. A new joint demosaicking and zooming algorithm is proposed for the Bayer color filter array (CFA) pattern, in which the edge direction information (edge map) extracted from the raw CFA data is consistently used in demosaicking and zooming. It also moderately utilizes the spectral correlation between color planes. The experimental results confirm that the proposed algorithm produces an excellent performance on both the Kodak and McMaster datasets in terms of both subjective and objective measures. Our algorithm also has high computational efficiency. It provides a better tradeoff among adaptability, performance, and computational cost compared to the existing algorithms.

  19. An algorithm for robust and efficient location of T-wave ends in electrocardiograms.

    PubMed

    Zhang, Qinghua; Manriquez, Alfredo Illanes; Médigue, Claire; Papelier, Yves; Sorine, Michel

    2006-12-01

    The purpose of this paper is to propose a new algorithm for T-wave end location in electrocardiograms, mainly through the computation of an indicator related to the area covered by the T-wave curve. Based on simple assumptions, essentially on the concavity of the T-wave form, it is formally proved that the maximum of the computed indicator inside each cardiac cycle coincides with the T-wave end. Moreover, the algorithm is robust to acquisition noise, waveform morphological variations and baseline wander. It is also computationally very simple: the main computation can be implemented as a simple finite impulse response filter. When evaluated with the PhysioNet QT database in terms of the mean and the standard deviation of the T-wave end location errors, the proposed algorithm outperforms the other algorithms evaluated with the same database, according to the most recent publications available, to the best of our knowledge. PMID:17153212
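
    One plausible form of such an area indicator, consistent with the description above but not necessarily the paper's exact definition, accumulates the difference between the signal and its current value over a trailing window; the moving sum is a short FIR (moving-sum) filter, and the indicator's maximum within the beat marks the T-wave end. The toy beat, sampling rate, and window length below are assumptions.

    ```python
    import numpy as np

    def t_end_indicator(signal, window):
        """A(t) = sum_{u=t-window+1..t} (s(u) - s(t)) = movsum(s)(t) - window * s(t)."""
        kernel = np.ones(window)
        movsum = np.convolve(signal, kernel, mode="full")[:len(signal)]  # causal moving sum
        return movsum - window * signal

    # Toy beat: a smooth T wave (Gaussian bump) followed by a flat baseline, sampled at 250 Hz.
    fs = 250
    t = np.arange(0, 0.6, 1.0 / fs)
    t_wave = np.exp(-0.5 * ((t - 0.2) / 0.05) ** 2)          # T peak at 0.20 s
    ecg = t_wave + 0.01 * np.random.default_rng(8).normal(size=t.size)

    A = t_end_indicator(ecg, window=int(0.12 * fs))
    t_end = t[np.argmax(A)]                                   # falls on the terminal slope of the bump
    print("estimated T-wave end: %.3f s" % t_end)
    ```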

  20. Research on B Cell Algorithm for Learning to Rank Method Based on Parallel Strategy.

    PubMed

    Tian, Yuling; Zhang, Hongxian

    2016-01-01

    For the purposes of information retrieval, users must find highly relevant documents from within a system (often a quite large one comprised of many individual documents) based on an input query. Ranking the documents according to their relevance to the user's needs is a challenging endeavor and a hot research topic; several rank-learning methods based on machine learning techniques already exist that can generate ranking functions automatically. This paper proposes a parallel B cell algorithm, RankBCA, for rank learning, which utilizes a clonal selection mechanism based on biological immunity. The novel algorithm is compared with traditional rank-learning algorithms through experimentation and shown to outperform them with respect to accuracy, learning time, and convergence rate; taken together, the experimental results show that the proposed algorithm effectively and rapidly identifies optimal ranking functions. PMID:27487242

  1. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE PAGESBeta

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  2. Research on B Cell Algorithm for Learning to Rank Method Based on Parallel Strategy

    PubMed Central

    Tian, Yuling; Zhang, Hongxian

    2016-01-01

    For the purposes of information retrieval, users must find highly relevant documents from within a system (often a quite large one comprised of many individual documents) based on an input query. Ranking the documents according to their relevance to the user's needs is a challenging endeavor and a hot research topic; several rank-learning methods based on machine learning techniques already exist that can generate ranking functions automatically. This paper proposes a parallel B cell algorithm, RankBCA, for rank learning, which utilizes a clonal selection mechanism based on biological immunity. The novel algorithm is compared with traditional rank-learning algorithms through experimentation and shown to outperform them with respect to accuracy, learning time, and convergence rate; taken together, the experimental results show that the proposed algorithm effectively and rapidly identifies optimal ranking functions. PMID:27487242

  3. Combining algorithms in automatic detection of QRS complexes in ECG signals.

    PubMed

    Meyer, Carsten; Fernández Gavela, José; Harris, Matthew

    2006-07-01

    QRS complex and specifically R-peak detection is the crucial first step in every automatic electrocardiogram analysis. Much work has been carried out in this field, using various methods ranging from filtering and threshold methods, through wavelet methods, to neural networks and others. Performance is generally good, but each method has situations where it fails. In this paper, we suggest an approach to automatically combine different QRS complex detection algorithms, here the Pan-Tompkins and wavelet algorithms, to benefit from the strengths of both methods. In particular, we introduce parameters that allow the contributions of the individual algorithms to be balanced; these parameters are estimated in a data-driven way. Experimental results and analysis are provided on the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) Arrhythmia Database. We show that our combination approach outperforms both individual algorithms. PMID:16871713
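
    The abstract does not spell out the combination rule; the sketch below shows one simple way such a parameterized fusion could look: each detector contributes a weight, and a candidate R peak is accepted when the weighted agreement exceeds a threshold. The weights, tolerance, and threshold here are illustrative; in the paper the balancing parameters are estimated from data.

    ```python
    import numpy as np

    def fuse_qrs_detections(peaks_a, peaks_b, fs, w_a=0.5, w_b=0.5,
                            tol_s=0.10, threshold=0.5):
        """Keep a candidate peak from detector A (e.g. Pan-Tompkins) when the
        weighted agreement with detector B (e.g. a wavelet detector) reaches the
        threshold.  One-directional sketch: peaks found only by B are ignored."""
        tol = int(tol_s * fs)
        peaks_b = np.asarray(peaks_b)
        fused = []
        for p in peaks_a:
            agrees = peaks_b.size > 0 and np.any(np.abs(peaks_b - p) <= tol)
            score = w_a + (w_b if agrees else 0.0)
            if score >= threshold:
                fused.append(p)
        return fused
    ```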

  4. Optimizing the Shunting Schedule of Electric Multiple Units Depot Using an Enhanced Particle Swarm Optimization Algorithm

    PubMed Central

    Jin, Junchen

    2016-01-01

    The shunting schedule of an electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computation. The objective of the model is to minimize the number of shunting movements, and the constraints include track occupation conflicts, shunting route conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and the EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm in terms of optimality. PMID:27436998

  5. Optimizing the Shunting Schedule of Electric Multiple Units Depot Using an Enhanced Particle Swarm Optimization Algorithm.

    PubMed

    Wang, Jiaxi; Lin, Boliang; Jin, Junchen

    2016-01-01

    The shunting schedule of an electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computation. The objective of the model is to minimize the number of shunting movements, and the constraints include track occupation conflicts, shunting route conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and the EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm in terms of optimality. PMID:27436998
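
    The EPSO enhancements are not detailed in the abstract, so the snippet below only sketches a plain binary PSO baseline for a 0-1 model of this kind: particles encode 0/1 decision vectors, and a user-supplied cost function returns the number of shunting movements plus penalties for violated constraints. All parameter values and the sigmoid-sampling update are generic assumptions.

    ```python
    import numpy as np

    def binary_pso(cost, n_vars, n_particles=30, iters=200,
                   w=0.7, c1=1.5, c2=1.5, seed=0):
        """Generic binary PSO: `cost` maps a 0/1 vector to a penalised objective.
        A baseline sketch, not the enhanced EPSO of the paper."""
        rng = np.random.default_rng(seed)
        x = rng.integers(0, 2, size=(n_particles, n_vars)).astype(float)
        v = rng.normal(0.0, 1.0, size=(n_particles, n_vars))
        pbest = x.copy()
        pbest_val = np.array([cost(p) for p in x])
        gbest = pbest[np.argmin(pbest_val)].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = (rng.random(x.shape) < 1.0 / (1.0 + np.exp(-v))).astype(float)  # sigmoid sampling
            vals = np.array([cost(p) for p in x])
            better = vals < pbest_val
            pbest[better], pbest_val[better] = x[better], vals[better]
            gbest = pbest[np.argmin(pbest_val)].copy()
        return gbest, float(pbest_val.min())
    ```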

  6. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    NASA Astrophysics Data System (ADS)

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-05-01

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. Subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  7. Design and Implementation of Broadcast Algorithms for Extreme-Scale Systems

    SciTech Connect

    Shamis, Pavel; Graham, Richard L; Gorentla Venkata, Manjunath; Ladd, Joshua

    2011-01-01

    The scalability and performance of collective communication operations limit the scalability and performance of many scientific applications. This paper presents two new blocking and nonblocking Broadcast algorithms for communicators with arbitrary communication topology, and studies their performance. These algorithms benefit from increased concurrency and a reduced memory footprint, making them suitable for use on large-scale systems. Measuring small, medium, and large data Broadcasts on a Cray-XT5, using 24,576 MPI processes, the Cheetah algorithms outperform the native MPI on that system by 51%, 69%, and 9%, respectively, at the same process count. These results demonstrate an algorithmic approach to the implementation of the important class of collective communications, which is high performing, scalable, and also uses resources in a scalable manner.

  8. A swarm intelligence based memetic algorithm for task allocation in distributed systems

    NASA Astrophysics Data System (ADS)

    Sarvizadeh, Raheleh; Haghi Kashani, Mostafa

    2011-12-01

    This paper proposes a swarm-intelligence-based memetic algorithm for task allocation and scheduling in distributed systems. Task scheduling in distributed systems is known to be an NP-complete problem. Hence, many genetic algorithms have been proposed to search the entire solution space for optimal solutions. However, these existing approaches scan the entire solution space without applying techniques that could reduce the complexity of the optimization, and their main shortcoming is the excessive time spent on scheduling. Therefore, in this paper a memetic algorithm is used to address this shortcoming. To balance the load efficiently, Bee Colony Optimization (BCO) is applied as the local search in the proposed memetic algorithm. Extensive experimental results demonstrate that the proposed method outperforms the existing GA-based method in terms of CPU utilization.

  9. A swarm intelligence based memetic algorithm for task allocation in distributed systems

    NASA Astrophysics Data System (ADS)

    Sarvizadeh, Raheleh; Haghi Kashani, Mostafa

    2012-01-01

    This paper proposes a swarm-intelligence-based memetic algorithm for task allocation and scheduling in distributed systems. Task scheduling in distributed systems is known to be an NP-complete problem. Hence, many genetic algorithms have been proposed to search the entire solution space for optimal solutions. However, these existing approaches scan the entire solution space without applying techniques that could reduce the complexity of the optimization, and their main shortcoming is the excessive time spent on scheduling. Therefore, in this paper a memetic algorithm is used to address this shortcoming. To balance the load efficiently, Bee Colony Optimization (BCO) is applied as the local search in the proposed memetic algorithm. Extensive experimental results demonstrate that the proposed method outperforms the existing GA-based method in terms of CPU utilization.
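
    As a rough illustration of the memetic structure described above (a population-based global search with a local refinement step), the sketch below uses a plain GA with a greedy one-move local search standing in for the BCO local search; the cost function, operators, and parameter values are assumptions for illustration only.

    ```python
    import random

    def memetic_task_allocation(n_tasks, n_procs, cost, pop_size=40, gens=100,
                                mut_rate=0.1, local_trials=5):
        """Assign tasks to processors: GA global search plus a simple local
        improvement step (a stand-in for the paper's BCO local search).
        `cost` maps an assignment list to a load-balance objective to minimise."""
        def local_search(ind):
            best = ind[:]
            for _ in range(local_trials):            # try a few neighbouring assignments
                cand = best[:]
                cand[random.randrange(n_tasks)] = random.randrange(n_procs)
                if cost(cand) < cost(best):
                    best = cand
            return best

        pop = [[random.randrange(n_procs) for _ in range(n_tasks)] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=cost)
            parents = pop[:pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_tasks)
                child = a[:cut] + b[cut:]            # one-point crossover
                if random.random() < mut_rate:
                    child[random.randrange(n_tasks)] = random.randrange(n_procs)
                children.append(local_search(child)) # memetic refinement
            pop = parents + children
        return min(pop, key=cost)
    ```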

  10. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have each used their own metrics and block types, which makes it difficult to compare the algorithms' strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that, given the different approaches that block subdivision algorithms take, it is likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  11. Synaptic dynamics: linear model and adaptation algorithm.

    PubMed

    Yousefi, Ali; Dibazar, Alireza A; Berger, Theodore W

    2014-08-01

    In this research, temporal processing in brain neural circuitries is addressed by a dynamic model of synaptic connections in which the synapse model accounts for both pre- and post-synaptic processes determining its temporal dynamics and strength. Neurons, which are excited by the post-synaptic potentials of hundreds of synapses, build the computational engine capable of processing dynamic neural stimuli. Temporal dynamics in neural models with dynamic synapses are analyzed, and learning algorithms for synaptic adaptation of neural networks with hundreds of synaptic connections are proposed. The paper starts by introducing a linear approximate model for the temporal dynamics of synaptic transmission. The proposed linear model substantially simplifies the analysis and training of spiking neural networks. Furthermore, it is capable of replicating the synaptic response of the non-linear facilitation-depression model with an accuracy better than 92.5%. In the second part of the paper, a supervised spike-in-spike-out learning rule for synaptic adaptation in dynamic synapse neural networks (DSNN) is proposed. The proposed learning rule is a biologically plausible process, and it is capable of simultaneously adjusting both pre- and post-synaptic components of individual synapses. The last section of the paper starts by presenting a rigorous analysis of the learning algorithm in a system identification task with hundreds of synaptic connections, which confirms the learning algorithm's accuracy, repeatability and scalability. The DSNN is utilized to predict the spiking activity of cortical neurons and to perform pattern recognition tasks. The DSNN model is demonstrated to be a generative model capable of producing different cortical neuron spiking patterns and recordings of CA1 pyramidal neurons. A single-layer DSNN classifier on a benchmark pattern recognition task outperforms a 2-layer neural network and GMM classifiers while having fewer free parameters and

  12. On the initial state and consistency relations

    SciTech Connect

    Berezhiani, Lasha; Khoury, Justin E-mail: jkhoury@sas.upenn.edu

    2014-09-01

    We study the effect of the initial state on the consistency conditions for adiabatic perturbations. In order to be consistent with the constraints of General Relativity, the initial state must be diffeomorphism invariant. As a result, we show that the initial wavefunctional/density matrix has to satisfy a Slavnov-Taylor identity similar to that of the action. We then investigate the precise ways in which modified initial states can lead to violations of the consistency relations. We find two independent sources of violations: i) the state can include initial non-Gaussianities; ii) even if the initial state is Gaussian, such as a Bogoliubov state, the modified 2-point function can alter the q → 0 analyticity properties of the vertex functional and result in violations of the consistency relations.

  13. Ensuring the Consistency of Silicide Coatings

    NASA Technical Reports Server (NTRS)

    Ramani, V.; Lampson, F. K.

    1982-01-01

    Diagram specifies optimum fusing time for given thicknesses of refractory metal-silicide coatings on columbium C-103 substrates. Adherence to indicated fusion times ensures consistent coatings and avoids underdiffusion and overdiffusion. Accuracy of diagram has been confirmed by tests.

  14. Consistent stabilizability of switched Boolean networks.

    PubMed

    Li, Haitao; Wang, Yuzhen

    2013-10-01

    This paper investigates the consistent stabilizability of switched Boolean networks (SBNs) by using the semi-tensor product method, and presents a number of new results. First, an algebraic expression of SBNs is obtained by the semi-tensor product, based on which the consistent stabilizability is then studied for SBNs and some necessary and sufficient conditions are presented for the design of free-form and state-feedback switching signals, respectively. Finally, the consistent stabilizability of SBNs with state constraints is considered and some necessary and sufficient conditions are proposed. The study of illustrative examples shows that the new results obtained in this paper are very effective in designing switching signals for the consistent stabilizability of SBNs. PMID:23787170

  15. Consistency relations for non-Gaussianity

    NASA Astrophysics Data System (ADS)

    Li, Miao; Wang, Yi

    2008-09-01

    We investigate consistency relations for non-Gaussianity. We provide a model-independent dynamical proof for the consistency relation for three-point correlation functions from the Hamiltonian and field redefinition. This relation can be applied to single-field inflation, multi-field inflation and the curvaton scenario. This relation can also be generalized to n-point correlation functions up to arbitrary order in perturbation theory and with arbitrary number of loops.

  16. Smooth transitions between bump rendering algorithms

    SciTech Connect

    Becker, B. G.; Max, N. L.

    1993-01-04

    A method is described for switching smoothly between rendering algorithms as required by the amount of visible surface detail. The result is more realism with less computation for displaying objects whose surface detail can be described by one or more bump maps. The three rendering algorithms considered are the bidirectional reflection distribution function (BRDF), bump mapping, and displacement mapping. The bump mapping has been modified to make it consistent with the other two. For a given viewpoint, one of these algorithms will show a better trade-off between quality, computation time, and aliasing than the other two. Thus, it needs to be determined for any given viewpoint which regions of the object(s) will be rendered with each algorithm. The decision as to which algorithm is appropriate is a function of distance, viewing angle, and the frequency of bumps in the bump map.

  17. Enhanced probability-selection artificial bee colony algorithm for economic load dispatch: A comprehensive analysis

    NASA Astrophysics Data System (ADS)

    Ghani Abro, Abdul; Mohamad-Saleh, Junita

    2014-10-01

    The prime motive of economic load dispatch (ELD) is to optimize the production cost of electrical power generation through appropriate division of the load demand among online generating units. Bio-inspired optimization algorithms have outperformed classical techniques for optimizing the production cost. The probability-selection artificial bee colony (PS-ABC) algorithm is a recently proposed variant of the ABC optimization algorithm. PS-ABC generates optimal solutions using three different mutation equations simultaneously, and results have shown improved performance of PS-ABC over the ABC algorithm. Nevertheless, all the mutation equations of PS-ABC are excessively self-reinforced, and hence PS-ABC is prone to premature convergence. Therefore, this work replaces the mutation equations and improves the scout-bee stage of PS-ABC to enhance the algorithm's performance. The proposed algorithm has been compared with many ABC variants and numerous other optimization algorithms on benchmark functions and ELD test cases. The adapted ELD test cases comprise transmission losses, the multiple-fuel effect, the valve-point effect and toxic gas emission constraints. The results reveal that, among the compared algorithms, the proposed algorithm has the best capability to yield the optimal solution for the problem.
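
    For context, the snippet below shows only the textbook ABC neighbour-search rule, v_j = x_j + phi * (x_j - x_kj), which is the kind of mutation equation that PS-ABC uses three variants of and that the proposed algorithm replaces; the variable names and bounds handling are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def abc_candidate(x, population, lower, upper):
        """Textbook ABC neighbour search: perturb one randomly chosen dimension of a
        food source x towards/away from a random partner solution.  Sketch only;
        PS-ABC and the proposed variant use different mutation equations."""
        v = x.copy()
        j = rng.integers(x.size)                 # random dimension
        k = rng.integers(len(population))        # random partner food source
        phi = rng.uniform(-1.0, 1.0)
        v[j] = x[j] + phi * (x[j] - population[k][j])
        return np.clip(v, lower, upper)          # respect generator limits
    ```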

  18. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors.

    PubMed

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-01-01

    In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified as the comparison of the two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness by the proposed algorithm are better than those by the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms. PMID:26198233

  19. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors

    PubMed Central

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-01-01

    In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified as the comparison of the two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness by the proposed algorithm are better than those by the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms. PMID:26198233

  20. Integrating perspectives on vocal performance and consistency

    PubMed Central

    Sakata, Jon T.; Vehrencamp, Sandra L.

    2012-01-01

    SUMMARY Recent experiments in divergent fields of birdsong have revealed that vocal performance is important for reproductive success and under active control by distinct neural circuits. Vocal consistency, the degree to which the spectral properties (e.g. dominant or fundamental frequency) of song elements are produced consistently from rendition to rendition, has been highlighted as a biologically important aspect of vocal performance. Here, we synthesize functional, developmental and mechanistic (neurophysiological) perspectives to generate an integrated understanding of this facet of vocal performance. Behavioral studies in the field and laboratory have found that vocal consistency is affected by social context, season and development, and, moreover, positively correlated with reproductive success. Mechanistic investigations have revealed a contribution of forebrain and basal ganglia circuits and sex steroid hormones to the control of vocal consistency. Across behavioral, developmental and mechanistic studies, a convergent theme regarding the importance of vocal practice in juvenile and adult songbirds emerges, providing a basis for linking these levels of analysis. By understanding vocal consistency at these levels, we gain an appreciation for the various dimensions of song control and plasticity and argue that genes regulating the function of basal ganglia circuits and sex steroid hormones could be sculpted by sexual selection. PMID:22189763

  1. A simple way to improve path consistency processing in interval algebra networks

    SciTech Connect

    Bessiere, C.

    1996-12-31

    Reasoning about qualitative temporal information is essential in many artificial intelligence problems. In particular, many tasks can be solved using the interval-based temporal algebra introduced by Allen (All83). In this framework, one of the main tasks is to compute the transitive closure of a network of relations between intervals (also called path consistency in a CSP-like terminology). Almost all previous path consistency algorithms proposed in the temporal reasoning literature were based on the constraint reasoning algorithms PC-1 and PC-2 (Mac77). In this paper, we first show that the most efficient of these algorithms is the one which stays the closest to PC-2. We then propose a new algorithm, using the idea "one support is sufficient" (as AC-3 (Mac77) does for arc consistency in constraint networks). To apply this idea, we simply changed the way composition-intersection of relations was achieved during the path consistency process in previous algorithms.

  2. A beam hardening correction method based on HL consistency

    NASA Astrophysics Data System (ADS)

    Mou, Xuanqin; Tang, Shaojie; Yu, Hengyong

    2006-08-01

    X-ray CT (XCT) with a polychromatic tube spectrum produces artifacts known as the beam hardening effect. The correction currently implemented in CT devices is carried out with an a priori polynomial obtained from water phantom experiments. This paper proposes a new beam hardening correction algorithm in which the correction polynomial is derived from the consistency of the projection data across angles, as expressed by the Helgason-Ludwig consistency condition (HL consistency). Firstly, a bi-polynomial is constructed to characterize the beam hardening effect based on the physical model of medical x-ray imaging. In this bi-polynomial, a factor r(γ,β) represents the ratio of the attenuation contributions of high-density material (bone, etc.) to low-density material (muscle, vessel, blood, soft tissue, fat, etc.) at projection angle β and fan angle γ. Secondly, letting r(γ,β)=0 degrades the bi-polynomial to a single polynomial whose coefficients can be calculated from HL consistency; this primary correction is already more efficient, in theory, than the correction method used in current CT devices. Thirdly, r(γ,β) is estimated from a normal CT reconstruction of the corrected projection data. Fourthly, the coefficients of the bi-polynomial are calculated, again based on HL consistency, to achieve the final correction. Experiments on circular cone-beam CT show that the method performs excellently. Correcting the beam hardening effect based on HL consistency not only achieves a self-adaptive and more precise correction, but also removes the need for inconvenient routine water phantom experiments, and could renew the correction technique of current CT devices.
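
    The full HL-consistency fit is beyond the abstract; the sketch below shows a simplified stand-in: a cubic correction polynomial whose coefficients are chosen so that the total attenuation per view is as angle-independent as possible (the zeroth-order consistency condition for parallel-beam data). The polynomial degree, the parallel-beam assumption, and the least-squares formulation are assumptions for illustration.

    ```python
    import numpy as np

    def fit_bh_polynomial(proj):
        """Fit p_corrected = p + a2*p**2 + a3*p**3 so that the per-view total
        attenuation is (approximately) constant across view angles.
        `proj` has shape (n_views, n_detectors)."""
        m1 = proj.sum(axis=1)
        m2 = (proj ** 2).sum(axis=1)
        m3 = (proj ** 3).sum(axis=1)
        # Least-squares solve of  m2*a2 + m3*a3 - c = -m1  for (a2, a3, c).
        A = np.column_stack([m2, m3, -np.ones_like(m1)])
        (a2, a3, _), *_ = np.linalg.lstsq(A, -m1, rcond=None)
        return a2, a3

    def correct_projections(proj, a2, a3):
        """Apply the fitted beam-hardening correction polynomial."""
        return proj + a2 * proj ** 2 + a3 * proj ** 3
    ```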

  3. Accurate and consistent automatic seismocardiogram annotation without concurrent ECG.

    PubMed

    Laurin, A; Khosrow-Khavar, F; Blaber, A P; Tavakolian, Kouhyar

    2016-09-01

    Seismocardiography (SCG) is the measurement of vibrations in the sternum caused by the beating of the heart. Precise cardiac mechanical timings that are easily obtained from SCG are critically dependent on accurate identification of fiducial points. So far, SCG annotation has relied on concurrent ECG measurements. An algorithm capable of annotating SCG without the use of any other concurrent measurement was designed. We subjected 18 participants to graded lower body negative pressure. We collected ECG and SCG, obtained R peaks from the former, and annotated the latter by hand, using these identified peaks. We also annotated the SCG automatically. We compared the isovolumic moment timings obtained by hand to those obtained using our algorithm. Mean ± confidence interval of the percentage of accurately annotated cardiac cycles were [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] for levels of negative pressure 0, -20, -30, -40, and -50 mmHg. LF/HF ratios, the relative power of low-frequency variations to high-frequency variations in heart beat intervals, obtained from isovolumic moments were also compared to those obtained from R peaks. The mean differences ± confidence interval were [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] for increasing levels of negative pressure. The accuracy and consistency of the algorithm enable the use of SCG as a stand-alone heart monitoring tool in healthy individuals at rest, and could serve as a basis for an eventual application in pathological cases. PMID:27510446

  4. Wide baseline stereo matching based on double topological relationship consistency

    NASA Astrophysics Data System (ADS)

    Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang

    2009-07-01

    Stereo matching is one of the most important branches of computer vision. In this paper, an algorithm is proposed for wide-baseline stereo matching based on a novel scheme called double topological relationship consistency (DCTR). The combined double topological configuration includes the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only sets up a more advanced matching model, but also discards mismatches by iteratively computing the fitness of the feature matches, and it overcomes many problems of traditional methods by providing strong invariance to changes in scale, rotation or illumination across large view changes and even occlusions. Experimental examples are shown where the two cameras have been placed in very different orientations. In addition, the epipolar geometry can be recovered using RANSAC, possibly the most widely adopted method by far. With this method, we obtain correspondences with high precision in wide-baseline matching problems. Finally, the effectiveness and reliability of the method are demonstrated in wide-baseline experiments on the image pairs.

  5. Consistency and derangements in brane tilings

    NASA Astrophysics Data System (ADS)

    Hanany, Amihay; Jejjala, Vishnu; Ramgoolam, Sanjaye; Seong, Rak-Kyeong

    2016-09-01

    Brane tilings describe Lagrangians (vector multiplets, chiral multiplets, and the superpotential) of four-dimensional N = 1 supersymmetric gauge theories. These theories, written in terms of a bipartite graph on a torus, correspond to worldvolume theories on N D3-branes probing a toric Calabi–Yau threefold singularity. A pair of permutations compactly encapsulates the data necessary to specify a brane tiling. We show that geometric consistency for brane tilings, which ensures that the corresponding quantum field theories are well behaved, imposes constraints on the pair of permutations, restricting certain products constructed from the pair to have no one-cycles. Permutations without one-cycles are known as derangements. We illustrate this formulation of consistency with known brane tilings. Counting formulas for consistent brane tilings with an arbitrary number of chiral bifundamental fields are written down in terms of delta functions over symmetric groups.

  6. Quantifying the Consistency of Scientific Databases

    PubMed Central

    Šubelj, Lovro; Bajec, Marko; Mileva Boshkoska, Biljana; Kastrin, Andrej; Levnajić, Zoran

    2015-01-01

    Science is a social process with far-reaching impact on our modern society. In recent years, for the first time, we are able to study science itself scientifically. This is enabled by the massive amounts of data on scientific publications that are increasingly becoming available. The data are contained in several databases, such as Web of Science or PubMed, maintained by various public and private entities. Unfortunately, these databases are not always consistent, which considerably hinders this study. Relying on the powerful framework of complex networks, we conduct a systematic analysis of the consistency among six major scientific databases. We found that identifying a single "best" database is far from easy. Nevertheless, our results indicate appreciable differences in the mutual consistency of different databases, which we interpret as recipes for future bibliometric studies. PMID:25984946

  7. Anticholinergic substances: A single consistent conformation

    PubMed Central

    Pauling, Peter; Datta, Narayandas

    1980-01-01

    An interactive computer-graphics analysis of 24 antagonists of acetylcholine at peripheral autonomic post-ganglionic (muscarinic) nervous junctions and at similar junctions in the central nervous system, the crystal structures of which are known, has led to the determination of a single, consistent, energetically favorable conformation for all 24 substances, although their observed crystal structure conformations vary widely. The absolute configuration and the single, consistent (ideal) conformation of the chemical groups required for maximum anticholinergic activity are described quantitatively. PMID:16592775

  8. Accuracy and consistency of modern elastomeric pumps.

    PubMed

    Weisman, Robyn S; Missair, Andres; Pham, Phung; Gutierrez, Juan F; Gebhard, Ralf E

    2014-01-01

    Continuous peripheral nerve blockade has become a popular method of achieving postoperative analgesia for many surgical procedures. The safety and reliability of infusion pumps are dependent on their flow rate accuracy and consistency. Knowledge of pump rate profiles can help physicians determine which infusion pump is best suited for their clinical applications and specific patient population. Several studies have investigated the accuracy of portable infusion pumps. Using methodology similar to that used by Ilfeld et al, we investigated the accuracy and consistency of several current elastomeric pumps. PMID:25140510

  9. Dynamically consistent Jacobian inverse for mobile manipulators

    NASA Astrophysics Data System (ADS)

    Ratajczak, Joanna; Tchoń, Krzysztof

    2016-06-01

    By analogy to the definition of the dynamically consistent Jacobian inverse for robotic manipulators, we have designed a dynamically consistent Jacobian inverse for mobile manipulators built of a non-holonomic mobile platform and a holonomic on-board manipulator. The endogenous configuration space approach has been exploited as a source of conceptual guidelines. The new inverse guarantees a decoupling of the motion in the operational space from the forces exerted in the endogenous configuration space and annihilated by the dual Jacobian inverse. A performance study of the new Jacobian inverse as a tool for motion planning is presented.

  10. Dualising consistent IIA/IIB truncations

    NASA Astrophysics Data System (ADS)

    Malek, Emanuel; Samtleben, Henning

    2015-12-01

    We use exceptional field theory to establish a duality between certain consistent 7-dimensional truncations with maximal SUSY from IIA to IIB. We use this technique to obtain new consistent truncations of IIB on S^3 and H^{p,q} and work out the explicit reduction formulas in the internal sector. We also present uplifts for other gaugings of 7-d maximal SUGRA, including theories with a trombone gauging. Some of the latter can only be obtained by a non-geometric compactification.

  11. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks

    PubMed Central

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike-emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces the training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computational capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. In the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes the normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional multi-layer SNN algorithms in terms of learning efficiency and parameter sensitivity, as also demonstrated by the comprehensive experimental results in this paper. PMID:27044001

  12. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks.

    PubMed

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike-emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces the training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computational capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. In the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes the normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional multi-layer SNN algorithms in terms of learning efficiency and parameter sensitivity, as also demonstrated by the comprehensive experimental results in this paper. PMID:27044001

  13. A spectral canonical electrostatic algorithm

    NASA Astrophysics Data System (ADS)

    Webb, Stephen D.

    2016-03-01

    Studying single-particle dynamics over many periods of oscillations is a well-understood problem solved using symplectic integration. Such integration schemes derive their update sequence from an approximate Hamiltonian, guaranteeing that the geometric structure of the underlying problem is preserved. Simulating a self-consistent system over many oscillations can introduce numerical artifacts such as grid heating. This unphysical heating stems from using non-symplectic methods on Hamiltonian systems. With this guidance, we derive an electrostatic algorithm using a discrete form of Hamilton's principle. The resulting algorithm, a gridless spectral electrostatic macroparticle model, does not exhibit the unphysical heating typical of most particle-in-cell methods. We present results of this algorithm using a two-body problem as an example of its energy- and momentum-conserving properties.

  14. Optimisation algorithms for microarray biclustering.

    PubMed

    Perrin, Dimitri; Duhamel, Christophe

    2013-01-01

    In providing simultaneous information on expression profiles for thousands of genes, microarray technologies have, in recent years, been largely used to investigate mechanisms of gene expression. Clustering and classification of such data can, indeed, highlight patterns and provide insight on biological processes. A common approach is to consider the genes and samples of microarray datasets as nodes in a bipartite graph, where edges are weighted, e.g., based on the expression levels. In this paper, using a previously evaluated weighting scheme, we focus on search algorithms and evaluate, in the context of biclustering, several variations of genetic algorithms. We also introduce a new heuristic, "Propagate", which consists in recursively evaluating neighbour solutions with one more or one fewer active condition. The results obtained on three well-known datasets show that, for a given weighting scheme, optimal or near-optimal solutions can be identified. PMID:24109756
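
    As a small illustration of the "Propagate" idea mentioned above, the sketch below hill-climbs over sets of active conditions, repeatedly evaluating neighbours that have one condition more or one fewer and moving while the score improves. The scoring function is left abstract (the paper's weighting scheme is not reproduced here), and the iterative rather than recursive formulation is an assumption.

    ```python
    def propagate(initial_conditions, all_conditions, score):
        """Greedy neighbourhood search over sets of active conditions.
        `score` evaluates a set of active conditions (higher is better)."""
        current = set(initial_conditions)
        best = score(current)
        improved = True
        while improved:
            improved = False
            neighbours = [current | {c} for c in all_conditions if c not in current]
            neighbours += [current - {c} for c in current if len(current) > 1]
            for cand in neighbours:             # one more or one fewer active condition
                s = score(cand)
                if s > best:
                    current, best, improved = cand, s, True
                    break
        return current, best
    ```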

  15. MUSIC algorithms for rebar detection

    NASA Astrophysics Data System (ADS)

    Solimene, Raffaele; Leone, Giovanni; Dell'Aversano, Angela

    2013-12-01

    The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting data is of a relatively high level. To overcome this drawback, here a new technique is proposed, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve drastically detection performance in realistic scenarios.
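
    The two-stage strong/weak-scatterer scheme is the paper's contribution; the snippet below only sketches the textbook MUSIC pseudospectrum on which it builds, using the noise subspace of the data covariance matrix. The steering-vector construction (for the scattering problem it would be built from discretised Green's functions) and the names are assumptions.

    ```python
    import numpy as np

    def music_pseudospectrum(R, steering, n_scatterers):
        """Textbook MUSIC: project trial steering vectors onto the noise subspace of
        the (Hermitian) data covariance matrix R.  `steering` has shape
        (n_sensors, n_candidates).  Peaks of the returned spectrum indicate
        candidate scatterer locations."""
        eigvals, eigvecs = np.linalg.eigh(R)            # eigenvalues in ascending order
        noise_subspace = eigvecs[:, :-n_scatterers]     # vectors of the smallest eigenvalues
        proj = noise_subspace.conj().T @ steering
        return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)

    # In a two-stage scheme, the strong scatterers located first would be accounted
    # for before re-running the search so that the weak scatterers are not masked.
    ```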

  16. Mental Tectonics - Rendering Consistent μMaps

    NASA Astrophysics Data System (ADS)

    Schmid, Falko

    The visualization of spatial information for wayfinding assistance requires a substantial amount of display area. Depending on the particular route, even large screens can be insufficient to visualize all information at once and at a scale at which users can understand the specific course of the route and its spatial context. Personalized wayfinding maps, such as μMaps, are a possible solution for small displays: they explicitly consider a user's prior knowledge of the environment and tailor maps toward it. The resulting schematic maps require substantially less space due to the knowledge-based visual reduction of information. In this paper we extend and improve the underlying algorithms of μMaps to enable efficient handling of fragmented user profiles as well as the mapping of fragmented maps. Furthermore, we introduce the concept of mental tectonics, a process that harmonizes mental conceptual spatial representations with the entities of a geographic frame of reference.

  17. Clutter discrimination algorithm simulation in pulse laser radar imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; Su, Xuan; Zhu, Fule

    2015-10-01

    Pulse laser radar imaging performance is greatly influenced by different kinds of clutter, and various algorithms have been developed to mitigate it. However, estimating the performance of a new algorithm is difficult. Here, a simulation model for evaluating clutter discrimination algorithms is presented. The model consists of laser pulse emission, clutter jamming, laser pulse reception and target image production. Additionally, a hardware platform is set up to gather clutter data reflected from the ground and trees, and the logged data serve as the clutter jamming input to the simulation model. The hardware platform includes a laser diode, a laser detector and a high-sample-rate data logging circuit. The laser diode transmits short laser pulses (40 ns FWHM) at a 12.5 kHz pulse rate and at a 905 nm wavelength. An analog-to-digital converter chip integrated in the sampling circuit works at 250 megasamples per second. The simulation model and the hardware platform together form a clutter discrimination algorithm simulation system. Using this system, after analyzing the logged clutter data, a new compound pulse detection algorithm was developed that combines a matched filter algorithm with constant fraction discrimination (CFD). Firstly, the laser echo pulse signal is processed by the matched filter; the CFD algorithm is then applied to the filtered output. Finally, clutter jamming from the ground and trees is discriminated and the target image is produced. Laser radar images were simulated using the CFD algorithm, the matched filter algorithm and the new algorithm, respectively. The simulation results demonstrate that the new algorithm is the most effective at mitigating clutter reflected from the ground and trees.
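
    As a rough sketch of the compound detector described above, the snippet below correlates the echo with the emitted pulse shape (matched filter) and then applies constant fraction discrimination to time-stamp the returns; the fraction, delay, and normalisation are illustrative assumptions rather than the paper's settings.

    ```python
    import numpy as np

    def matched_filter(echo, template):
        """Correlate the received waveform with the emitted pulse shape."""
        t = (template - template.mean()) / np.linalg.norm(template)
        return np.correlate(echo, t, mode='same')

    def cfd_crossings(pulse, fraction=0.5, delay=5):
        """Constant fraction discrimination: detection instants are the zero
        crossings of the delayed signal minus an attenuated copy, which are
        largely independent of pulse amplitude."""
        cfd = np.roll(pulse, delay) - fraction * pulse
        return np.where((cfd[:-1] < 0.0) & (cfd[1:] >= 0.0))[0]

    # Compound detection sketch: detections = cfd_crossings(matched_filter(echo, tx_pulse)),
    # so only returns matching the emitted pulse shape survive the clutter.
    ```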

  18. Local, smooth, and consistent Jacobi set simplification

    SciTech Connect

    Bhatia, Harsh; Wang, Bei; Norgard, Gregory; Pascucci, Valerio; Bremer, Peer -Timo

    2014-10-31

    The relation between two Morse functions defined on a smooth, compact, and orientable 2-manifold can be studied in terms of their Jacobi set. The Jacobi set contains points in the domain where the gradients of the two functions are aligned. Both the Jacobi set itself as well as the segmentation of the domain it induces, have shown to be useful in various applications. In practice, unfortunately, functions often contain noise and discretization artifacts, causing their Jacobi set to become unmanageably large and complex. Although there exist techniques to simplify Jacobi sets, they are unsuitable for most applications as they lack fine-grained control over the process, and heavily restrict the type of simplifications possible. In this paper, we introduce a new framework that generalizes critical point cancellations in scalar functions to Jacobi set in two dimensions. We present a new interpretation of Jacobi set simplification based on the perspective of domain segmentation. Generalizing the cancellation of critical points from scalar functions to Jacobi sets, we focus on simplifications that can be realized by smooth approximations of the corresponding functions, and show how these cancellations imply simultaneous simplification of contiguous subsets of the Jacobi set. Using these extended cancellations as atomic operations, we introduce an algorithm to successively cancel subsets of the Jacobi set with minimal modifications to some user-defined metric. We show that for simply connected domains, our algorithm reduces a given Jacobi set to its minimal configuration, that is, one with no birth–death points (a birth–death point is a specific type of singularity within the Jacobi set where the level sets of the two functions and the Jacobi set have a common normal direction).

  19. Local, smooth, and consistent Jacobi set simplification

    DOE PAGESBeta

    Bhatia, Harsh; Wang, Bei; Norgard, Gregory; Pascucci, Valerio; Bremer, Peer -Timo

    2014-10-31

    The relation between two Morse functions defined on a smooth, compact, and orientable 2-manifold can be studied in terms of their Jacobi set. The Jacobi set contains points in the domain where the gradients of the two functions are aligned. Both the Jacobi set itself as well as the segmentation of the domain it induces, have shown to be useful in various applications. In practice, unfortunately, functions often contain noise and discretization artifacts, causing their Jacobi set to become unmanageably large and complex. Although there exist techniques to simplify Jacobi sets, they are unsuitable for most applications as they lack fine-grained control over the process, and heavily restrict the type of simplifications possible. In this paper, we introduce a new framework that generalizes critical point cancellations in scalar functions to Jacobi set in two dimensions. We present a new interpretation of Jacobi set simplification based on the perspective of domain segmentation. Generalizing the cancellation of critical points from scalar functions to Jacobi sets, we focus on simplifications that can be realized by smooth approximations of the corresponding functions, and show how these cancellations imply simultaneous simplification of contiguous subsets of the Jacobi set. Using these extended cancellations as atomic operations, we introduce an algorithm to successively cancel subsets of the Jacobi set with minimal modifications to some user-defined metric. We show that for simply connected domains, our algorithm reduces a given Jacobi set to its minimal configuration, that is, one with no birth–death points (a birth–death point is a specific type of singularity within the Jacobi set where the level sets of the two functions and the Jacobi set have a common normal direction).

  20. Semioptimal practicable algorithmic cooling

    SciTech Connect

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-15

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  1. Domain Adaptation for Pedestrian Detection Based on Prediction Consistency

    PubMed Central

    Huan-ling, Tang; Zhi-yong, An

    2014-01-01

    Pedestrian detection is an active area of research in computer vision. It remains a quite challenging problem in many applications where many factors cause a mismatch between the source dataset used to train the pedestrian detector and the samples in the target scene. In this paper, we propose a novel domain adaptation model for merging plentiful source domain samples with scarce target domain samples to create a scene-specific pedestrian detector that performs as well as if rich target domain samples were present. Our approach combines a boosting-based learning algorithm with an entropy-based transferability measure, derived from the consistency of predictions with the source classifications, to selectively choose the source domain samples showing positive transferability to the target domain. Experimental results show that our approach can improve the detection rate, especially when labeled data in the target scene are insufficient. PMID:25013850
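
    The paper's transferability measure is only described qualitatively above; the sketch below shows one plausible entropy-based weight: a source sample whose source-classifier predictions are confident and consistent receives a weight near 1, while ambiguous samples are down-weighted. The averaging, the normalisation, and the way the weight would enter the boosting loop are assumptions.

    ```python
    import numpy as np

    def transferability_weight(source_probs):
        """Entropy-based transferability sketch.  `source_probs` has shape
        (n_source_classifiers, n_classes): class posteriors predicted for one
        source-domain sample.  Returns a weight in [0, 1]."""
        p = np.clip(np.asarray(source_probs).mean(axis=0), 1e-12, 1.0)
        entropy = -np.sum(p * np.log(p))
        return 1.0 - entropy / np.log(p.size)    # 1 = confident/consistent, 0 = maximally ambiguous

    # In the boosting loop, each source sample's weight would be scaled by this
    # factor so that only samples with positive transferability keep shaping the
    # scene-specific detector.
    ```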

  2. Consistent sets of spectrophotometric chlorophyll equations for acetone, methanol and ethanol solvents.

    PubMed

    Ritchie, Raymond J

    2006-07-01

    A set of equations for determining chlorophyll a (Chl a) and the accessory chlorophylls b, c2, c1 + c2, and the special case of Acaryochloris marina, which uses Chl d as its primary photosynthetic pigment and also has Chl a, have been developed for 90% acetone, methanol and ethanol solvents. These equations for different solvents give chlorophyll assays that are consistent with each other. No algorithms for Chl c compounds (c2, c1 + c2) in the presence of Chl a have previously been published for methanol or ethanol. The limits of detection (and inherent error, ±95% confidence limit) for chlorophylls in all organisms tested were generally less than 0.1 microg/ml. The Chl a and b algorithms for green algae and land plants have very small inherent errors (< 0.01 microg/ml). The Chl a and d algorithms for Acaryochloris marina are consistent with each other, giving estimates of Chl d/a ratios that agree with previously published estimates obtained using HPLC and a rarely used algorithm originally published for diethyl ether in 1955. The statistical error structure of chlorophyll algorithms is discussed. The relative error of chlorophyll measurements increases hyperbolically in diluted chlorophyll extracts because the inherent errors of the chlorophyll algorithms are constants independent of the magnitude of the absorbance readings. For safety reasons, efficient extraction of chlorophylls, and the convenience of being able to use polystyrene cuvettes, the algorithms for ethanol are recommended for routine assays of chlorophylls. The methanol algorithms would be convenient for assays associated with HPLC work. PMID:16763878

  3. Enhancing artificial bee colony algorithm with self-adaptive searching strategy and artificial immune network operators for global optimization.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    The artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as the genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, the ABC still has some limitations. For example, ABC can easily get trapped in a local optimum when handling functions that have a narrow curving valley, a highly eccentric ellipse, or complex multimodal structure. We therefore propose an enhanced ABC algorithm called EABC, which introduces a self-adaptive searching strategy and artificial immune network operators to improve exploitation and exploration. The simulation results, obtained on a suite of unimodal and multimodal benchmark functions, illustrate that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments. PMID:24772023

  4. Enhancing Artificial Bee Colony Algorithm with Self-Adaptive Searching Strategy and Artificial Immune Network Operators for Global Optimization

    PubMed Central

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    The artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as the genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, ABC still has some limitations. For example, it can easily become trapped in a local optimum when handling functions that have a narrow curving valley, a high-eccentricity ellipse, or complex multimodal landscapes. We therefore propose an enhanced ABC algorithm, called EABC, that introduces a self-adaptive searching strategy and artificial immune network operators to improve exploitation and exploration. Simulation results on a suite of unimodal and multimodal benchmark functions show that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments. PMID:24772023

  5. Truss optimization on shape and sizing with frequency constraints based on orthogonal multi-gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Khatibinia, Mohsen; Sadegh Naseralavi, Seyed

    2014-12-01

    Structural optimization on shape and sizing with frequency constraints is well known as a highly nonlinear dynamic optimization problem with several locally optimal solutions. Hence, efficient optimization algorithms should be utilized to solve this problem. In this study, the orthogonal multi-gravitational search algorithm (OMGSA), a meta-heuristic algorithm, is introduced to solve truss optimization on shape and sizing with frequency constraints. The OMGSA is a hybrid approach combining a multi-gravitational search algorithm (multi-GSA) with an orthogonal crossover (OC). In multi-GSA, the population is split into several sub-populations, and each sub-population is then independently evaluated by an improved gravitational search algorithm (IGSA). Furthermore, the OC is used in the proposed OMGSA in order to find and exploit the global solution in the search space. The capability of OMGSA is demonstrated through six benchmark examples. Numerical results show that the proposed OMGSA outperforms the other optimization techniques.
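
    As a point of reference, the gravitational search algorithm that multi-GSA builds on moves each candidate solution under mass-weighted attractive forces. In the standard formulation of Rashedi et al. (2009), which is assumed here since the abstract does not restate it, the per-dimension update rules are

      \[
        F_{ij}^{d}(t) = G(t)\,\frac{M_i(t)\,M_j(t)}{R_{ij}(t)+\varepsilon}\,\bigl(x_j^{d}(t)-x_i^{d}(t)\bigr),
        \qquad
        a_i^{d}(t) = \frac{1}{M_i(t)}\sum_{j\in K_{\mathrm{best}},\, j\neq i}\operatorname{rand}_j\,F_{ij}^{d}(t),
      \]
      \[
        v_i^{d}(t+1) = \operatorname{rand}_i\,v_i^{d}(t) + a_i^{d}(t),
        \qquad
        x_i^{d}(t+1) = x_i^{d}(t) + v_i^{d}(t+1),
      \]

    where the masses M_i are derived from the fitness values, G(t) is a decreasing gravitational constant, and R_ij is the distance between agents i and j. In the OMGSA these updates run within each sub-population, with orthogonal crossover used to recombine candidate solutions.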

  6. Multi-objective Job Shop Rescheduling with Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Hao, Xinchang; Gen, Mitsuo

    In current manufacturing systems, production processes and management are subject to many unexpected events and new requirements that emerge constantly. This dynamic environment implies that operation rescheduling is usually indispensable. A wide variety of procedures and heuristics has been developed to improve the quality of rescheduling. However, most proposed approaches are derived under simplified assumptions. As a consequence, these approaches might be inconsistent with the actual requirements in a real production environment, i.e., they are often unsuitable and inflexible for responding efficiently to frequent changes. In this paper, a multi-objective job shop rescheduling problem (moJSRP) is formulated to improve the practical application of rescheduling. To solve the moJSRP model, an evolutionary algorithm is designed, in which a random key-based representation and interactive adaptive-weight (i-awEA) fitness assignment are embedded. To verify its effectiveness, the proposed algorithm has been compared with other approaches and benchmarks on the robustness of moJSRP optimization. The comparison results show that iAWGA-A is better than the weighted-fitness method in terms of effectiveness and stability. Similarly, iAWGA-A also outperforms other well-established approaches such as the non-dominated sorting genetic algorithm (NSGA-II) and the strength Pareto evolutionary algorithm 2 (SPEA2).
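
    The random key-based representation mentioned above is commonly decoded by sorting a vector of continuous keys and mapping positions back to job indices; the short sketch below shows one conventional decoding (an assumption for illustration, since the paper's exact encoding is not reproduced here).

      # Decode a random-key chromosome into a job-repetition permutation.
      import numpy as np

      def decode_random_keys(keys, n_jobs, n_machines):
          """keys has length n_jobs * n_machines. Sorting the keys gives the
          operation priority order; position p corresponds to job p % n_jobs,
          so every job appears exactly n_machines times in the sequence."""
          order = np.argsort(keys)
          return [int(p % n_jobs) for p in order]

      rng = np.random.default_rng(1)
      keys = rng.random(3 * 2)                       # 3 jobs x 2 machines
      print(decode_random_keys(keys, n_jobs=3, n_machines=2))

    A schedule is then built by dispatching operations in this order, and crossover and mutation act directly on the continuous keys, which keeps every offspring decodable.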

  7. Optimal classification of standoff bioaerosol measurements using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Nyhavn, Ragnhild; Moen, Hans J. F.; Farsund, Øystein; Rustad, Gunnar

    2011-05-01

    Early warning systems based on standoff detection of biological aerosols require real-time signal processing of a large quantity of high-dimensional data, challenging the system's efficiency in terms of both computational complexity and classification accuracy. Hence, optimal feature selection is essential in forming a stable and efficient classification system. This involves finding optimal signal processing parameters, characteristic spectral frequencies, and other data transformations in a large variable space, underscoring the need for an efficient and smart search algorithm. Evolutionary algorithms are population-based optimization methods inspired by Darwinian evolutionary theory. These methods apply selection, mutation, and recombination to a population of competing solutions and optimize this set by evolving the population over successive generations. We have employed genetic algorithms in the search for optimal feature selection and signal processing parameters for classification of biological agents. The experimental data were acquired with a spectrally resolved lidar based on ultraviolet laser-induced fluorescence, and included several releases of five common simulants. The genetic algorithm outperforms benchmark methods involving analytic, sequential, and random approaches, such as support vector machines, Fisher's linear discriminant, and principal component analysis, with significantly improved classification accuracy compared to the best classical method.

  8. The Kernel Adaptive Autoregressive-Moving-Average Algorithm.

    PubMed

    Li, Kan; Príncipe, José C

    2016-02-01

    In this paper, we present a novel kernel adaptive recurrent filtering algorithm based on the autoregressive-moving-average (ARMA) model, which is trained with recurrent stochastic gradient descent in the reproducing kernel Hilbert spaces. This kernelized recurrent system, the kernel adaptive ARMA (KAARMA) algorithm, brings together the theories of adaptive signal processing and recurrent neural networks (RNNs), extending the current theory of kernel adaptive filtering (KAF) using the representer theorem to include feedback. Compared with classical feedforward KAF methods, the KAARMA algorithm provides general nonlinear solutions for complex dynamical systems in a state-space representation, with a deferred teacher signal, by propagating forward the hidden states. We demonstrate its capabilities to provide exact solutions with compact structures by solving a set of benchmark nondeterministic polynomial-complete problems involving grammatical inference. Simulation results show that the KAARMA algorithm outperforms equivalent input-space recurrent architectures using first- and second-order RNNs, demonstrating its potential as an effective learning solution for the identification and synthesis of deterministic finite automata. PMID:25935049

  9. GPS-Free Localization Algorithm for Wireless Sensor Networks

    PubMed Central

    Wang, Lei; Xu, Qingzheng

    2010-01-01

    Localization is one of the most fundamental problems in wireless sensor networks, since the locations of the sensor nodes are critical to both network operations and most application-level tasks. A GPS-free localization scheme for wireless sensor networks is presented in this paper. First, we develop a standardized clustering-based approach for the local coordinate system formation, wherein a multiplication factor is introduced to regulate the number of master and slave nodes and the degree of connectivity among master nodes. Second, using homogeneous coordinates, we derive a transformation matrix between two Cartesian coordinate systems to efficiently merge them into a global coordinate system and effectively overcome the flip ambiguity problem. The algorithm operates asynchronously without a centralized controller and does not require that the locations of the sensors be known a priori. A set of parameter-setting guidelines for the proposed algorithm is derived based on a probability model, and the energy requirements are also investigated. A simulation analysis on a specific numerical example is conducted to validate the mathematical analytical results. We also compare the performance of the proposed algorithm under a variety of multiplication factor, node density, and node communication radius scenarios. Experiments show that our algorithm outperforms existing mechanisms in terms of accuracy and convergence time. PMID:22219694
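
    The coordinate-system merging step can be illustrated with a generic Procrustes-style estimate expressed as a homogeneous matrix; the sketch below is not the paper's derivation (which also addresses anchor selection and flip disambiguation within the clustering scheme), only the basic 2-D rigid alignment from nodes known in both frames.

      # Estimate a 2-D rigid transform between two local coordinate systems
      # from nodes whose coordinates are known in both, returned as a 3x3
      # homogeneous matrix mapping [x, y, 1]^T from the local to the global frame.
      import numpy as np

      def homogeneous_transform(p_local, p_global):
          mu_l, mu_g = p_local.mean(axis=0), p_global.mean(axis=0)
          H = (p_local - mu_l).T @ (p_global - mu_g)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:       # force a proper rotation; a mirrored
              Vt[-1] *= -1               # local frame (flip ambiguity) would
              R = Vt.T @ U.T             # instead need an explicit reflection
          t = mu_g - R @ mu_l
          T = np.eye(3)
          T[:2, :2], T[:2, 2] = R, t
          return T

      local = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
      theta = np.pi / 6
      Rot = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
      glob = local @ Rot.T + np.array([5.0, -3.0])
      print(homogeneous_transform(local, glob))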

  10. GPS-free localization algorithm for wireless sensor networks.

    PubMed

    Wang, Lei; Xu, Qingzheng

    2010-01-01

    Localization is one of the most fundamental problems in wireless sensor networks, since the locations of the sensor nodes are critical to both network operations and most application-level tasks. A GPS-free localization scheme for wireless sensor networks is presented in this paper. First, we develop a standardized clustering-based approach for the local coordinate system formation, wherein a multiplication factor is introduced to regulate the number of master and slave nodes and the degree of connectivity among master nodes. Second, using homogeneous coordinates, we derive a transformation matrix between two Cartesian coordinate systems to efficiently merge them into a global coordinate system and effectively overcome the flip ambiguity problem. The algorithm operates asynchronously without a centralized controller and does not require that the locations of the sensors be known a priori. A set of parameter-setting guidelines for the proposed algorithm is derived based on a probability model, and the energy requirements are also investigated. A simulation analysis on a specific numerical example is conducted to validate the mathematical analytical results. We also compare the performance of the proposed algorithm under a variety of multiplication factor, node density, and node communication radius scenarios. Experiments show that our algorithm outperforms existing mechanisms in terms of accuracy and convergence time. PMID:22219694

  11. Effects of deformable registration algorithms on the creation of statistical maps for preoperative targeting in deep brain stimulation procedures

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; D'Haese, Pierre-Francois; Dawant, Benoit M.

    2014-03-01

    Deep brain stimulation, which is used to treat various neurological disorders, involves implanting a permanent electrode into precise targets deep in the brain. Accurate pre-operative localization of the targets on pre-operative MRI sequences is challenging, as these are typically located in homogeneous regions with poor contrast. Population-based statistical atlases can assist with this process. Such atlases are created by acquiring the locations of efficacious regions from numerous subjects and projecting them onto a common reference image volume using some normalization method. In previous work, we presented results concluding that non-rigid registration provided the best results for such normalization. However, this process could be biased by the choice of the reference image and/or registration approach. In this paper, we qualitatively and quantitatively compare the performance of six recognized deformable registration methods at normalizing such data in poorly contrasted regions onto three different reference volumes using a unique set of data from 100 patients. We study various metrics designed to measure the centroid, spread, and shape of the normalized data. This study leads to a total of 1800 deformable registrations, and the results show that statistical atlases constructed using different deformable registration methods share comparable centroids and spreads, with marginal differences in their shape. Among the six methods studied, Diffeomorphic Demons produces the largest spreads and centroids that are, in general, the furthest apart from the others. Among the three atlases, one atlas consistently outperforms the other two with smaller spreads for each algorithm. However, none of the differences in the spreads were found to be statistically significant, across different algorithms or across different atlases.

  12. Self-Consistent Magnetosphere-Ionosphere Coupling

    NASA Technical Reports Server (NTRS)

    Six, N. Frank (Technical Monitor); Khazanov, G. V.; Newman, T. S.; Liemohn, M. W.; Fok, M. C.; Spiro, R. W.

    2002-01-01

    A self-consistent ring current (RC) model has been developed that couples electron and ion magnetospheric dynamics with the calculation of the electric field. Two new features were taken into account in order to close the self-consistent magnetosphere-ionosphere coupling loop. First, in addition to the RC ions, we have solved an electron kinetic equation in our model. Second, using the relation of Galand and Richmond, we have calculated the height integrated ionospheric conductances as a function of the precipitated high energy magnetospheric electrons and ions that are produced by our model. To validate the results of our model we simulate the magnetic storm of May 2, 1986, a storm that has been comprehensively studied by Fok et al., and have compared our results with different theoretical approaches. The self-consistent inclusion of the hot electrons and their effect on the conductance results in deeper penetration of the magnetospheric electric field. In addition, a slight westward rotation of the potential pattern (compared to previous self-consistent results) is evident in the inner magnetosphere. These effects change the hot plasma distribution, especially by allowing increased access of plasma sheet ions and electrons to low L shells.

  13. Developing consistent time series landsat data products

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Landsat series of satellites has provided a continuous earth observation data record since the early 1970s. There are increasing demands for a consistent time series of Landsat data products. In this presentation, I will summarize the work supported by the USGS Landsat Science Team project from 20...

  14. 36 CFR 241.22 - Consistency determinations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... FISH AND WILDLIFE Conservation of Fish, Wildlife, and Their Habitat, Chugach National Forest, Alaska... conservation of fish, wildlife, and their habitat. A use or activity may be determined to be consistent if it will not materially interfere with or detract from the conservation of fish, wildlife and their...

  15. 36 CFR 241.22 - Consistency determinations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... FISH AND WILDLIFE Conservation of Fish, Wildlife, and Their Habitat, Chugach National Forest, Alaska... conservation of fish, wildlife, and their habitat. A use or activity may be determined to be consistent if it will not materially interfere with or detract from the conservation of fish, wildlife and their...

  16. 36 CFR 241.22 - Consistency determinations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... FISH AND WILDLIFE Conservation of Fish, Wildlife, and Their Habitat, Chugach National Forest, Alaska... conservation of fish, wildlife, and their habitat. A use or activity may be determined to be consistent if it will not materially interfere with or detract from the conservation of fish, wildlife and their...

  17. Effecting Consistency across Curriculum: A Case Study

    ERIC Educational Resources Information Center

    Devasagayam, P. Raj; Mahaffey, Thomas R.

    2008-01-01

    Continuous quality improvement is the clarion call across business schools and is driving the emphasis on assessing the attainment of learning outcomes. An issue that deserves special attention in the assurance of learning outcomes is consistency across courses and, more specifically, across multiple sections of the same course taught by…

  18. Grading for Speed, Consistency, and Accuracy.

    ERIC Educational Resources Information Center

    Kryder, LeeAnne G.

    2003-01-01

    Explains the rubrics the author has developed to assure some degree of consistency in grading among instructors and teaching assistants in various sections of the same writing course. Finds these rubrics particularly useful for evaluating individual student performance in group projects. (SG)

  19. 24 CFR 91.510 - Consistency determinations.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Consistency determinations. 91.510 Section 91.510 Housing and Urban Development Office of the Secretary, Department of Housing and Urban... HOPWA grant is a city that is the most populous unit of general local government in an EMSA, it...

  20. Energy confinement and profile consistency in TFTR

    SciTech Connect

    Goldston, R.J.; Arunasalan, V.; Bell, M.G.; Bitter, M.; Blanchard, W.R.; Bretz, N.L.; Budny, R.; bush, C.E.; Callen, J.D.; Cohen, S.A.

    1987-04-01

    A new regime of enhanced energy confinement has been observed on TFTR with neutral beam injection at low plasma current. It is characterized by extremely peaked electron density profiles and broad electron temperature profiles. The electron temperature profile shapes violate the concept of profile consistency in which T_e(0)/⟨T_e⟩_v is assumed to be a tightly constrained function of q_a, but they are in good agreement with a form of profile consistency based on examining the temperature profile shape outside the plasma core. The enhanced confinement regime is only obtained with a highly degassed limiter; in discharges with gas-filled limiters convective losses are calculated to dominate the edge electron power balance. Consistent with the constraint of profile consistency, global confinement is degraded in these cases. The best heating results in the enhanced confinement regime are obtained with nearly balanced co- and counter-injection. Much of the difference between balanced and co-only injection can be explained on the basis of classically predicted effects associated with plasma rotation.

  1. Consistency criteria for generalized Cuddeford systems

    NASA Astrophysics Data System (ADS)

    Ciotti, Luca; Morganti, Lucia

    2010-01-01

    General criteria to check the positivity of the distribution function (phase-space consistency) of stellar systems of assigned density and anisotropy profile are useful starting points in Jeans-based modelling. Here, we substantially extend previous results, and present the inversion formula and the analytical necessary and sufficient conditions for phase-space consistency of the family of multicomponent Cuddeford spherical systems: the distribution function of each density component of these systems is defined as the sum of an arbitrary number of Cuddeford distribution functions with arbitrary values of the anisotropy radius, but identical angular momentum exponent. The radial trend of anisotropy that can be realized by these models is therefore very general. As a surprising byproduct of our study, we found that the `central cusp-anisotropy theorem' (a necessary condition for consistency relating the values of the central density slope and of the anisotropy parameter) holds not only at the centre but also at all radii in consistent multicomponent generalized Cuddeford systems. This last result suggests that the so-called mass-anisotropy degeneracy could be less severe than what is sometimes feared.
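
    In the notation usually adopted for this result (an assumption here, since the abstract does not write it out), with γ(r) the logarithmic density slope and β(r) the anisotropy parameter, the inequality referred to as the 'central cusp-anisotropy theorem' and its radial extension read

      \[
        \gamma(r) \;\equiv\; -\frac{d\ln\rho}{d\ln r} \;\ge\; 2\,\beta(r),
        \qquad
        \beta(r) \;=\; 1 - \frac{\sigma_t^{2}(r)}{2\,\sigma_r^{2}(r)},
      \]

    the original central theorem being the r → 0 limit of this relation.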

  2. Consistency of Toddler Engagement across Two Settings

    ERIC Educational Resources Information Center

    Aguiar, Cecilia; McWilliam, R. A.

    2013-01-01

    This study documented the consistency of child engagement across two settings, toddler child care classrooms and mother-child dyadic play. One hundred twelve children, aged 14-36 months (M = 25.17, SD = 6.06), randomly selected from 30 toddler child care classrooms from the district of Porto, Portugal, participated. Levels of engagement were…

  3. Consistency of Students' Pace in Online Learning

    ERIC Educational Resources Information Center

    Hershkovitz, Arnon; Nachmias, Rafi

    2009-01-01

    The purpose of this study is to investigate the consistency of students' behavior regarding their pace of actions over sessions within an online course. Pace in a session is defined as the number of logged actions divided by session length (in minutes). Log files of 6,112 students were collected, and datasets were constructed for examining pace…

  4. Environmental Decision Support with Consistent Metrics

    EPA Science Inventory

    One of the most effective ways to pursue environmental progress is through the use of consistent metrics within a decision making framework. The US Environmental Protection Agency’s Sustainable Technology Division has developed TRACI, the Tool for the Reduction and Assessment of...

  5. Consistent Visual Analyses of Intrasubject Data

    ERIC Educational Resources Information Center

    Kahng, SungWoo; Chung, Kyong-Mee; Gutshall, Katharine; Pitts, Steven C.; Kao, Joyce; Girolami, Kelli

    2010-01-01

    Visual inspection of single-case data is the primary method of interpretation of the effects of an independent variable on a dependent variable in applied behavior analysis. The purpose of the current study was to replicate and extend the results of DeProspero and Cohen (1979) by reexamining the consistency of visual analysis across raters. We…

  6. Taking Another Look: Sensuous, Consistent Form.

    ERIC Educational Resources Information Center

    Townley, Mary Ross

    1983-01-01

    There is a natural progression from making single objects to creating sculpture. By modeling the forms of objects like funnels and light bulbs, students become aware of the quality of curves and the edges of angles. Sculptural form in architecture can be understood as consistency in the forms. (CS)

  7. Comparative Study of Two Automatic Registration Algorithms

    NASA Astrophysics Data System (ADS)

    Grant, D.; Bethel, J.; Crawford, M.

    2013-10-01

    The Iterative Closest Point (ICP) algorithm is prevalent for the automatic fine registration of overlapping pairs of terrestrial laser scanning (TLS) data. This method, along with its vast number of variants, obtains the least squares parameters that are necessary to align the TLS data by minimizing some distance metric between the scans. The ICP algorithm uses a "model-data" concept in which the scans receive differential treatment in the registration process depending on whether they were assigned to be the "model" or the "data". For each of the "data" points, corresponding points from the "model" are sought. Another concept of "symmetric correspondence" was proposed in the Point-to-Plane (P2P) algorithm, where both scans are treated equally in the registration process. The P2P method establishes correspondences on both scans and minimizes the point-to-plane distances between the scans while simultaneously considering the stochastic properties of both scans. This paper studies both the ICP and P2P algorithms in terms of their consistency in registration parameters for pairs of TLS data. The question investigated in this paper is: if scan A is registered to scan B, will the parameters be the same as when scan B is registered to scan A? Experiments were conducted with eight pairs of real TLS data, which were registered by the two algorithms in the forward (scan A to scan B) and backward (scan B to scan A) modes, and the results were compared. The P2P algorithm was found to be more consistent than the ICP algorithm. The differences in registration accuracy between the forward and backward modes were negligible when using the P2P algorithm (mean difference of 0.03 mm), whereas the ICP had a mean difference of 4.26 mm. Each scan was also transformed by the forward and backward parameters of the two algorithms and the misclosure computed. The mean misclosure for the P2P algorithm was 0.80 mm while that for the ICP algorithm was 5.39 mm. The conclusion from this study is
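
    For orientation, the point-to-plane objective that this family of algorithms minimizes can be linearized for small rotations and solved as a least-squares problem; the sketch below is that generic Gauss-Newton step, not the paper's P2P formulation, which additionally weights residuals by the stochastic properties of both scans.

      # One small-angle Gauss-Newton step of point-to-plane alignment:
      # minimize sum_i ( n_i . (R p_i + t - q_i) )^2 with R ~ I + [theta]_x.
      import numpy as np

      def point_to_plane_step(src, dst, normals):
          """src, dst: (N, 3) corresponding points; normals: (N, 3) at dst."""
          A = np.hstack([np.cross(src, normals), normals])      # N x 6
          b = np.einsum('ij,ij->i', normals, dst - src)          # N
          x, *_ = np.linalg.lstsq(A, b, rcond=None)
          return x[:3], x[3:]      # rotation vector theta, translation t

    A full registration repeats this step, re-establishing correspondences after each update, which is where the ICP-style "model-data" asymmetry or the P2P symmetric treatment of the two scans enters.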

  8. An Effective Hybrid Cuckoo Search Algorithm with Improved Shuffled Frog Leaping Algorithm for 0-1 Knapsack Problems

    PubMed Central

    Wang, Gai-Ge; Feng, Qingjiang; Zhao, Xiang-Jun

    2014-01-01

    An effective hybrid cuckoo search algorithm (CS) with an improved shuffled frog-leaping algorithm (ISFLA) is put forward for solving the 0-1 knapsack problem. First, within the framework of the SFLA, an improved frog-leap operator is designed that combines the effect of global optimal information on frog leaping, information exchange between frog individuals, and genetic mutation with a small probability. Subsequently, in order to improve the convergence speed and enhance the exploitation ability, a novel CS model is proposed that considers the specific advantages of Lévy flights and the frog-leap operator. Furthermore, the greedy transform method is used to repair infeasible solutions and optimize feasible solutions. Finally, numerical simulations are carried out on six different types of 0-1 knapsack instances, and the comparative results show the effectiveness of the proposed algorithm and its ability to achieve good quality solutions, outperforming the binary cuckoo search, the binary differential evolution, and the genetic algorithm. PMID:25404940
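
    The greedy transform (repair) step is typically implemented by ranking items by profit-to-weight ratio, dropping low-ratio items from infeasible solutions and then refilling remaining capacity greedily; the sketch below follows that common pattern and is not necessarily identical to the operator used in the paper.

      # Greedy repair / improvement of a binary knapsack solution.
      def greedy_repair(x, profits, weights, capacity):
          x = list(x)
          order = sorted(range(len(x)),
                         key=lambda i: profits[i] / weights[i], reverse=True)
          load = sum(w for w, xi in zip(weights, x) if xi)
          for i in reversed(order):              # drop phase: restore feasibility
              if load <= capacity:
                  break
              if x[i]:
                  x[i], load = 0, load - weights[i]
          for i in order:                        # fill phase: use leftover capacity
              if not x[i] and load + weights[i] <= capacity:
                  x[i], load = 1, load + weights[i]
          return x

      print(greedy_repair([1, 1, 1, 0], profits=[10, 7, 4, 3],
                          weights=[5, 4, 3, 2], capacity=8))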

  9. An effective hybrid cuckoo search algorithm with improved shuffled frog leaping algorithm for 0-1 knapsack problems.

    PubMed

    Feng, Yanhong; Wang, Gai-Ge; Feng, Qingjiang; Zhao, Xiang-Jun

    2014-01-01

    An effective hybrid cuckoo search algorithm (CS) with an improved shuffled frog-leaping algorithm (ISFLA) is put forward for solving the 0-1 knapsack problem. First, within the framework of the SFLA, an improved frog-leap operator is designed that combines the effect of global optimal information on frog leaping, information exchange between frog individuals, and genetic mutation with a small probability. Subsequently, in order to improve the convergence speed and enhance the exploitation ability, a novel CS model is proposed that considers the specific advantages of Lévy flights and the frog-leap operator. Furthermore, the greedy transform method is used to repair infeasible solutions and optimize feasible solutions. Finally, numerical simulations are carried out on six different types of 0-1 knapsack instances, and the comparative results show the effectiveness of the proposed algorithm and its ability to achieve good quality solutions, outperforming the binary cuckoo search, the binary differential evolution, and the genetic algorithm. PMID:25404940

  10. Bayesian fusion of algorithms for the robust estimation of respiratory rate from the photoplethysmogram.

    PubMed

    Zhu, Tingting; Pimentel, Marco A F; Clifford, Gari D; Clifton, David A

    2015-08-01

    Respiratory rate (RR) is a key vital sign that is monitored to assess the health of patients. With the increasing availability of wearable devices, it is important that RR is extracted in a robust and noninvasive manner from the photoplethysmogram (PPG) acquired from pulse oximeters and similar devices. However, existing methods of noninvasive RR estimation suffer from a lack of robustness, with the result that they are not used in clinical practice. We propose a Bayesian approach to fusing the outputs of many RR estimation algorithms to improve the overall robustness of the resulting estimates. Our method estimates the accuracy of each algorithm and jointly infers the fused RR estimate in an unsupervised manner, with the aim of producing a fused estimate that is more accurate than any of the algorithms taken individually. This approach is novel; the literature has so far concentrated on attempting to produce single algorithms for RR estimation, without resulting in systems that have penetrated clinical practice. A publicly available dataset, Capnobase, was used to validate the performance of our proposed model. Our proposed methodology was compared to the best-performing individual algorithm from the literature, as well as to the results of common fusion methodologies such as averaging, median, and maximum likelihood (ML). Our proposed methodology resulted in a mean absolute error (MAE) of 1.98 breaths per minute (bpm) and outperformed the other fusion strategies (mean fusion: 2.95 bpm; median fusion: 2.33 bpm; ML: 2.30 bpm). It also outperformed the best single algorithm (2.39 bpm) and the benchmark algorithm proposed for use with Capnobase (2.22 bpm). We conclude that the proposed fusion methodology can be used to combine RR estimates from multiple sources derived from the PPG to infer a reliable and robust estimate of the respiratory rate in an unsupervised manner. PMID:26737693
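
    For orientation, the baseline fusion strategies against which the Bayesian model is compared are only a few lines each; the sketch below shows mean and median fusion of per-window RR estimates (the proposed unsupervised Bayesian fusion itself, which also infers each algorithm's accuracy, is not reproduced here).

      # Simple fusion baselines for per-window respiratory-rate estimates (bpm).
      import numpy as np

      def fuse_estimates(estimates, method="median"):
          """estimates: (n_windows, n_algorithms) array of RR estimates."""
          estimates = np.asarray(estimates, dtype=float)
          if method == "mean":
              return estimates.mean(axis=1)
          if method == "median":
              return np.median(estimates, axis=1)
          raise ValueError(f"unknown fusion method: {method}")

      window_estimates = np.array([[14.8, 15.4, 21.0],   # one row per window
                                   [16.1, 15.9, 16.4]])
      print(fuse_estimates(window_estimates, "median"))

    The toy numbers illustrate why median fusion tends to beat mean fusion: a single outlying estimate (21.0 bpm) shifts the mean but barely moves the median, while a model that learns per-algorithm accuracy can down-weight the unreliable source altogether.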

  11. Optimizing Algorithm Choice for Metaproteomics: Comparing X!Tandem and Proteome Discoverer for Soil Proteomes

    NASA Astrophysics Data System (ADS)

    Diaz, K. S.; Kim, E. H.; Jones, R. M.; de Leon, K. C.; Woodcroft, B. J.; Tyson, G. W.; Rich, V. I.

    2014-12-01

    The growing field of metaproteomics links microbial communities to their expressed functions by using mass spectrometry methods to characterize community proteins. Comparison of mass spectrometry protein search algorithms and their biases is crucial for maximizing the quality and amount of protein identifications in mass spectral data. Available algorithms employ different approaches when mapping mass spectra to peptides against a database. We compared mass spectra from four microbial proteomes derived from high-organic-content soils searched with two search algorithms: 1) Sequest HT as packaged within Proteome Discoverer (v.1.4) and 2) X!Tandem as packaged in TransProteomicPipeline (v.4.7.1). Searches used matched metagenomes, and results were filtered to allow identification of high-probability proteins. There was little overlap in proteins identified by both algorithms, on average just ~24% of the total. However, when adjusted for spectral abundance, the overlap improved to ~70%. Proteome Discoverer generally outperformed X!Tandem, identifying an average of 12.5% more proteins than X!Tandem, with X!Tandem identifying more proteins only in the first two proteomes. For spectrally adjusted results, the algorithms were similar, with X!Tandem marginally outperforming Proteome Discoverer by an average of ~4%. We then assessed differences in heat shock protein (HSP) identification by the two algorithms by BLASTing identified proteins against the Heat Shock Protein Information Resource, because HSP hits typically account for the majority of the signal in proteomes, due to extraction protocols. Total HSP identifications for each of the 4 proteomes were approximately 15%, 11%, 17%, and 19%, with ~14% for total HSPs with redundancies removed. Of the ~15% average of proteins from the 4 proteomes identified as HSPs, ~10% of proteins and spectra were identified by both algorithms. On average, Proteome Discoverer identified ~9% more HSPs than X!Tandem.

  12. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.; Subrahmanyam, P.A.

    1988-12-01

    The authors present a methodology for verifying the correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify the correctness of systolic algorithms using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.

  13. Precession-nutation procedures consistent with IAU 2006 resolutions

    NASA Astrophysics Data System (ADS)

    Wallace, P. T.; Capitaine, N.

    2006-12-01

    Context: The 2006 IAU General Assembly has adopted the P03 model of Capitaine et al. (2003a) recommended by the WG on precession and the ecliptic (Hilton et al. 2006) to replace the IAU 2000 model, which comprised the Lieske et al. (1977) model with adjusted rates. Practical implementations of this new "IAU 2006" model are therefore required, involving choices of procedures and algorithms. Aims: The purpose of this paper is to recommend IAU 2006 based precession-nutation computing procedures, suitable for different classes of application and achieving high standards of consistency. Methods: We discuss IAU 2006 based procedures and algorithms for generating the rotation matrices that transform celestial to terrestrial coordinates, taking into account frame bias (B), P03 precession (P), P03-adjusted IAU 2000A nutation (N) and Earth rotation. The NPB portion can refer either to the equinox or to the celestial intermediate origin (CIO), requiring either the Greenwich sidereal time (GST) or the Earth rotation angle (ERA) as the measure of Earth rotation. Where GST is used, it is derived from ERA and the equation of the origins (EO) rather than through an explicit formula as in the past, and the EO itself is derived from the CIO locator. Results: We provide precession-nutation procedures for two different classes of full-accuracy application, namely (i) the construction of algorithm collections such as the Standards Of Fundamental Astronomy (SOFA) library and (ii) IERS Conventions, and in addition some concise procedures for applications where the highest accuracy is not a requirement. The appendix contains a fully worked numerical example, to aid implementors and to illustrate the consistency of the two full-accuracy procedures which, for the test date, agree to better than 1 μas. Conclusions: The paper recommends, for case (i), procedures based on angles to represent the PB and N components and, for case (ii), procedures based on series for the CIP X,Y. The two
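
    For reference, the Earth rotation quantities mentioned above are related as follows (standard IAU 2000/2006 expressions, quoted here for convenience; the IERS Conventions remain the authoritative source for the coefficients):

      \[
        \mathrm{ERA}(T_u) \;=\; 2\pi\,\bigl(0.7790572732640 + 1.00273781191135448\,T_u\bigr),
        \qquad
        T_u = \mathrm{JD}(\mathrm{UT1}) - 2451545.0,
      \]
      \[
        \mathrm{GST} \;=\; \mathrm{ERA} - \mathrm{EO},
      \]

    where EO is the equation of the origins, itself obtained from the CIO locator as described in the text.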

  14. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
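
    As background, the single-channel Sudakov veto algorithm that these variants extend samples the next emission scale from an easily invertible overestimate g(t) ≥ f(t) of the true splitting density and accepts trial scales with probability f(t)/g(t); this reproduces the Sudakov-suppressed distribution of f without ever integrating f itself. A minimal Python sketch, with an illustrative choice of f and g, is:

      # Minimal single-channel Sudakov veto algorithm (illustrative only).
      import math, random

      def sudakov_veto(f, g, G, G_inv, t_start, t_cut, rng):
          """Next emission scale below t_start for density f, or None if the
          evolution falls below t_cut. g >= f must hold on [t_cut, t_start];
          G is a primitive of g and G_inv its inverse."""
          t = t_start
          while True:
              r = 1.0 - rng.random()                     # uniform in (0, 1]
              t = G_inv(G(t) + math.log(r))              # trial scale from g
              if t <= t_cut:
                  return None                            # no emission above cutoff
              if rng.random() < f(t) / g(t):             # veto step corrects g -> f
                  return t

      # Example: f(t) = 1/t with a constant overestimate g(t) = c on [t_cut, 1].
      t_cut = 1e-3
      c = 1.0 / t_cut
      rng = random.Random(42)
      print(sudakov_veto(lambda t: 1.0 / t, lambda t: c,
                         G=lambda t: c * t, G_inv=lambda y: y / c,
                         t_start=1.0, t_cut=t_cut, rng=rng))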

  15. The ideas behind self-consistent expansion

    NASA Astrophysics Data System (ADS)

    Schwartz, Moshe; Katzav, Eytan

    2008-04-01

    In recent years we have witnessed a growing interest in various non-equilibrium systems described in terms of stochastic nonlinear field theories. In some of those systems, like KPZ and related models, the interesting behavior is in the strong coupling regime, which is inaccessible by traditional perturbative treatments such as the dynamical renormalization group (DRG). A useful tool in the study of such systems is the self-consistent expansion (SCE), which might be said to generate its own 'small parameter'. The SCE has the advantage that its structure is just that of a regular expansion; the only difference is that the simple system around which the expansion is performed is adjustable. The purpose of this paper is to present the method in a simple and understandable way that hopefully will make it accessible to a wider public working on non-equilibrium statistical physics.

  16. Human Pose Estimation Using Consistent Max Covering.

    PubMed

    Jiang, Hao

    2011-09-01

    A novel consistent max-covering method is proposed for human pose estimation. We focus on problems in which a rough foreground estimation is available. Pose estimation is formulated as a jigsaw puzzle problem in which the body part tiles maximally cover the foreground region, match local image features, and satisfy body plan and color constraints. This method explicitly imposes a global shape constraint on the body part assembly. It anchors multiple body parts simultaneously and introduces hyperedges in the part relation graph, which is essential for detecting complex poses. Using multiple cues in pose estimation, our method is resistant to cluttered foregrounds. We propose an efficient linear method to solve the consistent max-covering problem. A two-stage relaxation finds the solution in polynomial time. Our experiments on a variety of images and videos show that the proposed method is more robust than previous locally constrained methods. PMID:21576747

  17. Consistent Numerical Expressions for Precession Formulae.

    NASA Astrophysics Data System (ADS)

    Soma, M.

    The precession formulae by Lieske et al. (1977) have been used since 1984 for calculating apparent positions and reducing astrometric observations of celestial objects. These formulae are based on the IAU (1976) Astronomical Constants, some of which deviate from their recently determined values. They are also derived using the secular variations of the ecliptic pole from Newcomb's theory, which is not consistent with the recent planetary theories. Accordingly, Simon et al. (1994) developed new precession formulae using the recently determined astronomical constants, based on the new planetary theory VSOP87. There are two differing definitions of the ecliptic: the ecliptic in the inertial sense and the ecliptic in the rotating sense (Standish 1981). The ecliptic given by the VSOP87 theory is that in the inertial sense, but the value for the obliquity Simon et al. used is the obliquity in the rotating sense. Their precession formulae therefore contain an inconsistency. This paper gives corrections for consistent precession formulae.

  18. Consistent Pauli reduction on group manifolds

    NASA Astrophysics Data System (ADS)

    Baguet, A.; Pope, C. N.; Samtleben, H.

    2016-01-01

    We prove an old conjecture by Duff, Nilsson, Pope and Warner asserting that the NS-NS sector of supergravity (and, more generally, the bosonic string) allows for a consistent Pauli reduction on any d-dimensional group manifold G, keeping the full set of gauge bosons of the G × G isometry group of the bi-invariant metric on G. The main tool of the construction is a particular generalised Scherk-Schwarz reduction ansatz in double field theory, which we explicitly construct in terms of the group's Killing vectors. Examples include the consistent reduction from ten dimensions on S3 × S3 and on similar product spaces. The construction is another example of globally geometric non-toroidal compactifications inducing non-geometric fluxes.

  19. Consistency relation for cosmic magnetic fields

    NASA Astrophysics Data System (ADS)

    Jain, Rajeev Kumar; Sloth, Martin S.

    2012-12-01

    If cosmic magnetic fields are indeed produced during inflation, they are likely to be correlated with the scalar metric perturbations that are responsible for the cosmic microwave background anisotropies and large scale structure. Within an archetypical model of inflationary magnetogenesis, we show that there exists a new simple consistency relation for the non-Gaussian cross correlation function of the scalar metric perturbation with two powers of the magnetic field in the squeezed limit where the momentum of the metric perturbation vanishes. We emphasize that such a consistency relation turns out to be extremely useful to test some recent calculations in the literature. Apart from primordial non-Gaussianity induced by the curvature perturbations, such a cross correlation might provide a new observational probe of inflation and can in principle reveal the primordial nature of cosmic magnetic fields.

  20. Self-Consistent Scattering and Transport Calculations

    NASA Astrophysics Data System (ADS)

    Hansen, S. B.; Grabowski, P. E.

    2015-11-01

    An average-atom model with ion correlations provides a compact and complete description of atomic-scale physics in dense, finite-temperature plasmas. The self-consistent ionic and electronic distributions from the model enable calculation of x-ray scattering signals and conductivities for material across a wide range of temperatures and densities. We propose a definition for the bound electronic states that ensures smooth behavior of these measurable properties under pressure ionization and compare the predictions of this model with those of less consistent models for Be, C, Al, and Fe. SNL is a multi-program laboratory managed and operated by Sandia Corp., a wholly owned subsidiary of Lockheed Martin Corp, for the U.S. DoE NNSA under contract DE-AC04-94AL85000. This work was supported by DoE OFES Early Career grant FWP-14-017426.

  1. Self-Consistent Random Phase Approximation

    NASA Astrophysics Data System (ADS)

    Rohr, Daniel; Hellgren, Maria; Gross, E. K. U.

    2012-02-01

    We report self-consistent Random Phase Approximation (RPA) calculations within the Density Functional Theory. The calculations are performed by the direct minimization scheme for the optimized effective potential method developed by Yang et al. [1]. We show results for the dissociation curve of H2^+, H2 and LiH with the RPA, where the exchange correlation kernel has been set to zero. For H2^+ and H2 we also show results for RPAX, where the exact exchange kernel has been included. The RPA, in general, over-correlates. At intermediate distances a maximum is obtained that lies above the exact energy. This is known from non-self-consistent calculations and is still present in the self-consistent results. The RPAX energies are higher than the RPA energies. At equilibrium distance they accurately reproduce the exact total energy. In the dissociation limit they improve upon RPA, but are still too low. For H2^+ the RPAX correlation energy is zero. Consequently, RPAX gives the exact dissociation curve. We also present the local potentials. They indicate that a peak at the bond midpoint builds up with increasing bond distance. This is expected for the exact KS potential. [1] W. Yang and Q. Wu, Phys. Rev. Lett. 89, 143002 (2002)

  2. Towards a consistent modeling framework across scales

    NASA Astrophysics Data System (ADS)

    Jagers, B.

    2013-12-01

    The morphodynamic evolution of river-delta-coastal systems may be studied in detail to predict local, short-term changes or at a more aggregated level to indicate the net large-scale, long-term effect. The whole spectrum of spatial and temporal scales needs to be considered for environmental impact studies. Usually this implies setting up a number of different models for different scales. Since the various models often use codes that have been independently developed by different researchers and include different formulations, it may be difficult to arrive at a consistent set of modeling results. This is one of the reasons why Deltares has taken on an effort to develop a consistent suite of model components that can be applied over a wide range of scales. The heart of this suite is formed by a flexible-mesh flow component that supports mixed 1D-2D-3D domains, an equally flexible transport component with an expandable library of water quality and ecological processes, and a library of sediment transport and morphology routines that can be linked directly to the flow component or used as part of the process library. We will present the latest developments with a focus on the status of the sediment transport and morphology component for running consistent 1D, 2D and 3D models.

  3. A permutation based simulated annealing algorithm to predict pseudoknotted RNA secondary structures.

    PubMed

    Tsang, Herbert H; Wiese, Kay C

    2015-01-01

    Pseudoknots are RNA tertiary structures which perform essential biological functions. This paper discusses SARNA-Predict-pk, an RNA pseudoknotted secondary structure prediction algorithm based on Simulated Annealing (SA). The research presented here extends previous work on SARNA-Predict and further examines the effect of the new algorithm on the prediction of RNA secondary structure with pseudoknots. An evaluation of the performance of SARNA-Predict-pk in terms of prediction accuracy is made via comparison with several state-of-the-art prediction algorithms using 20 individual known structures from seven RNA classes. We measured the sensitivity and specificity of nine prediction algorithms. Three of these are dynamic programming algorithms: Pseudoknot (pknotsRE), NUPACK, and pknotsRG-mfe. One uses a statistical clustering approach (Sfold), and the other five are heuristic algorithms: SARNA-Predict-pk, ILM, STAR, IPknot, and HotKnots. The results presented in this paper demonstrate that SARNA-Predict-pk can outperform other state-of-the-art algorithms in terms of prediction accuracy. This supports the use of the proposed method for pseudoknotted RNA secondary structure prediction of other known structures. PMID:26558299
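
    The core of any SA-based structure search is the Metropolis acceptance step with a cooling schedule; the generic sketch below shows that loop only, and does not reproduce the SARNA-Predict-pk energy model or its permutation-based neighbourhood, which are specific to the paper.

      # Generic simulated annealing loop with geometric cooling (illustrative).
      import math, random

      def anneal(initial, neighbour, energy, t0=10.0, cooling=0.95,
                 steps_per_t=100, t_min=1e-3, seed=0):
          rng = random.Random(seed)
          state, e = initial, energy(initial)
          best, best_e = state, e
          t = t0
          while t > t_min:
              for _ in range(steps_per_t):
                  cand = neighbour(state, rng)
                  e_cand = energy(cand)
                  # Always accept improvements; accept uphill moves with
                  # Boltzmann probability exp(-(e_cand - e) / t).
                  if e_cand <= e or rng.random() < math.exp((e - e_cand) / t):
                      state, e = cand, e_cand
                      if e < best_e:
                          best, best_e = state, e
              t *= cooling
          return best, best_e

      # Toy usage: minimize x^2 with a Gaussian move.
      print(anneal(5.0, lambda x, r: x + r.gauss(0, 0.5), lambda x: x * x))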

  4. On the use of harmony search algorithm in the training of wavelet neural networks

    NASA Astrophysics Data System (ADS)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2015-10-01

    Wavelet neural networks (WNNs) are a class of feedforward neural networks that have been used in a wide range of industrial and engineering applications to model the complex relationships between given inputs and outputs. The training of WNNs involves the configuration of the weight values between neurons. The backpropagation training algorithm, which is a gradient-descent method, can be used for this training purpose. Nonetheless, the solutions found by this algorithm often get trapped at local minima. In this paper, a harmony search-based algorithm is proposed for the training of WNNs. The training of WNNs can thus be formulated as a continuous optimization problem, where the objective is to maximize the overall classification accuracy. Each candidate solution proposed by the harmony search algorithm represents a specific WNN architecture. In order to speed up the training process, the solution space is divided into disjoint partitions during the random initialization step of the harmony search algorithm. The proposed training algorithm is tested on three benchmark problems from the UCI machine learning repository, as well as one real-life application, namely, the classification of electroencephalography signals in the task of epileptic seizure detection. The results obtained show that the proposed algorithm outperforms the traditional harmony search algorithm in terms of overall classification accuracy.

  5. C-element: a new clustering algorithm to find high quality functional modules in PPI networks.

    PubMed

    Ghasemi, Mahdieh; Rahgozar, Maseud; Bidkhori, Gholamreza; Masoudi-Nejad, Ali

    2013-01-01

    Graph clustering algorithms are widely used in the analysis of biological networks. Extracting functional modules in protein-protein interaction (PPI) networks is one such use. Most clustering algorithms whose focus is on finding functional modules try either to find clique-like subnetworks or to grow clusters starting from vertices with high degrees as seeds. These algorithms do not make any distinction between a biological network and any other network. In the current research, we present a new procedure to find functional modules in PPI networks. Our main idea is to model a biological concept and to use this concept for finding good functional modules in PPI networks. In order to evaluate the quality of the obtained clusters, we compared the results of our algorithm with those of some other widely used clustering algorithms on three high-throughput PPI networks from Saccharomyces cerevisiae, Homo sapiens and Caenorhabditis elegans, as well as on some tissue-specific networks. Gene Ontology (GO) analyses were used to compare the results of the different algorithms. Each algorithm's result was then compared with GO-term derived functional modules. We also analyzed the effect of using tissue-specific networks on the quality of the obtained clusters. The experimental results indicate that the new algorithm outperforms most of the others, and this improvement is more significant when tissue-specific networks are used. PMID:24039752

  6. SPEQTACLE: An automated generalized fuzzy C-means algorithm for tumor delineation in PET

    SciTech Connect

    Lapuyade-Lahorgue, Jérôme; Visvikis, Dimitris; Hatt, Mathieu; Pradier, Olivier; Cheze Le Rest, Catherine

    2015-10-15

    Purpose: Accurate tumor delineation in positron emission tomography (PET) images is crucial in oncology. Although recent methods achieved good results, there is still room for improvement regarding tumors with complex shapes, low signal-to-noise ratio, and high levels of uptake heterogeneity. Methods: The authors developed and evaluated an original clustering-based method called spatial positron emission quantification of tumor—Automatic Lp-norm estimation (SPEQTACLE), based on the fuzzy C-means (FCM) algorithm with a generalization exploiting a Hilbertian norm to more accurately account for the fuzzy and non-Gaussian distributions of PET images. An automatic and reproducible estimation scheme of the norm on an image-by-image basis was developed. Robustness was assessed by studying the consistency of results obtained on multiple acquisitions of the NEMA phantom on three different scanners with varying acquisition parameters. Accuracy was evaluated using classification errors (CEs) on simulated and clinical images. SPEQTACLE was compared to another FCM implementation, fuzzy local information C-means (FLICM) and fuzzy locally adaptive Bayesian (FLAB). Results: SPEQTACLE demonstrated a level of robustness similar to FLAB (variability of 14% ± 9% vs 14% ± 7%, p = 0.15) and higher than FLICM (45% ± 18%, p < 0.0001), and improved accuracy with lower CE (14% ± 11%) over both FLICM (29% ± 29%) and FLAB (22% ± 20%) on simulated images. Improvement was significant for the more challenging cases with CE of 17% ± 11% for SPEQTACLE vs 28% ± 22% for FLAB (p = 0.009) and 40% ± 35% for FLICM (p < 0.0001). For the clinical cases, SPEQTACLE outperformed FLAB and FLICM (15% ± 6% vs 37% ± 14% and 30% ± 17%, p < 0.004). Conclusions: SPEQTACLE benefitted from the fully automatic estimation of the norm on a case-by-case basis. This promising approach will be extended to multimodal images and multiclass estimation in future developments.
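
    For context, the standard (Euclidean) FCM iteration that SPEQTACLE generalizes, by replacing the distance with an automatically estimated norm, alternates the membership and centroid updates

      \[
        u_{ik} \;=\; \Biggl[\sum_{j=1}^{C}\Bigl(\frac{d(x_k, v_i)}{d(x_k, v_j)}\Bigr)^{\!\frac{2}{m-1}}\Biggr]^{-1},
        \qquad
        v_i \;=\; \frac{\sum_{k} u_{ik}^{\,m}\, x_k}{\sum_{k} u_{ik}^{\,m}},
      \]

    where m > 1 is the fuzziness exponent, C the number of classes, and d the chosen distance; in SPEQTACLE the shape of that distance is estimated on an image-by-image basis rather than fixed in advance.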

  7. Recent processing string and fusion algorithm improvements for automated sea mine classification in shallow water

    NASA Astrophysics Data System (ADS)

    Aridgides, Tom; Fernandez, Manuel F.; Dobeck, Gerald J.

    2003-09-01

    A novel sea mine computer-aided-detection / computer-aided-classification (CAD/CAC) processing string has been developed. The overall CAD/CAC processing string consists of pre-processing, adaptive clutter filtering (ACF), normalization, detection, feature extraction, feature orthogonalization, optimal subset feature selection, classification and fusion processing blocks. The range-dimension ACF is matched both to average highlight and shadow information, while also adaptively suppressing background clutter. For each detected object, features are extracted and processed through an orthogonalization transformation, enabling an efficient application of the optimal log-likelihood-ratio-test (LLRT) classification rule, in the orthogonal feature space domain. The classified objects of 4 distinct processing strings are fused using the classification confidence values as features and logic-based, "M-out-of-N", or LLRT-based fusion rules. The utility of the overall processing strings and their fusion was demonstrated with new shallow water high-resolution sonar imagery data. The processing string detection and classification parameters were tuned and the string classification performance was optimized, by appropriately selecting a subset of the original feature set. A significant improvement was made to the CAD/CAC processing string by utilizing a repeated application of the subset feature selection / LLRT classification blocks. It was shown that LLRT-based fusion algorithms outperform the logic based and the "M-out-of-N" ones. The LLRT-based fusion of the CAD/CAC processing strings resulted in up to a nine-fold false alarm rate reduction, compared to the best single CAD/CAC processing string results, while maintaining a constant correct mine classification probability.
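
    The LLRT-based fusion rule referenced above has the generic form (writing c for the vector of per-string classification confidence values, with the class-conditional densities learned from training data; the specific densities used in the paper are not restated here):

      \[
        \Lambda(\mathbf{c}) \;=\; \ln\frac{p(\mathbf{c}\mid \text{mine})}{p(\mathbf{c}\mid \text{clutter})}
        \;\;\mathop{\gtrless}_{\text{clutter}}^{\text{mine}}\;\; \tau ,
      \]

    with the threshold τ chosen to trade correct mine-classification probability against false-alarm rate; the logic-based and "M-out-of-N" rules instead combine the four string decisions with Boolean operations or vote counts.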

  8. An Inexact Newton–Krylov Algorithm for Constrained Diffeomorphic Image Registration

    PubMed Central

    Mang, Andreas; Biros, George

    2016-01-01

    We propose numerical algorithms for solving large deformation diffeomorphic image registration problems. We formulate the nonrigid image registration problem as a problem of optimal control. This leads to an infinite-dimensional partial differential equation (PDE) constrained optimization problem. The PDE constraint consists, in its simplest form, of a hyperbolic transport equation for the evolution of the image intensity. The control variable is the velocity field. Tikhonov regularization on the control ensures well-posedness. We consider standard smoothness regularization based on H1- or H2-seminorms. We augment this regularization scheme with a constraint on the divergence of the velocity field (control variable) rendering the deformation incompressible (Stokes regularization scheme) and thus ensuring that the determinant of the deformation gradient is equal to one, up to the numerical error. We use a Fourier pseudospectral discretization in space and a Chebyshev pseudospectral discretization in time. The latter allows us to reduce the number of unknowns and enables the time-adaptive inversion for nonstationary velocity fields. We use a preconditioned, globalized, matrix-free, inexact Newton–Krylov method for numerical optimization. A parameter continuation is designed to estimate an optimal regularization parameter. Regularity is ensured by controlling the geometric properties of the deformation field. Overall, we arrive at a black-box solver that exploits computational tools that are precisely tailored for solving the optimality system. We study spectral properties of the Hessian, grid convergence, numerical accuracy, computational efficiency, and deformation regularity of our scheme. We compare the designed Newton–Krylov methods with a globalized Picard method (preconditioned gradient descent). We study the influence of a varying number of unknowns in time. The reported results demonstrate excellent numerical accuracy, guaranteed local deformation

  9. Evaluating and comparing algorithms for respiratory motion prediction.

    PubMed

    Ernst, F; Dürichen, R; Schlaefer, A; Schweikard, A

    2013-06-01

    In robotic radiosurgery, it is necessary to compensate for systematic latencies arising from target tracking and mechanical constraints. This compensation is usually achieved by means of an algorithm which computes the future target position. In most scientific works on respiratory motion prediction, only one or two algorithms are evaluated on a limited amount of very short motion traces. The purpose of this work is to gain more insight into the real-world capabilities of respiratory motion prediction methods by evaluating many algorithms on an unprecedented amount of data. We have evaluated six algorithms, the normalized least mean squares (nLMS), recursive least squares (RLS), multi-step linear methods (MULIN), wavelet-based multiscale autoregression (wLMS), extended Kalman filtering, and ε-support vector regression (SVRpred) methods, on an extensive database of 304 respiratory motion traces. The traces were collected during treatment with the CyberKnife (Accuray, Inc., Sunnyvale, CA, USA) and feature an average length of 71 min. Evaluation was done using a graphical prediction toolkit, which is available to the general public, as is the data we used. The experiments show that the nLMS algorithm, which is one of the algorithms currently used in the CyberKnife, is outperformed by all other methods. This is especially true in the case of the wLMS, the SVRpred, and the MULIN algorithms, which perform much better. The nLMS algorithm produces a relative root mean square (RMS) error of 75% or less (i.e., a reduction in error of 25% or more when compared to not doing prediction) in only 38% of the test cases, whereas the MULIN and SVRpred methods reach this level in more than 77%, and the wLMS algorithm in more than 84%, of the test cases. Our work shows that the wLMS algorithm is the most accurate algorithm and does not require parameter tuning, making it an ideal candidate for clinical implementation. Additionally, we have seen that the structure of a patient's respiratory
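
    For context, the nLMS baseline that the other methods outperform is only a few lines; the sketch below is a generic one-step-ahead normalized LMS predictor (real systems predict a latency horizon of several samples ahead, and the exact CyberKnife implementation is not reproduced here).

      # Generic one-step-ahead normalized LMS prediction of a 1-D motion trace.
      import numpy as np

      def nlms_predict(signal, order=10, mu=0.5, eps=1e-6):
          w = np.zeros(order)
          pred = np.zeros_like(signal)
          for n in range(order, len(signal)):
              x = signal[n - order:n][::-1]          # most recent samples first
              pred[n] = w @ x                        # predict the next sample
              e = signal[n] - pred[n]                # prediction error
              w += mu * e * x / (eps + x @ x)        # normalized gradient update
          return pred

      t = np.arange(0, 60, 0.1)
      trace = np.sin(2 * np.pi * 0.25 * t)           # idealized breathing trace
      pred = nlms_predict(trace)
      # Crude relative error of the prediction, for illustration only.
      rel_rms = np.sqrt(np.mean((trace - pred) ** 2)) / np.std(trace)
      print(f"relative RMS error: {rel_rms:.2f}")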

  10. Consistency of color representation in smart phones.

    PubMed

    Dain, Stephen J; Kwan, Benjamin; Wong, Leslie

    2016-03-01

    One of the barriers to the construction of consistent computer-based color vision tests has been the variety of monitors and computers. Consistency of color on a variety of screens has necessitated calibration of each setup individually. Color vision examination with a carefully controlled display has, as a consequence, been a laboratory rather than a clinical activity. Inevitably, smart phones have become a vehicle for color vision tests. They have the advantage that the processor and screen are associated and there are fewer models of smart phones than permutations of computers and monitors. Colorimetric consistency of display within a model may be a given. It may extend across models from the same manufacturer but is unlikely to extend between manufacturers, especially where technologies vary. In this study, we measured the same set of colors in a JPEG file displayed on 11 samples of each of four models of smart phone (iPhone 4s, iPhone 5, Samsung Galaxy S3, and Samsung Galaxy S4) using a Photo Research PR-730. The iPhones use white-LED-backlit LCDs and the Samsungs use OLEDs. The color gamut varies between models, and comparison with the sRGB space shows 61%, 85%, 117%, and 110%, respectively. The iPhones differ markedly from the Samsungs and from one another. This indicates that model-specific color lookup tables will be needed. Within each model, the primaries were quite consistent (despite the age of the phones varying within each sample). The worst case in each model was the blue primary; the 95th percentile limits in the v' coordinate were ±0.008 for the iPhone 4s and ±0.004 for the other three models. The u'v' variation in white points was ±0.004 for the iPhone 4s and ±0.002 for the others, although the spread of white points between models was u'v' ±0.007. The differences are essentially the same for primaries at low luminance. The variation of colors intermediate between the primaries (e.g., red-purple, orange) mirrors the variation in the primaries. The variation in
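
    For reference, the u'v' figures quoted above are CIE 1976 UCS chromaticity coordinates; they follow from measured XYZ tristimulus values by the standard relations (a minimal sketch with illustrative numbers, not the study's data):

        def uv_prime(X, Y, Z):
            """CIE 1976 u'v' chromaticity coordinates from XYZ tristimulus values."""
            denom = X + 15.0 * Y + 3.0 * Z
            return 4.0 * X / denom, 9.0 * Y / denom

        # chromaticity difference between two hypothetical white-point measurements
        u1, v1 = uv_prime(95.0, 100.0, 108.9)
        u2, v2 = uv_prime(94.2, 100.0, 107.5)
        print(((u1 - u2) ** 2 + (v1 - v2) ** 2) ** 0.5)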

  11. GOES-R Algorithm Working Group (AWG)

    NASA Astrophysics Data System (ADS)

    Daniels, Jaime; Goldberg, Mitch; Wolf, Walter; Zhou, Lihang; Lowe, Kenneth

    2009-08-01

    For the next generation of GOES-R instruments to meet stated performance requirements, state-of-the-art algorithms will be needed to convert raw instrument data to calibrated radiances and derived geophysical parameters (atmosphere, land, ocean, and space weather). The GOES-R Program Office (GPO) assigned the NOAA/NESDIS Center for Satellite Research and Applications (STAR) the responsibility for technical leadership and management of GOES-R algorithm development and calibration/validation. STAR responded with the creation of the GOES-R Algorithm Working Group (AWG) to manage and coordinate development and calibration/validation activities for GOES-R proxy data and geophysical product algorithms. The AWG consists of 15 application teams that bring expertise in product algorithms spanning the atmospheric, land, oceanic, and space weather disciplines. Each AWG team will develop new scientific Level-2 algorithms for GOES-R and will also leverage science developments from other communities (other government agencies, universities, and industry) and heritage approaches from current operational GOES and POES product systems. All algorithms will be demonstrated and validated in a scalable operational demonstration environment. All software developed by the AWG will adhere to new standards established within NOAA/NESDIS. The AWG Algorithm Integration Team (AIT) has the responsibility for establishing the system framework, integrating the product software from each team into this framework, enforcing the established software development standards, and preparing system deliveries. The AWG will deliver an Algorithm Theoretical Basis Document (ATBD) for each GOES-R geophysical product as well as Delivered Algorithm Packages (DAPs) to the GPO.

  12. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
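
    A toy sketch of the shift-and-mask search described above (the parameters and search strategy are simplified assumptions, not the NASA implementation):

        def find_shift_mask(keys, max_shift=32, mask_bits=8):
            """Search for a right shift and bit mask that map every key to a distinct value.

            A successful (shift, mask) pair gives a constant-time membership test:
            ((k >> shift) & mask) indexes a table with no collisions and no secondary hashing.
            """
            mask = (1 << mask_bits) - 1
            for shift in range(max_shift + 1):
                mapped = {(k >> shift) & mask for k in keys}
                if len(mapped) == len(keys):          # collision-free mapping found
                    return shift, mask
            return None                               # fall back to other subalgorithm variants

        keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
        print(find_shift_mask(keys))                  # e.g. (0, 255) for these keys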

  13. Genotyping NAT2 with only two SNPs (rs1041983 and rs1801280) outperforms the tagging SNP rs1495741 and is equivalent to the conventional 7-SNP NAT2 genotype.

    PubMed

    Selinski, Silvia; Blaszkewicz, Meinolf; Lehmann, Marie-Louise; Ovsiannikov, Daniel; Moormann, Oliver; Guballa, Christoph; Kress, Alexander; Truss, Michael C; Gerullis, Holger; Otto, Thomas; Barski, Dimitri; Niegisch, Günter; Albers, Peter; Frees, Sebastian; Brenner, Walburgis; Thüroff, Joachim W; Angeli-Greaves, Miriam; Seidel, Thilo; Roth, Gerhard; Dietrich, Holger; Ebbinghaus, Rainer; Prager, Hans M; Bolt, Hermann M; Falkenstein, Michael; Zimmermann, Anna; Klein, Torsten; Reckwitz, Thomas; Roemer, Hermann C; Löhlein, Dietrich; Weistenhöfer, Wobbeke; Schöps, Wolfgang; Hassan Rizvi, Syed Adibul; Aslam, Muhammad; Bánfi, Gergely; Romics, Imre; Steffens, Michael; Ekici, Arif B; Winterpacht, Andreas; Ickstadt, Katja; Schwender, Holger; Hengstler, Jan G; Golka, Klaus

    2011-10-01

    Genotyping N-acetyltransferase 2 (NAT2) is of high relevance for individualized dosing of antituberculosis drugs and bladder cancer epidemiology. In this study we compared a recently published tagging single nucleotide polymorphism (SNP) (rs1495741) to the conventional 7-SNP genotype (G191A, C282T, T341C, C481T, G590A, A803G and G857A haplotype pairs) and systematically analysed if novel SNP combinations outperform the latter. For this purpose, we studied 3177 individuals by PCR and phenotyped 344 individuals by the caffeine test. Although the tagSNP and the 7-SNP genotype showed a high degree of correlation (R=0.933, P<0.0001), the 7-SNP genotype nevertheless outperformed the tagging SNP with respect to specificity (1.0 vs. 0.9444, P=0.0065). Considering all possible SNP combinations in a receiver operating characteristic analysis, we identified a 2-SNP genotype (C282T, T341C) that outperformed the tagging SNP and was equivalent to the 7-SNP genotype. The 2-SNP genotype predicted the correct phenotype with a sensitivity of 0.8643 and a specificity of 1.0. In addition, it predicted the 7-SNP genotype with sensitivity and specificity of 0.9993 and 0.9880, respectively. The prediction of the NAT2 genotype by the 2-SNP genotype performed similarly in populations of Caucasian, Venezuelan and Pakistani background. A 2-SNP genotype predicts NAT2 phenotypes with sensitivity and specificity similar to those of the conventional 7-SNP genotype. This procedure simplifies individualized dosing of NAT2 substrates without loss of sensitivity or specificity. PMID:21750470

  14. Analytic Intermodel Consistent Modeling of Volumetric Human Lung Dynamics.

    PubMed

    Ilegbusi, Olusegun; Seyfi, Behnaz; Neylon, John; Santhanam, Anand P

    2015-10-01

    The human lung undergoes breathing-induced deformation in the form of inhalation and exhalation. Modeling the dynamics is numerically complicated by the lack of information on lung elastic behavior and fluid-structure interactions between air and the tissue. A mathematical method is developed to integrate deformation results from deformable image registration (DIR) and physics-based modeling approaches in order to represent consistent volumetric lung dynamics. The computational fluid dynamics (CFD) simulation assumes the lung is a poro-elastic medium with a spatially distributed elastic property. Simulation is performed on a 3D lung geometry reconstructed from a four-dimensional computed tomography (4DCT) dataset of a human subject. The heterogeneous Young's modulus (YM) is estimated from a linear elastic deformation model with the same lung geometry and 4D lung DIR. The deformation obtained from the CFD is then coupled with the displacement obtained from the 4D lung DIR by means of the Tikhonov regularization (TR) algorithm. The numerical results include 4DCT registration, CFD, and optimal displacement data, which collectively provide a consistent estimate of the volumetric lung dynamics. The fusion method is validated by comparing the optimal displacement with the results obtained from the 4DCT registration. PMID:26292034

  15. Temporal consistent depth map upscaling for 3DTV

    NASA Astrophysics Data System (ADS)

    Schwarz, Sebastian; Sjöström, Mârten; Olsson, Roger

    2014-03-01

    The ongoing success of three-dimensional (3D) cinema fuels increasing efforts to spread the commercial success of 3D to new markets. The possibility of a convincing 3D experience at home, such as through three-dimensional television (3DTV), has generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Acquiring scene depth information is a fundamental task in computer vision, yet complex and error-prone. Dedicated range sensors, such as the Time-of-Flight (ToF) camera, can simplify the scene depth capture process and overcome shortcomings of traditional solutions, such as active or passive stereo analysis. Admittedly, currently available ToF sensors deliver only a limited spatial resolution. However, sophisticated depth upscaling approaches use texture information to match depth and video resolution. At Electronic Imaging 2012 we proposed an upscaling routine based on error energy minimization, weighted with edge information from an accompanying video source. In this article we develop our algorithm further. By adding temporal consistency constraints to the upscaling process, we reduce disturbing depth jumps and flickering artifacts in the final 3DTV content. Temporal consistency in depth maps enhances the 3D experience, leading to a wider acceptance of 3D media content. More content of better quality can boost the commercial success of 3DTV.

  16. Positive Stable Realisation of Fractional Electrical Circuits Consisting of n Subsystem

    NASA Astrophysics Data System (ADS)

    Markowski, Konrad Andrzej

    2015-11-01

    This paper presents a method for determining a positive stable realisation of a fractional continuous-time positive system consisting of n subsystems with one fractional order and with different fractional orders. For the proposed method, a digraph-based algorithm was constructed. We show how the transfer matrix can be realised using electrical circuits consisting of resistances, inductances, capacitances, and source voltages. The proposed method is discussed and illustrated with numerical examples.

  17. A fast non-local image denoising algorithm

    NASA Astrophysics Data System (ADS)

    Dauwe, A.; Goossens, B.; Luong, H. Q.; Philips, W.

    2008-02-01

    In this paper we propose several improvements to the original non-local means algorithm introduced by Buades et al., which obtains state-of-the-art denoising results. The strength of this algorithm is that it exploits the repetitive character of the image to denoise it, unlike conventional denoising algorithms, which typically operate in a local neighbourhood. Due to the enormous amount of weight computations, the original algorithm has a high computational cost. Image quality can be improved over the original algorithm by ignoring the contributions from dissimilar windows. Even though their weights are very small at first sight, the new estimated pixel value can be severely biased due to the many small contributions. This adverse influence of dissimilar windows can be eliminated by setting their corresponding weights to zero. Using a preclassification based on the first three statistical moments, only contributions from similar neighborhoods are computed. To decide whether a window is similar or dissimilar, we derive thresholds for images corrupted with additive white Gaussian noise. Our accelerated approach is further optimized by taking advantage of the symmetry in the weights, which roughly halves the computation time, and by using a lookup table to speed up the weight computations. Compared to the original algorithm, our proposed method produces images with increased PSNR and better visual performance in less computation time. Our proposed method even outperforms state-of-the-art wavelet denoising techniques in both visual quality and PSNR values for images containing a lot of repetitive structures such as textures: the denoised images are much sharper and contain fewer artifacts. The proposed optimizations can also be applied in other image processing tasks that exploit repetitive structures, such as intra-frame super-resolution or detection of digital image forgery.
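
    The preclassification idea can be sketched in a few lines (a slow reference sketch that uses only the first two moments and arbitrary tolerances; the paper uses the first three moments, derived thresholds, weight symmetry, and lookup tables, all omitted here):

        import numpy as np

        def fast_nlm(img, patch=3, search=7, h=10.0, mean_tol=10.0, var_tol=200.0):
            """Simplified non-local means with moment-based preclassification of windows."""
            pad, pr = search // 2, patch // 2
            padded = np.pad(img.astype(float), pad, mode="reflect")
            out = np.zeros(img.shape, dtype=float)
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    ci, cj = i + pad, j + pad
                    ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
                    weights, values = [], []
                    for di in range(-pad + pr, pad - pr + 1):
                        for dj in range(-pad + pr, pad - pr + 1):
                            win = padded[ci + di - pr:ci + di + pr + 1, cj + dj - pr:cj + dj + pr + 1]
                            # preclassification: skip dissimilar windows outright (weight = 0)
                            if abs(win.mean() - ref.mean()) > mean_tol or abs(win.var() - ref.var()) > var_tol:
                                continue
                            d2 = np.mean((win - ref) ** 2)
                            weights.append(np.exp(-d2 / (h * h)))
                            values.append(padded[ci + di, cj + dj])
                    out[i, j] = np.dot(weights, values) / np.sum(weights)
            return out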

  18. Evaluating Temporal Consistency in Marine Biodiversity Hotspots

    PubMed Central

    Barner, Allison K.; Benkwitt, Cassandra E.; Boersma, Kate S.; Cerny-Chipman, Elizabeth B.; Ingeman, Kurt E.; Kindinger, Tye L.; Lindsley, Amy J.; Nelson, Jake; Reimer, Jessica N.; Rowe, Jennifer C.; Shen, Chenchen; Thompson, Kevin A.; Heppell, Selina S.

    2015-01-01

    With the ongoing crisis of biodiversity loss and limited resources for conservation, the concept of biodiversity hotspots has been useful in determining conservation priority areas. However, there has been limited research into how temporal variability in biodiversity may influence conservation area prioritization. To address this information gap, we present an approach to evaluate the temporal consistency of biodiversity hotspots in large marine ecosystems. Using a large scale, public monitoring dataset collected over an eight year period off the US Pacific Coast, we developed a methodological approach for avoiding biases associated with hotspot delineation. We aggregated benthic fish species data from research trawls and calculated mean hotspot thresholds for fish species richness and Shannon’s diversity indices over the eight year dataset. We used a spatial frequency distribution method to assign hotspot designations to the grid cells annually. We found no areas containing consistently high biodiversity through the entire study period based on the mean thresholds, and no grid cell was designated as a hotspot for greater than 50% of the time-series. To test if our approach was sensitive to sampling effort and the geographic extent of the survey, we followed a similar routine for the northern region of the survey area. Our finding of low consistency in benthic fish biodiversity hotspots over time was upheld, regardless of biodiversity metric used, whether thresholds were calculated per year or across all years, or the spatial extent for which we calculated thresholds and identified hotspots. Our results suggest that static measures of benthic fish biodiversity off the US West Coast are insufficient for identification of hotspots and that long-term data are required to appropriately identify patterns of high temporal variability in biodiversity for these highly mobile taxa. Given that ecological communities are responding to a changing climate and other

  19. Evaluating Temporal Consistency in Marine Biodiversity Hotspots.

    PubMed

    Piacenza, Susan E; Thurman, Lindsey L; Barner, Allison K; Benkwitt, Cassandra E; Boersma, Kate S; Cerny-Chipman, Elizabeth B; Ingeman, Kurt E; Kindinger, Tye L; Lindsley, Amy J; Nelson, Jake; Reimer, Jessica N; Rowe, Jennifer C; Shen, Chenchen; Thompson, Kevin A; Heppell, Selina S

    2015-01-01

    With the ongoing crisis of biodiversity loss and limited resources for conservation, the concept of biodiversity hotspots has been useful in determining conservation priority areas. However, there has been limited research into how temporal variability in biodiversity may influence conservation area prioritization. To address this information gap, we present an approach to evaluate the temporal consistency of biodiversity hotspots in large marine ecosystems. Using a large scale, public monitoring dataset collected over an eight year period off the US Pacific Coast, we developed a methodological approach for avoiding biases associated with hotspot delineation. We aggregated benthic fish species data from research trawls and calculated mean hotspot thresholds for fish species richness and Shannon's diversity indices over the eight year dataset. We used a spatial frequency distribution method to assign hotspot designations to the grid cells annually. We found no areas containing consistently high biodiversity through the entire study period based on the mean thresholds, and no grid cell was designated as a hotspot for greater than 50% of the time-series. To test if our approach was sensitive to sampling effort and the geographic extent of the survey, we followed a similar routine for the northern region of the survey area. Our finding of low consistency in benthic fish biodiversity hotspots over time was upheld, regardless of biodiversity metric used, whether thresholds were calculated per year or across all years, or the spatial extent for which we calculated thresholds and identified hotspots. Our results suggest that static measures of benthic fish biodiversity off the US West Coast are insufficient for identification of hotspots and that long-term data are required to appropriately identify patterns of high temporal variability in biodiversity for these highly mobile taxa. Given that ecological communities are responding to a changing climate and other

  20. Density neutron self-consistent caliper

    SciTech Connect

    Paske, W.C.; Rodney, P.F.; Roeder, R.A.

    1988-12-20

    This patent describes a system for determining the caliper of a borehole during drilling operations in an earth formation, comprising: first means adapted to make a first measurement of a first physical characteristic of an interior property of the formation; second means adapted to make a second measurement of a second physical characteristic of an interior property of the formation, the second physical characteristic being different from the first physical characteristic; means for determining the lithology of the formation; and means to compare the first and second measurements and to initiate an iteration process, based at least in part upon the determined lithology, to determine a self-consistent borehole caliper.

  1. Consistent Realization of ITRS and ICRS

    NASA Astrophysics Data System (ADS)

    Seitz, M.; Steigenberger, P.; Artz, T.

    2012-12-01

    This paper deals with the consistent realization of the International Terrestrial Reference System (ITRS) and the International Celestial Reference System (ICRS). DGFI computes such a common realization for the first time by combining normal equations of the space geodetic techniques of Very Long Baseline Interferometry (VLBI), Satellite Laser Ranging (SLR), and Global Navigation Satellite Systems (GNSS). The results for the Celestial Reference Frame (CRF) are compared to a classical VLBI-only CRF solution. It turns out that the combination of EOP from the different space geodetic techniques impacts the CRF, in particular the VCS (VLBA Calibrator Survey) sources.

  2. Consistent Predictions of Future Forest Mortality

    NASA Astrophysics Data System (ADS)

    McDowell, N. G.

    2014-12-01

    We examined empirical and model-based estimates of current and future forest mortality of conifers in the northern hemisphere. Consistent water potential thresholds were found that resulted in mortality of our case study species, pinon pine and one-seed juniper. Extending these results with IPCC climate scenarios suggests that most existing trees in this region (SW USA) will be dead by 2050. Further, independent estimates of future mortality for the entire coniferous biome suggest widespread mortality by 2100. The validity, assumptions, and implications of these results are discussed.

  3. Using consistent subcuts for detecting stable properties

    NASA Technical Reports Server (NTRS)

    Marzullo, Keith; Sabel, Laura

    1992-01-01

    We present a general protocol for detecting whether a property holds in a distributed system, where the property is a member of a subclass of stable properties we call the locally stable properties. Our protocol is based on a decentralized method for constructing a maximal subset of the local states that are mutually consistent, which in turn is based on a weakened version of vectored time stamps. The structure of our protocol lends itself to refinement, and we demonstrate its utility by deriving some specialized property-detection protocols, including two previously known protocols that are known to be effective.
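
    The flavor of such consistency tests can be illustrated with the standard (non-weakened) vector-clock condition for a consistent cut; the sketch below is generic textbook material, not the paper's weakened-timestamp protocol:

        def pairwise_consistent(vc_i, i, vc_j, j):
            """Local states i and j can belong to the same consistent cut iff neither state
            reflects more of the other process's history than that process has itself executed."""
            return vc_i[i] >= vc_j[i] and vc_j[j] >= vc_i[j]

        def is_consistent_cut(clocks):
            """A cut (one local state per process, given by its vector clock) is consistent
            iff every pair of its local states is mutually consistent."""
            n = len(clocks)
            return all(pairwise_consistent(clocks[i], i, clocks[j], j)
                       for i in range(n) for j in range(i + 1, n))

        # two processes: P0 after 3 local events, P1 after 2 local events having seen 1 event of P0
        print(is_consistent_cut([[3, 0], [1, 2]]))   # True
        print(is_consistent_cut([[1, 0], [3, 2]]))   # False: P1's state reflects 3 events of P0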

  4. Consistency relations for the conformal mechanism

    SciTech Connect

    Creminelli, Paolo; Joyce, Austin; Khoury, Justin; Simonović, Marko E-mail: joyceau@sas.upenn.edu E-mail: marko.simonovic@sissa.it

    2013-04-01

    We systematically derive the consistency relations associated to the non-linearly realized symmetries of theories with spontaneously broken conformal symmetry but with a linearly-realized de Sitter subalgebra. These identities relate (N+1)-point correlation functions with a soft external Goldstone to N-point functions. These relations have direct implications for the recently proposed conformal mechanism for generating density perturbations in the early universe. We study the observational consequences, in particular a novel one-loop contribution to the four-point function, relevant for the stochastic scale-dependent bias and CMB μ-distortion.

  5. Parallel scheduling algorithms

    SciTech Connect

    Dekel, E.; Sahni, S.

    1983-01-01

    Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.

  6. Developmental Algorithms Have Meaning!

    ERIC Educational Resources Information Center

    Green, John

    1997-01-01

    Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…

  7. Swarm-based algorithm for phase unwrapping.

    PubMed

    da Silva Maciel, Lucas; Albertazzi, Armando G

    2014-08-20

    A novel algorithm for phase unwrapping based on swarm intelligence is proposed. The algorithm was designed based on three main goals: maximum coverage of reliable information, focused effort for better efficiency, and reliable unwrapping. Experiments were performed, and a new agent was designed to follow a simple set of five rules in order to collectively achieve these goals. These rules consist of random walking for unwrapping and searching, ambiguity evaluation by comparing unwrapped regions, and a replication behavior responsible for the good distribution of agents throughout the image. The results were comparable with the results from established methods. The swarm-based algorithm was able to suppress ambiguities better than the flood-fill algorithm without relying on lengthy processing times. In addition, future developments such as parallel processing and better-quality evaluation present great potential for the proposed method. PMID:25321125

  8. Image change detection algorithms: a systematic survey.

    PubMed

    Radke, Richard J; Andra, Srinivas; Al-Kofahi, Omar; Roysam, Badrinath

    2005-03-01

    Detecting regions of change in multiple images of the same scene taken at different times is of widespread interest due to a large number of applications in diverse disciplines, including remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure, and underwater sensing. This paper presents a systematic survey of the common processing steps and core decision rules in modern change detection algorithms, including significance and hypothesis testing, predictive models, the shading model, and background modeling. We also discuss important preprocessing methods, approaches to enforcing the consistency of the change mask, and principles for evaluating and comparing the performance of change detection algorithms. It is hoped that our classification of algorithms into a relatively small number of categories will provide useful guidance to the algorithm designer. PMID:15762326
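
    The simplest decision rule covered by such surveys is a per-pixel significance test on the frame difference; a minimal sketch (the noise level and threshold factor are illustrative assumptions):

        import numpy as np

        def simple_change_mask(img1, img2, sigma=5.0, alpha=3.0):
            """Flag pixels whose intensity difference exceeds alpha noise standard deviations.

            This is only the most basic decision rule discussed in the survey; predictive
            models, shading models, and background modeling refine it considerably.
            """
            diff = img2.astype(float) - img1.astype(float)
            return np.abs(diff) > alpha * sigma      # boolean change mask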

  9. Genetic algorithms for the vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Volna, Eva

    2016-06-01

    The Vehicle Routing Problem (VRP) is one of the most challenging combinatorial optimization tasks. The problem consists in designing an optimal set of routes for a fleet of vehicles in order to serve a given set of customers. Evolutionary algorithms are general iterative algorithms for combinatorial optimization. These algorithms have been found to be very effective and robust in solving numerous problems from a wide range of application domains. The VRP is known to be NP-hard; hence many heuristic procedures for its solution have been suggested. For such problems it is often desirable to obtain approximate solutions that can be found fast enough and are sufficiently accurate for the purpose. In this paper we present an experimental study indicating that genetic algorithms are well suited to the vehicle routing problem.
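
    As a compact illustration of the approach (not the paper's implementation; the toy instance, operators, and parameters below are arbitrary assumptions), a permutation-encoded GA for a capacitated VRP can be sketched as:

        import random
        import math

        # toy instance: depot at index 0, customers 1..7 with unit demand, vehicle capacity 4
        coords = [(0, 0), (2, 4), (5, 1), (6, 6), (1, 7), (8, 3), (3, 2), (7, 8)]
        capacity = 4

        def dist(a, b):
            return math.dist(coords[a], coords[b])

        def route_length(perm):
            """Split a customer permutation into capacity-feasible routes and sum their lengths."""
            total, load, prev = 0.0, 0, 0
            for c in perm:
                if load + 1 > capacity:              # return to depot and start a new route
                    total += dist(prev, 0)
                    load, prev = 0, 0
                total += dist(prev, c)
                load, prev = load + 1, c
            return total + dist(prev, 0)

        def order_crossover(p1, p2):
            """Classical OX crossover on permutations."""
            a, b = sorted(random.sample(range(len(p1)), 2))
            child = [None] * len(p1)
            child[a:b] = p1[a:b]
            rest = [c for c in p2 if c not in child[a:b]]
            holes = [i for i in range(len(p1)) if child[i] is None]
            for i, c in zip(holes, rest):
                child[i] = c
            return child

        def genetic_vrp(pop_size=60, generations=300, mutation_rate=0.2):
            customers = list(range(1, len(coords)))
            pop = [random.sample(customers, len(customers)) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=route_length)
                new_pop = pop[:2]                         # elitism
                while len(new_pop) < pop_size:
                    p1, p2 = random.sample(pop[:20], 2)   # truncation selection
                    child = order_crossover(p1, p2)
                    if random.random() < mutation_rate:   # swap mutation
                        i, j = random.sample(range(len(child)), 2)
                        child[i], child[j] = child[j], child[i]
                    new_pop.append(child)
                pop = new_pop
            best = min(pop, key=route_length)
            return best, route_length(best)

        print(genetic_vrp())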

  10. A signal invariant wavelet function selection algorithm.

    PubMed

    Garg, Girisha

    2016-04-01

    This paper addresses the problem of mother wavelet selection for wavelet signal processing in feature extraction and pattern recognition. The problem is formulated as an optimization criterion, where a wavelet library is defined using a set of parameters to find the best mother wavelet function. For estimating the fitness function, adopted to evaluate the performance of the wavelet function, analysis of variance is used. A genetic algorithm is used to optimize the search for the best mother wavelet function. For experimental evaluation, solutions for best mother wavelet selection are evaluated on various biomedical signal classification problems, where the solutions of the proposed algorithm are assessed and compared with manual trial-and-error methods. The results show that the solutions of the automated mother wavelet selection algorithm are consistent with the manual selection of wavelet functions. The algorithm is found to be invariant to the type of signals used for classification. PMID:26253283

  11. An Intelligent Model for Pairs Trading Using Genetic Algorithms

    PubMed Central

    Huang, Chien-Feng; Hsu, Chi-Jen; Chen, Chi-Chung; Chang, Bao Rong; Li, Chen-An

    2015-01-01

    Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice. PMID:26339236

  12. Visual tracking method based on cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm that is based on the obligate brood parasitic behavior of some cuckoo species in combination with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS is presented to solve the visual tracking problem. The relationship between optimization and visual tracking is examined, and the sensitivity and adjustment of the CS parameters in the tracking system are studied experimentally. To demonstrate the tracking ability of a CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker against six state-of-the-art trackers, namely, the particle filter, meanshift, PSO, ensemble tracker, fragments tracker, and compressive tracker, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.
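
    A bare-bones cuckoo search for a generic objective function looks roughly as follows (a sketch of the metaheuristic itself with arbitrary parameters; in the tracker the objective would score candidate target states against the appearance model):

        import numpy as np

        def levy_step(dim, beta=1.5):
            """Mantegna's algorithm for Lévy-distributed step lengths."""
            from math import gamma, sin, pi
            sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                     (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = np.random.randn(dim) * sigma
            v = np.random.randn(dim)
            return u / np.abs(v) ** (1 / beta)

        def cuckoo_search(f, dim=2, n_nests=15, pa=0.25, iters=500, lb=-5.0, ub=5.0):
            """Minimize f over a box with basic cuckoo search (Lévy flights + nest abandonment)."""
            nests = np.random.uniform(lb, ub, (n_nests, dim))
            fitness = np.array([f(x) for x in nests])
            best = nests[fitness.argmin()].copy()
            n_abandon = max(1, int(pa * n_nests))
            for _ in range(iters):
                # lay a cuckoo egg by a Lévy flight around a random nest, biased toward the best
                i = np.random.randint(n_nests)
                new = np.clip(nests[i] + 0.01 * levy_step(dim) * (nests[i] - best), lb, ub)
                j = np.random.randint(n_nests)
                if f(new) < fitness[j]:
                    nests[j], fitness[j] = new, f(new)
                # abandon a fraction pa of the worst nests and rebuild them randomly
                worst = fitness.argsort()[-n_abandon:]
                nests[worst] = np.random.uniform(lb, ub, (n_abandon, dim))
                fitness[worst] = [f(x) for x in nests[worst]]
                best = nests[fitness.argmin()].copy()
            return best, fitness.min()

        print(cuckoo_search(lambda x: np.sum(x ** 2)))   # toy objective for demonstration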

  13. An Intelligent Model for Pairs Trading Using Genetic Algorithms.

    PubMed

    Huang, Chien-Feng; Hsu, Chi-Jen; Chen, Chi-Chung; Chang, Bao Rong; Li, Chen-An

    2015-01-01

    Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice. PMID:26339236

  14. Efficiency of tabu-search-based conformational search algorithms.

    PubMed

    Grebner, Christoph; Becker, Johannes; Stepanenko, Svetlana; Engels, Bernd

    2011-07-30

    Efficient conformational search or sampling approaches play an integral role in molecular modeling, leading to a strong demand for even faster and more reliable conformer search algorithms. This article compares the efficiency of a molecular dynamics method, a simulated annealing method, and the basin hopping (BH) approach (which are widely used in this field) with a previously suggested tabu-search-based approach called gradient only tabu search (GOTS). The study emphasizes the success of the GOTS procedure and, more importantly, shows that an approach which combines BH and GOTS outperforms the single methods in efficiency and speed. We also show that ring structures built by a hydrogen bond are useful as starting points for conformational search investigations of peptides and organic ligands with biological activities, especially in structures that contain multiple rings. PMID:21541959

  15. Multiobjective Optimization of Rocket Engine Pumps Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2001-01-01

    A design optimization method for turbopumps of cryogenic rocket engines has been developed. Multiobjective Evolutionary Algorithm (MOEA) is used for multiobjective pump design optimizations. Performances of design candidates are evaluated by using the meanline pump flow modeling method based on the Euler turbine equation coupled with empirical correlations for rotor efficiency. To demonstrate the feasibility of the present approach, a single stage centrifugal pump design and multistage pump design optimizations are presented. In both cases, the present method obtains very reasonable Pareto-optimal solutions that include some designs outperforming the original design in total head while reducing input power by one percent. Detailed observation of the design results also reveals some important design criteria for turbopumps in cryogenic rocket engines. These results demonstrate the feasibility of the EA-based design optimization method in this field.

  16. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    NASA Astrophysics Data System (ADS)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on the CPU is considerably time-consuming, especially for large images. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the characteristics of the different memory types, an improved scheme of our method is developed, which exploits GPU shared memory instead of global memory and further increases the efficiency. Experimental results show that the two proposed algorithms outperform the traditional sequential OpenCV-based method in terms of computing speed.
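
    For reference, the per-pixel operation being parallelized is the classical Laplacian sharpening filter; a minimal CPU sketch (NumPy rather than CUDA, with an assumed 4-neighbour kernel, strength parameter, and 8-bit range):

        import numpy as np

        def laplacian_sharpen(img, strength=1.0):
            """Reference (CPU) Laplacian sharpening: out = img - strength * laplacian(img).

            Each output pixel depends only on its 4-neighbourhood, which is what makes the
            per-pixel work trivially parallelizable on a GPU as described above.
            """
            f = img.astype(float)
            padded = np.pad(f, 1, mode="edge")
            lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                   padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * f)
            return np.clip(f - strength * lap, 0, 255).astype(img.dtype)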

  17. Improved robust point matching with label consistency

    NASA Astrophysics Data System (ADS)

    Bhagalia, Roshni; Miller, James V.; Roy, Arunabha

    2010-03-01

    Robust point matching (RPM) jointly estimates correspondences and non-rigid warps between unstructured point-clouds. RPM does not, however, utilize information of the topological structure or group memberships of the data it is matching. In numerous medical imaging applications, each extracted point can be assigned group membership attributes or labels based on segmentation, partitioning, or clustering operations. For example, points on the cortical surface of the brain can be grouped according to the four lobes. Estimated warps should enforce the topological structure of such point-sets, e.g. points belonging to the temporal lobe in the two point-sets should be mapped onto each other. We extend the RPM objective function to incorporate group membership labels by including a Label Entropy (LE) term. LE discourages mappings that transform points within a single group in one point-set onto points from multiple distinct groups in the other point-set. The resulting Labeled Point Matching (LPM) algorithm requires a very simple modification to the standard RPM update rules. We demonstrate the performance of LPM on coronary trees extracted from cardiac CT images. We partitioned the point sets into coronary sections without a priori anatomical context, yielding potentially disparate labelings (e.g. [1,2,3] --> [a,b,c,d]). LPM simultaneously estimated label correspondences, point correspondences, and a non-linear warp. Non-matching branches were treated wholly through the standard RPM outlier process akin to non-matching points. Results show LPM produces warps that are more physically meaningful than RPM alone. In particular, LPM mitigates unrealistic branch crossings and results in more robust non-rigid warp estimates.

  18. A new machine learning algorithm for removal of salt and pepper noise

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Adhami, Reza; Fu, Jian

    2015-07-01

    Supervised machine learning algorithms have been extensively studied and applied to different fields of image processing in past decades. This paper proposes a new machine learning algorithm, called margin setting (MS), for restoring images that are corrupted by salt and pepper impulse noise. Margin setting generates a decision surface to classify noise pixels and non-noise pixels. After the noise pixels are detected, a modified ranked order mean (ROM) filter is used to replace the corrupted pixels for image reconstruction. The margin setting algorithm is tested with grayscale and color images for different noise densities. The experimental results are compared with those of the support vector machine (SVM) and standard median filter (SMF). The results show that margin setting outperforms these methods with higher Peak Signal-to-Noise Ratio (PSNR), lower mean square error (MSE), higher image enhancement factor (IEF), and higher Structural Similarity Index (SSIM).
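
    A much simpler stand-in for the detect-then-restore pipeline is sketched below (thresholding extreme values plus a median over non-noise neighbours; the paper replaces this naive detector with the margin setting classifier and uses a modified ROM filter):

        import numpy as np

        def remove_salt_pepper(img, low=0, high=255):
            """Detect extreme-valued pixels and replace each with the median of its
            non-noise neighbours (a crude approximation of detection + ROM filtering)."""
            out = img.astype(float).copy()
            noisy = (img == low) | (img == high)
            padded = np.pad(out, 1, mode="reflect")
            noisy_pad = np.pad(noisy, 1, mode="reflect")
            for i, j in zip(*np.nonzero(noisy)):
                window = padded[i:i + 3, j:j + 3]
                clean = window[~noisy_pad[i:i + 3, j:j + 3]]
                out[i, j] = np.median(clean) if clean.size else np.median(window)
            return out.astype(img.dtype)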

  19. A consensus algorithm for approximate string matching and its application to QRS complex detection

    NASA Astrophysics Data System (ADS)

    Alba, Alfonso; Mendez, Martin O.; Rubio-Rincon, Miguel E.; Arce-Santana, Edgar R.

    2016-08-01

    In this paper, a novel algorithm for approximate string matching (ASM) is proposed. The novelty resides in the fact that, unlike most other methods, the proposed algorithm is not based on the Hamming or Levenshtein distances, but instead computes a score for each symbol in the search text based on a consensus measure. Those symbols with sufficiently high scores will likely correspond to approximate instances of the pattern string. To demonstrate the usefulness of the proposed method, it has been applied to the detection of QRS complexes in electrocardiographic signals with competitive results when compared against the classic Pan-Tompkins (PT) algorithm. The proposed method outperformed PT in 72% of the test cases, with no extra computational cost.

  20. A low complexity reweighted proportionate affine projection algorithm with memory and row action projection

    NASA Astrophysics Data System (ADS)

    Liu, Jianming; Grant, Steven L.; Benesty, Jacob

    2015-12-01

    A new reweighted proportionate affine projection algorithm (RPAPA) with memory and row action projection (MRAP) is proposed in this paper. The reweighted PAPA is derived from a family of sparseness measures, which demonstrate performance similar to mu-law and the l0-norm PAPA but with lower computational complexity. The sparseness of the channel is taken into account to improve the performance for dispersive system identification. Meanwhile, the memory of the filter's coefficients is combined with row action projections (RAP) to significantly reduce computational complexity. Simulation results demonstrate that the proposed RPAPA MRAP algorithm outperforms both the affine projection algorithm (APA) and PAPA, and has performance similar to l0 PAPA and mu-law PAPA, in terms of convergence speed and tracking ability. Meanwhile, the proposed RPAPA MRAP has much lower computational complexity than PAPA, mu-law PAPA, and l0 PAPA, which makes it very appealing for real-time implementation.

  1. Rain detection and removal algorithm using motion-compensated non-local mean filter

    NASA Astrophysics Data System (ADS)

    Song, B. C.; Seo, S. J.

    2015-03-01

    This paper proposes a novel rain detection and removal algorithm robust against camera motion. It is very difficult to detect and remove rain in video with camera motion, so most previous works assume that the camera is fixed; however, this limits their practical applicability. The proposed algorithm initially detects possible rain streaks by using spatial properties such as the luminance and structure of rain streaks. Then, the rain streak candidates are selected based on a Gaussian distribution model. Next, a non-rain block matching algorithm is performed between adjacent frames to find, for each block containing rain pixels, similar blocks in the neighboring frames. If such similar blocks are found, the rain region of the block is reconstructed by non-local mean (NLM) filtering over these similar neighbors. Experimental results show that the proposed method outperforms previous works in terms of objective and subjective visual quality.

  2. The Dropout Learning Algorithm

    PubMed Central

    Baldi, Pierre; Sadowski, Peter

    2014-01-01

    Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful for understanding the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between the normalized geometric means of logistic functions and the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
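
    A minimal sketch of the Bernoulli gating and of the deterministic test-time approximation discussed above (illustrative shapes and probabilities; this is generic dropout, not the paper's code):

        import numpy as np

        def logistic(x):
            return 1.0 / (1.0 + np.exp(-x))

        def dropout_forward(x, w, b, p_keep=0.5, train=True, rng=np.random):
            """One logistic layer with dropout on its inputs.

            Training: multiply inputs by Bernoulli(p_keep) gating variables.
            Testing: feed the expected input p_keep * x, the deterministic approximation
            of the ensemble average analyzed in the paper.
            """
            if train:
                gate = rng.binomial(1, p_keep, size=x.shape)   # Bernoulli gating variables
                return logistic((gate * x) @ w + b)
            return logistic((p_keep * x) @ w + b)

        x = np.random.randn(4, 10)                             # 4 examples, 10 inputs
        w, b = np.random.randn(10, 3), np.zeros(3)
        mc_mean = np.mean([dropout_forward(x, w, b, train=True) for _ in range(2000)], axis=0)
        print(np.max(np.abs(mc_mean - dropout_forward(x, w, b, train=False))))  # small approximation gap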

  3. Consistency and consensus models for group decision-making with uncertain 2-tuple linguistic preference relations

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Guo, Chonghui

    2016-08-01

    Due to the uncertainty of the decision environment and the lack of knowledge, decision-makers may use uncertain linguistic preference relations to express their preferences over alternatives and criteria. For group decision-making problems with preference relations, it is important to consider the individual consistency and the group consensus before aggregating the preference information. In this paper, consistency and consensus models for group decision-making with uncertain 2-tuple linguistic preference relations (U2TLPRs) are investigated. First of all, a formula which can construct a consistent U2TLPR from the original preference relation is presented. Based on the consistent preference relation, the individual consistency index for a U2TLPR is defined. An iterative algorithm is then developed to improve the individual consistency of a U2TLPR. To help decision-makers reach consensus in group decision-making under uncertain linguistic environment, the individual consensus and group consensus indices for group decision-making with U2TLPRs are defined. Based on the two indices, an algorithm for consensus reaching in group decision-making with U2TLPRs is also developed. Finally, two examples are provided to illustrate the effectiveness of the proposed algorithms.

  4. Thermodynamically Consistent Coarse-Graining of Polymers

    NASA Astrophysics Data System (ADS)

    Guenza, Marina

    2015-03-01

    Structural and dynamical properties of macromolecular liquids, melts and mixtures, bridge an extensive range of length- and time-scales. For these systems, the computational limitations of the atomistic description prevent the study of the properties of interest and coarse-grained models remain the only viable approach. In coarse-grained models, structural and thermodynamic consistency across multiple length scales is essential for the predictive role of multi-scale modeling and molecular dynamic simulations that use mesoscale descriptions. This talk presents a coarse-graining approach that conserves structural and thermodynamic quantities independent of the extent of coarse-graining, and describes a model for the reconstruction of the dynamics measured in mesoscale simulations of the coarse-grained system. Some of the general challenges of preserving structural and thermodynamic consistency in coarse-grained models are discussed together with the conditions by which the problem is lessened. This material is based upon work partially supported by the National Science Foundation under Grant No. CHE-1362500.

  5. Toward an internally consistent pressure scale

    PubMed Central

    Fei, Yingwei; Ricolleau, Angele; Frank, Mark; Mibe, Kenji; Shen, Guoyin; Prakapenka, Vitali

    2007-01-01

    Our ability to interpret seismic observations, including the seismic discontinuities and the density and velocity profiles in the earth's interior, is critically dependent on the accuracy of pressure measurements up to 364 GPa at high temperature. Pressure scales based on the reduced shock-wave equations of state alone may predict pressure variations of up to 7% in the megabar pressure range at room temperature and an even higher percentage at high temperature, leading to large uncertainties in understanding the nature of the seismic discontinuities and the chemical composition of the earth's interior. Here, we report compression data of gold (Au), platinum (Pt), the NaCl-B2 phase, and solid neon (Ne) at 300 K and high temperatures up to megabar pressures. Combined with existing experimental data, the compression data were used to establish internally consistent thermal equations of state of Au, Pt, NaCl-B2, and solid Ne. The internally consistent pressure scales provide a tractable, accurate baseline for comparing high pressure–temperature experimental data with theoretical calculations and the seismic observations, thereby advancing our understanding of fundamental high-pressure phenomena and the chemistry and physics of the earth's interior. PMID:17483460

  6. Kinematically consistent models of viscoelastic stress evolution

    NASA Astrophysics Data System (ADS)

    DeVries, Phoebe M. R.; Meade, Brendan J.

    2016-05-01

    Following large earthquakes, coseismic stresses at the base of the seismogenic zone may induce rapid viscoelastic deformation in the lower crust and upper mantle. As stresses diffuse away from the primary slip surface in these lower layers, the magnitudes of stress at distant locations (>1 fault length away) may slowly increase. This stress relaxation process has been used to explain delayed earthquake triggering sequences like the 1992 Mw = 7.3 Landers and 1999 Mw = 7.1 Hector Mine earthquakes in California. However, a conceptual difficulty associated with these models is that the magnitudes of stresses asymptote to constant values over long time scales. This effect introduces persistent perturbations to the total stress field over many earthquake cycles. Here we present a kinematically consistent viscoelastic stress transfer model where the total perturbation to the stress field at the end of the earthquake cycle is zero everywhere. With kinematically consistent models, hypotheses about the potential likelihood of viscoelastically triggered earthquakes may be based on the timing of stress maxima, rather than on any arbitrary or empirically constrained stress thresholds. Based on these models, we infer that earthquakes triggered by viscoelastic earthquake cycle effects may be most likely to occur during the first 50% of the earthquake cycle regardless of the assumed long-term and transient viscosities.

  7. Consistent resolution of some relativistic quantum paradoxes

    SciTech Connect

    Griffiths, Robert B.

    2002-12-01

    A relativistic version of the (consistent or decoherent) histories approach to quantum theory is developed on the basis of earlier work by Hartle, and used to discuss relativistic forms of the paradoxes of spherical wave packet collapse, Bohm's formulation of the Einstein-Podolsky-Rosen paradox, and Hardy's paradox. It is argued that wave function collapse is not needed for introducing probabilities into relativistic quantum mechanics, and in any case should never be thought of as a physical process. Alternative approaches to stochastic time dependence can be used to construct a physical picture of the measurement process that is less misleading than collapse models. In particular, one can employ a coarse-grained but fully quantum-mechanical description in which particles move along trajectories, with behavior under Lorentz transformations the same as in classical relativistic physics, and detectors are triggered by particles reaching them along such trajectories. States entangled between spacelike separate regions are also legitimate quantum descriptions, and can be consistently handled by the formalism presented here. The paradoxes in question arise because of using modes of reasoning which, while correct for classical physics, are inconsistent with the mathematical structure of quantum theory, and are resolved (or tamed) by using a proper quantum analysis. In particular, there is no need to invoke, nor any evidence for, mysterious long-range superluminal influences, and thus no incompatibility, at least from this source, between relativity theory and quantum mechanics.

  8. Enredo and Pecan: Genome-wide mammalian consistency-based multiple alignment with paralogs

    PubMed Central

    Paten, Benedict; Herrero, Javier; Beal, Kathryn; Fitzgerald, Stephen; Birney, Ewan

    2008-01-01

    Pairwise whole-genome alignment involves the creation of a homology map, capable of performing a near complete transformation of one genome into another. For multiple genomes this problem is generalized to finding a set of consistent homology maps for converting each genome in the set of aligned genomes into any of the others. The problem can be divided into two principal stages. First, the partitioning of the input genomes into a set of colinear segments, a process which essentially deals with the complex processes of rearrangement. Second, the generation of a base pair level alignment map for each colinear segment. We have developed a new genome-wide segmentation program, Enredo, which produces colinear segments from extant genomes handling rearrangements, including duplications. We have then applied the new alignment program Pecan, which makes the consistency alignment methodology practical at a large scale, to create a new set of genome-wide mammalian alignments. We test both Enredo and Pecan using novel and existing assessment analyses that incorporate both real biological data and simulations, and show that both independently and in combination they outperform existing programs. Alignments from our pipeline are publicly available within the Ensembl genome browser. PMID:18849524

  9. Optimal consistency in microRNA expression analysis using reference-gene-based normalization.

    PubMed

    Wang, Xi; Gardiner, Erin J; Cairns, Murray J

    2015-05-01

    Normalization of high-throughput molecular expression profiles secures differential expression analysis between samples of different phenotypes or biological conditions, and facilitates comparison between experimental batches. While the same general principles apply to microRNA (miRNA) normalization, there is mounting evidence that global shifts in their expression patterns occur in specific circumstances, which pose a challenge for normalizing miRNA expression data. As an alternative to global normalization, which has the propensity to flatten large trends, normalization against constitutively expressed reference genes presents an advantage through their relative independence. Here we investigated the performance of reference-gene-based (RGB) normalization for differential miRNA expression analysis of microarray expression data, and compared the results with other normalization methods, including: quantile, variance stabilization, robust spline, simple scaling, rank invariant, and Loess regression. The comparative analyses were executed using miRNA expression in tissue samples derived from subjects with schizophrenia and non-psychiatric controls. We proposed a consistency criterion for evaluating methods by examining the overlapping of differentially expressed miRNAs detected using different partitions of the whole data. Based on this criterion, we found that RGB normalization generally outperformed global normalization methods. Thus we recommend the application of RGB normalization for miRNA expression data sets, and believe that this will yield a more consistent and useful readout of differentially expressed miRNAs, particularly in biological conditions characterized by large shifts in miRNA expression. PMID:25797570
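
    The consistency criterion can be realized, for example, as the overlap of differentially expressed features detected on disjoint random halves of the samples (a generic sketch assuming SciPy is available; the test, threshold, and partitioning scheme are assumptions rather than the paper's exact protocol):

        import numpy as np
        from scipy import stats

        def de_set(expr, labels, alpha=0.05):
            """Indices of features called differentially expressed by a per-feature t-test."""
            case, ctrl = expr[:, labels == 1], expr[:, labels == 0]
            pvals = np.array([stats.ttest_ind(case[g], ctrl[g]).pvalue for g in range(expr.shape[0])])
            return set(np.where(pvals < alpha)[0])

        def overlap_consistency(expr, labels, rng=np.random.default_rng(0)):
            """Jaccard overlap of the DE sets detected on two random halves of the samples."""
            idx = rng.permutation(expr.shape[1])
            half1, half2 = idx[: len(idx) // 2], idx[len(idx) // 2:]
            s1 = de_set(expr[:, half1], labels[half1])
            s2 = de_set(expr[:, half2], labels[half2])
            return len(s1 & s2) / max(len(s1 | s2), 1)

        expr = np.random.randn(200, 40)              # 200 features x 40 samples (synthetic)
        labels = np.array([0] * 20 + [1] * 20)
        expr[:20, labels == 1] += 1.0                # plant a signal in the first 20 features
        print(overlap_consistency(expr, labels))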

  10. A quasi-Monte Carlo Metropolis algorithm

    PubMed Central

    Owen, Art B.; Tribble, Seth D.

    2005-01-01

    This work presents a version of the Metropolis–Hastings algorithm using quasi-Monte Carlo inputs. We prove that the method yields consistent estimates in some problems with finite state spaces and completely uniformly distributed inputs. In some numerical examples, the proposed method is much more accurate than ordinary Metropolis–Hastings sampling. PMID:15956207
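
    The structure of the method is easiest to see by writing Metropolis–Hastings so that its driving uniforms are an explicit input (a sketch with a pseudorandom stream and a toy target; the paper's contribution is substituting a completely uniformly distributed quasi-Monte Carlo sequence for that stream):

        import numpy as np

        def metropolis(logpi, proposal_step, x0, uniforms):
            """Random-walk Metropolis driven by an explicit stream of uniform numbers."""
            x, chain = x0, []
            for k in range(0, len(uniforms) - 1, 2):
                z = proposal_step * (2.0 * uniforms[k] - 1.0)        # symmetric proposal from one uniform
                y = x + z
                if np.log(uniforms[k + 1] + 1e-300) < logpi(y) - logpi(x):   # accept/reject with the next uniform
                    x = y
                chain.append(x)
            return np.array(chain)

        # toy target: standard normal; replace the i.i.d. stream with a CUD/QMC sequence to mimic the paper
        u = np.random.rand(20000)
        samples = metropolis(lambda t: -0.5 * t * t, 1.0, 0.0, u)
        print(samples.mean(), samples.var())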

  11. FRESH-FRI-Based Single-Image Super-Resolution Algorithm.

    PubMed

    Wei, Xiaoyao; Dragotti, Pier Luigi

    2016-08-01

    In this paper, we consider the problem of single image super-resolution and propose a novel algorithm that outperforms state-of-the-art methods without the need to learn patch pairs from external data sets. We achieve this by modeling images, and more precisely lines of images, as piecewise smooth functions and propose a resolution enhancement method for this type of function. The method makes use of the theory of sampling signals with finite rate of innovation (FRI) and combines it with traditional linear reconstruction methods. We combine the two reconstructions by leveraging the multi-resolution analysis of wavelet theory and show how an FRI reconstruction and a linear reconstruction can be fused using filter banks. We then apply this method along vertical, horizontal, and diagonal directions in an image to obtain a single-image super-resolution algorithm. We also propose a further improvement of the method based on learning from the errors of our super-resolution result at lower resolution levels. Simulation results show that our method outperforms state-of-the-art algorithms under different blurring kernels. PMID:27168595

  12. Reliability and Consistency of Surface Contamination Measurements

    SciTech Connect

    Rouppert, F.; Rivoallan, A.; Largeron, C.

    2002-02-26

    Surface contamination evaluation is a tough problem since it is difficult to isolate the radiation emitted by the surface, especially in a highly irradiating atmosphere. In that case the only possibility is to evaluate smearable (removable) contamination, since ex-situ counting is possible. Unfortunately, according to our experience at CEA, these values are not consistent and thus not relevant. In this study, we show, using in-situ Fourier Transform Infrared spectrometry on contaminated metal samples, that fixed contamination seems to be chemisorbed and removable contamination seems to be physisorbed. The distribution between fixed and removable contamination appears to be variable. Chemical equilibria and reversible ion exchange mechanisms are involved and are closely linked to environmental conditions such as humidity and temperature. Measurements of smearable contamination only give an indication of the state of these equilibria between fixed and removable contamination at the time, and under the environmental conditions, in which the measurements were made.

  13. Plasma Diffusion in Self-Consistent Fluctuations

    NASA Technical Reports Server (NTRS)

    Smets, R.; Belmont, G.; Aunai, N.

    2012-01-01

    The problem of particle diffusion in position space, as a consequence of electromagnetic fluctuations is addressed. Numerical results obtained with a self-consistent hybrid code are presented, and a method to calculate diffusion coefficient in the direction perpendicular to the mean magnetic field is proposed. The diffusion is estimated for two different types of fluctuations. The first type (resulting from an agyrotropic initial setting) is stationary, wide band white noise, and associated to Gaussian probability distribution function for the magnetic fluctuations. The second type (resulting from a Kelvin-Helmholtz instability) is non-stationary, with a power-law spectrum, and a non-Gaussian probability distribution function. The results of the study allow revisiting the question of loading particles of solar wind origin in the Earth magnetosphere.

  14. Consistent evolution in a pedestrian flow

    NASA Astrophysics Data System (ADS)

    Guan, Junbiao; Wang, Kaihua

    2016-03-01

    In this paper, pedestrian evacuation considering different human behaviors is studied using a cellular automaton (CA) model combined with snowdrift game theory. The evacuees are divided into two types, i.e. cooperators and defectors, and two different human behaviors, herding behavior and independent behavior, are investigated. It is found from a large number of numerical simulations that the ratios of the corresponding evacuee clusters evolve to consistent states despite 11 different initial conditions, which may largely be attributed to a self-organization effect. Moreover, an appropriate proportion of initial defectors who exhibit herding behavior, coupled with an appropriate proportion of initial defectors who think independently and rationally, are two necessary factors for a short evacuation time.

  15. Plasma diffusion in self-consistent fluctuations

    SciTech Connect

    Smets, R.; Belmont, G.; Aunai, N.; Rezeau, L.

    2011-10-15

    The problem of particle diffusion in position space, as a consequence of electromagnetic fluctuations is addressed. Numerical results obtained with a self-consistent hybrid code are presented, and a method to calculate diffusion coefficient in the direction perpendicular to the mean magnetic field is proposed. The diffusion is estimated for two different types of fluctuations. The first type (resulting from an agyrotropic initial setting) is stationary, wide band white noise, and associated to Gaussian probability distribution function for the magnetic fluctuations. The second type (resulting from a Kelvin-Helmholtz instability) is non-stationary, with a power-law spectrum, and a non-Gaussian probability distribution function. The results of the study allow revisiting the question of loading particles of solar wind origin in the Earth magnetosphere.
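
    The perpendicular diffusion coefficient can be estimated from simulated particle trajectories with a mean-squared-displacement fit; the sketch below assumes the two coordinates perpendicular to the mean magnetic field are stored per particle and uses the textbook relation MSD(t) ~ 4*D_perp*t, which is a generic estimator rather than the exact procedure of the paper.

        import numpy as np

        def perpendicular_diffusion_coefficient(positions, dt):
            # positions has shape (n_steps, n_particles, 2) and holds the two
            # coordinates perpendicular to the mean magnetic field.
            disp = positions - positions[0]                   # displacement from t = 0
            msd = np.mean(np.sum(disp**2, axis=-1), axis=1)   # average over particles
            t = dt * np.arange(len(msd))
            slope = np.polyfit(t[1:], msd[1:], 1)[0]          # linear fit, skip t = 0
            return slope / 4.0                                # MSD = 4 * D_perp * t in 2D

        rng = np.random.default_rng(1)
        traj = np.cumsum(rng.normal(0.0, 1.0, (2000, 500, 2)), axis=0)  # synthetic random walk
        print(perpendicular_diffusion_coefficient(traj, dt=1.0))        # ~0.5 for unit-variance steps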

  16. Consistency of PT-symmetric quantum mechanics

    NASA Astrophysics Data System (ADS)

    Brody, Dorje C.

    2016-03-01

    In recent reports, suggestions have been put forward to the effect that parity and time-reversal (PT) symmetry in quantum mechanics is incompatible with causality. It is shown here, in contrast, that PT-symmetric quantum mechanics is fully consistent with standard quantum mechanics. This follows from the surprising fact that the much-discussed metric operator on Hilbert space is not physically observable. In particular, for closed quantum systems in finite dimensions there is no statistical test that one can perform on the outcomes of measurements to determine whether the Hamiltonian is Hermitian in the conventional sense, or PT-symmetric—the two theories are indistinguishable. Nontrivial physical effects arising as a consequence of PT symmetry are expected to be observed, nevertheless, for open quantum systems with balanced gain and loss.

  17. Quantum cosmological consistency condition for inflation

    SciTech Connect

    Calcagni, Gianluca; Kiefer, Claus; Steinwachs, Christian F. E-mail: kiefer@thp.uni-koeln.de

    2014-10-01

    We investigate the quantum cosmological tunneling scenario for inflationary models. Within a path-integral approach, we derive the corresponding tunneling probability distribution. A sharp peak in this distribution can be interpreted as the initial condition for inflation and therefore as a quantum cosmological prediction for its energy scale. This energy scale is also a genuine prediction of any inflationary model by itself, as the primordial gravitons generated during inflation leave their imprint in the B-polarization of the cosmic microwave background. In this way, one can derive a consistency condition for inflationary models that guarantees compatibility with a tunneling origin and can lead to a testable quantum cosmological prediction. The general method is demonstrated explicitly for the model of natural inflation.

  18. Toward a Fully Consistent Radiation Hydrodynamics

    SciTech Connect

    Castor, J I

    2009-07-07

    Dimitri Mihalas set the standard for all work in radiation hydrodynamics since 1984. The present contribution builds on 'Foundations of Radiation Hydrodynamics' to explore the relativistic effects that have prevented having a consistent non-relativistic theory. Much of what I have to say is in FRH, but the 3-D development is new. Results are presented for the relativistic radiation transport equation in the frame obtained by a Lorentz boost with the fluid velocity, and the exact momentum-integrated moment equations. The special-relativistic hydrodynamic equations are summarized, including the radiation contributions, and it is shown that exact conservation is obtained, and certain puzzles in the non-relativistic radhydro equations are explained.

  19. A tabu search evolutionary algorithm for multiobjective optimization: Application to a bi-criterion aircraft structural reliability problem

    NASA Astrophysics Data System (ADS)

    Long, Kim Chenming

    application of the proposed algorithm, TSEA, with several state-of-the-art multiobjective optimization algorithms reveals that TSEA outperforms these algorithms by providing retrofit solutions with greater reliability for the same costs (i.e., closer to the Pareto-optimal front) after the algorithms are executed for the same number of generations. This research also demonstrates that TSEA competes with and, in some situations, outperforms state-of-the-art multiobjective optimization algorithms such as NSGA II and SPEA 2 when applied to classic bicriteria test problems in the technical literature and other complex, sizable real-world applications. The successful implementation of TSEA contributes to the safety of aeronautical structures by providing a systematic way to guide aircraft structural retrofitting efforts, as well as a potentially useful algorithm for a wide range of multiobjective optimization problems in engineering and other fields.
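
    The notion of being "closer to the Pareto-optimal front" rests on Pareto dominance; a minimal dominance filter for bi-criterion (cost, probability of failure) designs is sketched below with made-up numbers, independent of the TSEA implementation.

        def dominates(a, b):
            # a dominates b (minimization): no worse in every objective, better in at least one
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def nondominated(points):
            # Keep the points not dominated by any other point (the Pareto-front estimate)
            return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

        # Hypothetical (cost, probability of failure) pairs for candidate retrofits
        designs = [(10, 0.20), (12, 0.12), (15, 0.12), (9, 0.30), (14, 0.08)]
        print(nondominated(designs))   # [(10, 0.2), (12, 0.12), (9, 0.3), (14, 0.08)]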

  20. Quantifying consistent individual differences in habitat selection.

    PubMed

    Leclerc, Martin; Vander Wal, Eric; Zedrosser, Andreas; Swenson, Jon E; Kindberg, Jonas; Pelletier, Fanie

    2016-03-01

    Habitat selection is a fundamental behaviour that links individuals to the resources required for survival and reproduction. Although natural selection acts on an individual's phenotype, research on habitat selection often pools inter-individual patterns to provide inferences on the population scale. Here, we expanded a traditional approach of quantifying habitat selection at the individual level to explore the potential for consistent individual differences of habitat selection. We used random coefficients in resource selection functions (RSFs) and repeatability estimates to test for variability in habitat selection. We applied our method to a detailed dataset of GPS relocations of brown bears (Ursus arctos) taken over a period of 6 years, and assessed whether they displayed repeatable individual differences in habitat selection toward two habitat types: bogs and recent timber-harvest cut blocks. In our analyses, we controlled for the availability of habitat, i.e. the functional response in habitat selection. Repeatability estimates of habitat selection toward bogs and cut blocks were 0.304 and 0.420, respectively. Therefore, 30.4 and 42.0 % of the population-scale habitat selection variability for bogs and cut blocks, respectively, was due to differences among individuals, suggesting that consistent individual variation in habitat selection exists in brown bears. Using simulations, we posit that repeatability values of habitat selection are not related to the value and significance of β estimates in RSFs. Although individual differences in habitat selection could be the results of non-exclusive factors, our results illustrate the evolutionary potential of habitat selection. PMID:26597548
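
    The repeatability values quoted above follow the standard variance-partitioning definition, with the among-individual variance taken from the random RSF coefficients; written out (a standard formula, not copied from the paper):

        R = \frac{\sigma^2_{\mathrm{among}}}{\sigma^2_{\mathrm{among}} + \sigma^2_{\mathrm{residual}}},
        \qquad R_{\mathrm{bogs}} \approx 0.304, \quad R_{\mathrm{cut\,blocks}} \approx 0.420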

  1. Radiometric consistency assessment of hyperspectral infrared sounders

    NASA Astrophysics Data System (ADS)

    Wang, L.; Han, Y.; Jin, X.; Chen, Y.; Tremblay, D. A.

    2015-07-01

    The radiometric and spectral consistency among the Atmospheric Infrared Sounder (AIRS), the Infrared Atmospheric Sounding Interferometer (IASI), and the Cross-track Infrared Sounder (CrIS) is fundamental for the creation of long-term infrared (IR) hyperspectral radiance benchmark datasets for both inter-calibration and climate-related studies. In this study, the CrIS radiance measurements on Suomi National Polar-orbiting Partnership (SNPP) satellite are directly compared with IASI on MetOp-A and -B at the finest spectral scale and with AIRS on Aqua in 25 selected spectral regions through one year of simultaneous nadir overpass (SNO) observations to evaluate radiometric consistency of these four hyperspectral IR sounders. The spectra from different sounders are paired together through strict spatial and temporal collocation. The uniform scenes are selected by examining the collocated Visible Infrared Imaging Radiometer Suite (VIIRS) pixels. Their brightness temperature (BT) differences are then calculated by converting the spectra onto common spectral grids. The results indicate that CrIS agrees well with IASI on MetOp-A and IASI on MetOp-B at the longwave IR (LWIR) and middle-wave IR (MWIR) bands with 0.1-0.2 K differences. There are no apparent scene-dependent patterns for BT differences between CrIS and IASI for individual spectral channels. CrIS and AIRS are compared at the 25 spectral regions for both Polar and Tropical SNOs. The combined global SNO datasets indicate that the CrIS-AIRS BT differences are less than or around 0.1 K in 21 of the 25 comparison spectral regions and they range from 0.15 to 0.21 K in the remaining 4 spectral regions. CrIS-AIRS BT differences in some comparison spectral regions show weak scene-dependent features.

  2. Radiometric consistency assessment of hyperspectral infrared sounders

    NASA Astrophysics Data System (ADS)

    Wang, L.; Han, Y.; Jin, X.; Chen, Y.; Tremblay, D. A.

    2015-11-01

    The radiometric and spectral consistency among the Atmospheric Infrared Sounder (AIRS), the Infrared Atmospheric Sounding Interferometer (IASI), and the Cross-track Infrared Sounder (CrIS) is fundamental for the creation of long-term infrared (IR) hyperspectral radiance benchmark data sets for both intercalibration and climate-related studies. In this study, the CrIS radiance measurements on Suomi National Polar-orbiting Partnership (SNPP) satellite are directly compared with IASI on MetOp-A and MetOp-B at the finest spectral scale and with AIRS on Aqua in 25 selected spectral regions through simultaneous nadir overpass (SNO) observations in 2013, to evaluate radiometric consistency of these four hyperspectral IR sounders. The spectra from different sounders are paired together through strict spatial and temporal collocation. The uniform scenes are selected by examining the collocated Visible Infrared Imaging Radiometer Suite (VIIRS) pixels. Their brightness temperature (BT) differences are then calculated by converting the spectra onto common spectral grids. The results indicate that CrIS agrees well with IASI on MetOp-A and IASI on MetOp-B at the long-wave IR (LWIR) and middle-wave IR (MWIR) bands with 0.1-0.2 K differences. There are no apparent scene-dependent patterns for BT differences between CrIS and IASI for individual spectral channels. CrIS and AIRS are compared at the 25 spectral regions for both polar and tropical SNOs. The combined global SNO data sets indicate that the CrIS-AIRS BT differences are less than or around 0.1 K among 21 of 25 spectral regions and they range from 0.15 to 0.21 K in the remaining four spectral regions. CrIS-AIRS BT differences in some comparison spectral regions show weak scene-dependent features.
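
    Comparing collocated spectra in brightness temperature uses the inverse Planck function; a minimal helper (standard radiometry with approximate constants, not code from the study) is sketched below, with radiance in mW/(m^2 sr cm^-1) and wavenumber in cm^-1.

        import numpy as np

        # Approximate radiation constants in the units stated above
        C1 = 1.191042e-5   # 2 h c^2, mW / (m^2 sr cm^-4)
        C2 = 1.4387752     # h c / k_B, K cm

        def brightness_temperature(radiance, wavenumber):
            """Inverse Planck function: radiance-to-BT conversion for one channel."""
            return C2 * wavenumber / np.log(1.0 + C1 * wavenumber**3 / radiance)

        # Example: a typical longwave-IR scene radiance at 900 cm^-1
        print(brightness_temperature(100.0, 900.0))   # roughly 290 K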

  3. Enhancing the synchronizability of networks by rewiring based on tabu search and a local greedy algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Cui-Li; Tang, Kit-Sang

    2011-12-01

    By considering the eigenratio of the Laplacian matrix as the synchronizability measure, this paper presents an efficient method to enhance the synchronizability of undirected and unweighted networks via rewiring. The rewiring method combines the use of tabu search and a local greedy algorithm so that an effective search of solutions can be achieved. As demonstrated in the simulation results, the proposed approach outperforms existing methods for a large variety of initial networks, in terms of both speed and solution quality.
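
    The synchronizability measure being improved is the eigenratio of the network Laplacian (largest eigenvalue over the smallest nonzero one); evaluating it for a candidate rewiring is a few lines of standard spectral code, sketched below (the tabu search and greedy moves themselves are not reproduced).

        import numpy as np

        def eigenratio(adjacency):
            # Eigenratio lambda_N / lambda_2 of the Laplacian of an undirected,
            # unweighted graph given by its symmetric 0/1 adjacency matrix.
            # Smaller values indicate better synchronizability.
            degrees = adjacency.sum(axis=1)
            laplacian = np.diag(degrees) - adjacency
            eig = np.sort(np.linalg.eigvalsh(laplacian))
            return eig[-1] / eig[1]      # assumes the graph is connected (eig[1] > 0)

        # Example: ring of 6 nodes
        A = np.zeros((6, 6))
        for i in range(6):
            A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1
        print(eigenratio(A))   # 4.0 for a 6-node ring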

  4. A Heuristic Approach Based on Clarke-Wright Algorithm for Open Vehicle Routing Problem

    PubMed Central

    2013-01-01

    We propose a heuristic approach based on the Clarke-Wright algorithm (CW) to solve the open version of the well-known capacitated vehicle routing problem, in which vehicles are not required to return to the depot after completing service. The proposed CW consists of four procedures: Clarke-Wright formula modification, open-route construction, two-phase selection, and route post-improvement. Computational results show that the proposed CW is competitive and outperforms the classical CW in all respects. Moreover, the best known solution is also obtained in 97% of tested instances (60 out of 62). PMID:24382948
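
    For orientation, a simplified parallel-savings (Clarke-Wright) sketch for the classical closed-route problem is given below; it omits route reversal and the paper's open-route modifications (the modified savings formula, two-phase selection, and post-improvement), and the small distance matrix and demands are made up.

        import itertools

        def clarke_wright(dist, demand, capacity):
            # Simplified parallel Clarke-Wright savings heuristic for the closed VRP;
            # dist is a symmetric matrix with the depot at index 0.
            n = len(dist)
            routes = {i: [i] for i in range(1, n)}          # one route per customer
            load = {i: demand[i] for i in range(1, n)}
            savings = sorted(
                ((dist[0][i] + dist[0][j] - dist[i][j], i, j)
                 for i, j in itertools.combinations(range(1, n), 2)),
                reverse=True)
            for _s, i, j in savings:
                ri = next((r for r in routes.values() if r[-1] == i), None)   # i at a route end
                rj = next((r for r in routes.values() if r[0] == j), None)    # j at a route start
                if ri is None or rj is None or ri is rj:
                    continue
                if load[ri[0]] + load[rj[0]] > capacity:
                    continue
                key_i, key_j = ri[0], rj[0]                 # routes are keyed by first customer
                routes[key_i] = ri + rj                     # merge rj onto the end of ri
                load[key_i] += load[key_j]
                del routes[key_j], load[key_j]
            return list(routes.values())

        dist = [[0, 4, 5, 6], [4, 0, 2, 5], [5, 2, 0, 3], [6, 5, 3, 0]]
        print(clarke_wright(dist, demand=[0, 1, 1, 1], capacity=3))   # [[1, 2, 3]]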

  5. An Evolved Wavelet Library Based on Genetic Algorithm

    PubMed Central

    Vaithiyanathan, D.; Seshasayanan, R.; Kunaraj, K.; Keerthiga, J.

    2014-01-01

    As the size of the images being captured increases, there is a need for a robust image compression algorithm which satisfies the bandwidth limitations of the transmission channels and preserves the image resolution without considerable loss in image quality. Many conventional image compression algorithms use the wavelet transform, which can significantly reduce the number of bits needed to represent a pixel, and the process of quantization and thresholding further increases the compression. In this paper the authors evolve two sets of wavelet filter coefficients using a genetic algorithm (GA), one for the whole image portion except the edge areas and the other for the portions near the edges in the image (i.e., global and local filters). Images are initially separated into several groups based on their frequency content, edges, and textures, and the wavelet filter coefficients are evolved separately for each group. As there is a possibility of the GA settling in a local maximum, we introduce a new shuffling operator to prevent the GA from this effect. The GA used to evolve filter coefficients primarily focuses on maximizing the peak signal to noise ratio (PSNR). The filter coefficients evolved by the proposed method outperform those of existing methods by a 0.31 dB improvement in the average PSNR and a 0.39 dB improvement in the maximum PSNR. PMID:25405225
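
    The GA fitness described above is the peak signal-to-noise ratio between the original and reconstructed images; a minimal PSNR helper (standard definition for 8-bit images) that such a fitness function could wrap is sketched below.

        import numpy as np

        def psnr(original, reconstructed, peak=255.0):
            """Peak signal-to-noise ratio in dB (standard definition for 8-bit images)."""
            mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)

        a = np.random.default_rng(0).integers(0, 256, (64, 64))
        b = np.clip(a + np.random.default_rng(1).integers(-2, 3, (64, 64)), 0, 255)
        print(psnr(a, b))   # around 45 dB for +/-2 noise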

  6. Image watermarking using a dynamically weighted fuzzy c-means algorithm

    NASA Astrophysics Data System (ADS)

    Kang, Myeongsu; Ho, Linh Tran; Kim, Yongmin; Kim, Cheol Hong; Kim, Jong-Myon

    2011-10-01

    Digital watermarking has received extensive attention as a new method of protecting multimedia content from unauthorized copying. In this paper, we present a nonblind watermarking system using a proposed dynamically weighted fuzzy c-means (DWFCM) technique combined with discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD) techniques for copyright protection. The proposed scheme efficiently selects blocks in which the watermark is embedded using new membership values of DWFCM as the embedding strength. We evaluated the proposed algorithm in terms of robustness against various watermarking attacks and imperceptibility compared to other algorithms [DWT-DCT-based and DCT- fuzzy c-means (FCM)-based algorithms]. Experimental results indicate that the proposed algorithm outperforms other algorithms in terms of robustness against several types of attacks, such as noise addition (Gaussian noise, salt and pepper noise), rotation, Gaussian low-pass filtering, mean filtering, median filtering, Gaussian blur, image sharpening, histogram equalization, and JPEG compression. In addition, the proposed algorithm achieves higher values of peak signal-to-noise ratio (approximately 49 dB) and lower values of measure-singular value decomposition (5.8 to 6.6) than other algorithms.

  7. Wavelet neural networks initialization using hybridized clustering and harmony search algorithm: Application in epileptic seizure detection

    NASA Astrophysics Data System (ADS)

    Zainuddin, Zarita; Lai, Kee Huong; Ong, Pauline

    2013-04-01

    Artificial neural networks (ANNs) are powerful mathematical models that are used to solve complex real world problems. Wavelet neural networks (WNNs), which were developed based on the wavelet theory, are a variant of ANNs. During the training phase of WNNs, several parameters need to be initialized, including the type of wavelet activation function, the translation vectors, and the dilation parameter. The conventional k-means and fuzzy c-means clustering algorithms have been used to select the translation vectors. However, the solution vectors might get trapped at local minima. In this regard, the evolutionary harmony search algorithm, which is capable of searching for near-optimum solution vectors, both locally and globally, is introduced to circumvent this problem. In this paper, the conventional k-means and fuzzy c-means clustering algorithms were hybridized with the metaheuristic harmony search algorithm. In addition to estimating the global minimum accurately, these hybridized algorithms also offer more than one solution to a particular problem, since many possible solution vectors can be generated and stored in the harmony memory. To validate the robustness of the proposed WNNs, the real world problem of epileptic seizure detection was presented. The overall classification accuracy from the simulation showed that the hybridized metaheuristic algorithms outperformed the standard k-means and fuzzy c-means clustering algorithms.

  8. Local multiplicative Schwarz algorithms for convection-diffusion equations

    NASA Technical Reports Server (NTRS)

    Cai, Xiao-Chuan; Sarkis, Marcus

    1995-01-01

    We develop a new class of overlapping Schwarz type algorithms for solving scalar convection-diffusion equations discretized by finite element or finite difference methods. The preconditioners consist of two components, namely, the usual two-level additive Schwarz preconditioner and the sum of some quadratic terms constructed by using products of ordered neighboring subdomain preconditioners. The ordering of the subdomain preconditioners is determined by considering the direction of the flow. We prove that the algorithms are optimal in the sense that the convergence rates are independent of the mesh size, as well as the number of subdomains. We show by numerical examples that the new algorithms are less sensitive to the direction of the flow than the classical multiplicative Schwarz algorithms, and converge faster than the additive Schwarz algorithms. Thus, the new algorithms are more suitable for fluid flow applications than the classical additive or multiplicative Schwarz algorithms.

  9. Cyclic cooling algorithm

    SciTech Connect

    Rempp, Florian; Mahler, Guenter; Michel, Mathias

    2007-09-15

    We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, an arbitrary number of times on the same set of qubits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way, one qubit may repeatedly be cooled without adding additional qubits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.

  10. Parallel algorithms and architectures

    SciTech Connect

    Albrecht, A.; Jung, H.; Mehlhorn, K.

    1987-01-01

    Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single-function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; RELACS - A recursive layout computing system; and Parallel linear conflict-free subtree access.

  11. The Algorithm Selection Problem

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)

    1994-01-01

    Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.

  12. Volume-based solvation models out-perform area-based models in combined studies of wild-type and mutated protein-protein interfaces

    PubMed Central

    Bougouffa, Salim; Warwicker, Jim

    2008-01-01

    Background Empirical binding models have previously been investigated for the energetics of protein complexation (ΔG models) and for the influence of mutations on complexation (i.e. differences between wild-type and mutant complexes, ΔΔG models). We construct binding models to directly compare these processes, which have generally been studied separately. Results Although reasonable fit models were found for both ΔG and ΔΔG cases, they differ substantially. In a dataset curated for the absence of mainchain rearrangement upon binding, non-polar area burial is a major determinant of ΔG models. However this ΔG model does not fit well to the data for binding differences upon mutation. Burial of non-polar area is weighted down in fitting of ΔΔG models. These calculations were made with no repacking of sidechains upon complexation, and only minimal packing upon mutation. We investigated the consequences of more extensive packing changes with a modified mean-field packing scheme. Rather than emphasising solvent exposure with relatively extended sidechains, rotamers are selected that exhibit maximal packing with protein. This provides solvent accessible areas for proteins that are much closer to those of experimental structures than the more extended sidechain regime. The new packing scheme increases changes in non-polar burial for mutants compared to wild-type proteins, but does not substantially improve agreement between ΔG and ΔΔG binding models. Conclusion We conclude that solvent accessible area, based on modelled mutant structures, is a poor correlate for ΔΔG upon mutation. A simple volume-based, rather than solvent accessibility-based, model is constructed for ΔG and ΔΔG systems. This shows a more consistent behaviour. We discuss the efficacy of volume, as opposed to area, approaches to describe the energetic consequences of mutations at interfaces. This knowledge can be used to develop simple computational screens for binding in comparative

  13. Field size consistency of nominally matched linacs.

    PubMed

    Kairn, T; Asena, A; Charles, P H; Hill, B; Langton, C M; Middlebrook, N D; Moylan, R; Trapp, J V

    2015-06-01

    Given that there is increasing recognition of the effect that sub-millimetre changes in collimator position can have on radiotherapy beam dosimetry, this study aimed to evaluate the potential variability in small field collimation that may exist between otherwise matched linacs. Field sizes and field output factors were measured using radiochromic film and an electron diode, for jaw- and MLC-collimated fields produced by eight dosimetrically matched Varian iX linacs (Varian Medical Systems, Palo Alto, USA). This study used nominal sizes from 0.6 × 0.6 to 10 × 10 cm(2), for jaw-collimated fields, and from 1 × 1 to 10 × 10 cm(2) for MLC-collimated fields, delivered from a zero (head up, beam directed vertically downward) gantry angle. Differences between the field sizes measured for the eight linacs exceeded the uncertainty of the film measurements and the repositioning uncertainty of the jaws and MLCs on one linac. The dimensions of fields defined by MLC leaves were more consistent between linacs, while also differing more from their nominal values than fields defined by orthogonal jaws. The field output factors measured for the different linacs generally increased with increasing measured field size for the nominal 0.6 × 0.6 to 1 × 1 cm(2) fields, and became consistent between linacs for nominal field sizes of 2 × 2 cm(2) and larger. The inclusion in radiotherapy treatment planning system beam data of small field output factors acquired in fields collimated by jaws (rather than the more-reproducible MLCs), associated with either the nominal or the measured field sizes, should be viewed with caution. The size and reproducibility of the fields (especially the small fields) used to acquire treatment planning data should be investigated thoroughly as part of the linac or planning system commissioning process. Further investigation of these issues, using different linac models, collimation systems and beam orientations, is recommended. PMID

  14. Performance Comparison of Attribute Set Reduction Algorithms in Stock Price Prediction - A Case Study on Indian Stock Data

    NASA Astrophysics Data System (ADS)

    Sivakumar, P. Bagavathi; Mohandas, V. P.

    Stock price prediction and stock trend prediction are the two major research problems of financial time series analysis. In this work, a performance comparison of various attribute set reduction algorithms was made for short-term stock price prediction. Forward selection, backward elimination, optimized selection, optimized selection based on brute force, weight-guided selection, and optimized selection based on evolutionary principles and strategies were used. Different selection schemes and crossover types were explored. To supplement learning and modeling, a support vector machine was also used in combination. The algorithms were applied to real Indian stock data, namely the CNX Nifty index. The experimental study was conducted using the open source data mining tool RapidMiner. The performance was compared in terms of root mean squared error, squared error, and execution time. The obtained results indicate the superiority of evolutionary algorithms; the optimized selection algorithm based on evolutionary principles outperforms the others.
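
    A generic illustration of one of the schemes compared above, forward attribute selection, is sketched below; it scores candidate attributes with the in-sample RMSE of an ordinary least-squares fit, whereas the study used RapidMiner operators with an SVM learner, so the stopping tolerance and the linear model are assumptions made here.

        import numpy as np

        def rmse(y, yhat):
            return float(np.sqrt(np.mean((y - yhat) ** 2)))

        def forward_selection(X, y, tol=1e-3):
            # Greedily add the attribute that most reduces the fit error,
            # stopping once the improvement becomes negligible.
            n, p = X.shape
            selected, best_err = [], np.inf
            while len(selected) < p:
                scores = []
                for j in (c for c in range(p) if c not in selected):
                    A = np.column_stack([np.ones(n), X[:, selected + [j]]])
                    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
                    scores.append((rmse(y, A @ coef), j))
                err, j = min(scores)
                if best_err - err < tol:
                    break
                selected.append(j)
                best_err = err
            return selected, best_err

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 6))
        y = 2 * X[:, 1] - 3 * X[:, 4] + 0.1 * rng.normal(size=200)
        print(forward_selection(X, y))   # expected to pick columns 4 and 1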

  15. Searching for repeats, as an example of using the generalised Ruzzo-Tompa algorithm to find optimal subsequences with gaps.

    PubMed

    Spouge, John L; Mariño-Ramírez, Leonardo; Sheetlin, Sergey L

    2014-01-01

    Some biological sequences contain subsequences of unusual composition; e.g. some proteins contain DNA binding domains, transmembrane regions and charged regions, and some DNA sequences contain repeats. The linear-time Ruzzo-Tompa (RT) algorithm finds subsequences of unusual composition, using a sequence of scores as input and the corresponding 'maximal segments' as output. In principle, permitting gaps in the output subsequences could improve sensitivity. Here, the input of the RT algorithm is generalised to a finite, totally ordered, weighted graph, so the algorithm locates paths of maximal weight through increasing but not necessarily adjacent vertices. By permitting the penalised deletion of unfavourable letters, the generalisation therefore includes gaps. The program RepWords, which finds inexact simple repeats in DNA, exemplifies the general concepts by out-performing a similar extant, ad hoc tool. With minimal programming effort, the generalised Ruzzo-Tompa algorithm could improve the performance of many programs for finding biological subsequences of unusual composition. PMID:24989859
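
    For reference, the ungapped linear-time Ruzzo-Tompa procedure that the paper generalizes can be sketched compactly; the code below is a direct (not strictly linear-time) transcription of the published merge rules for maximal-scoring subsequences and does not reproduce the authors' gapped extension on weighted graphs.

        def ruzzo_tompa(scores):
            # All maximal-scoring subsequences of a list of real-valued scores,
            # returned as (start, end, score) with end exclusive.
            candidates = []          # each entry: [start, end, L, R] with cumulative sums
            total = 0.0
            for i, s in enumerate(scores):
                if s <= 0:
                    total += s
                    continue
                cand = [i, i + 1, total, total + s]
                total += s
                while True:
                    j = next((idx for idx in range(len(candidates) - 1, -1, -1)
                              if candidates[idx][2] < cand[2]), None)
                    if j is None or candidates[j][3] >= cand[3]:
                        candidates.append(cand)
                        break
                    # merge candidates j..k into one and retry the rules
                    cand = [candidates[j][0], cand[1], candidates[j][2], cand[3]]
                    del candidates[j:]
            return [(a, b, r - l) for a, b, l, r in candidates]

        print(ruzzo_tompa([4, -5, 3, -3, 1, 2, -2, 2, -2, 1, 5]))
        # [(0, 1, 4), (2, 3, 3), (4, 11, 7)]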

  16. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification

    PubMed Central

    Ramyachitra, D.; Sofia, M.; Manikandan, P.

    2015-01-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies within the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples. The difficulty, therefore, is that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN), Interval Valued Classification (IVC) and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions. PMID:26484222

  17. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification.

    PubMed

    Ramyachitra, D; Sofia, M; Manikandan, P

    2015-09-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies within the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples. The difficulty, therefore, is that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN), Interval Valued Classification (IVC) and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions. PMID:26484222

  18. A junction-tree based learning algorithm to optimize network wide traffic control: A coordinated multi-agent framework

    SciTech Connect

    Zhu, Feng; Aziz, H. M. Abdul; Qian, Xinwu; Ukkusuri, Satish V.

    2015-01-31

    Our study develops a novel reinforcement learning algorithm for the challenging coordinated signal control problem. Traffic signals are modeled as intelligent agents interacting with the stochastic traffic environment. The model is built on the framework of coordinated reinforcement learning. The Junction Tree Algorithm (JTA) based reinforcement learning is proposed to obtain an exact inference of the best joint actions for all the coordinated intersections. Moreover, the algorithm is implemented and tested with a network containing 18 signalized intersections in VISSIM. Finally, our results show that the JTA based algorithm outperforms independent learning (Q-learning), real-time adaptive learning, and fixed timing plans in terms of average delay, number of stops, and vehicular emissions at the network level.

  19. A junction-tree based learning algorithm to optimize network wide traffic control: A coordinated multi-agent framework

    DOE PAGESBeta

    Zhu, Feng; Aziz, H. M. Abdul; Qian, Xinwu; Ukkusuri, Satish V.

    2015-01-31

    Our study develops a novel reinforcement learning algorithm for the challenging coordinated signal control problem. Traffic signals are modeled as intelligent agents interacting with the stochastic traffic environment. The model is built on the framework of coordinated reinforcement learning. The Junction Tree Algorithm (JTA) based reinforcement learning is proposed to obtain an exact inference of the best joint actions for all the coordinated intersections. Moreover, the algorithm is implemented and tested with a network containing 18 signalized intersections in VISSIM. Finally, our results show that the JTA based algorithm outperforms independent learning (Q-learning), real-time adaptive learning, and fixed timing plans in terms of average delay, number of stops, and vehicular emissions at the network level.

  20. A Practical Stemming Algorithm for Online Search Assistance.

    ERIC Educational Resources Information Center

    Ulmschneider, John E.; Doszkocs, Tamas

    1983-01-01

    Describes a two-phase stemming algorithm which consists of word root identification and automatic selection of word variants starting with same word root from inverted file. Use of algorithm in book catalog file is discussed. Ten references and example of subject search are appended. (EJS)
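
    The abstract gives only the two-phase outline, so the sketch below is an illustrative stand-in: a toy suffix-stripping root finder followed by selection of inverted-file terms that share the root. The suffix list and the example terms are hypothetical, not the article's actual rules.

        SUFFIXES = ("ations", "ation", "ings", "ing", "ies", "es", "s", "ed")

        def word_root(term):
            # Phase 1: strip the first matching suffix to obtain a word root
            for suf in SUFFIXES:
                if term.endswith(suf) and len(term) - len(suf) >= 3:
                    return term[: -len(suf)]
            return term

        def select_variants(query_term, inverted_file_terms):
            # Phase 2: gather inverted-file terms beginning with the query's root
            root = word_root(query_term.lower())
            return sorted(t for t in inverted_file_terms if t.lower().startswith(root))

        terms = ["catalog", "cataloging", "catalogs", "catalogue", "category"]
        print(select_variants("cataloging", terms))  # catalog, cataloging, catalogs, catalogue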

  1. Consistent lattice Boltzmann equations for phase transitions.

    PubMed

    Siebert, D N; Philippi, P C; Mattila, K K

    2014-11-01

    Unlike conventional computational fluid dynamics methods, the lattice Boltzmann method (LBM) describes the dynamic behavior of fluids at a mesoscopic scale based on discrete forms of kinetic equations. In this scale, complex macroscopic phenomena like the formation and collapse of interfaces can be naturally described as related to source terms incorporated into the kinetic equations. In this context, a novel athermal lattice Boltzmann scheme for the simulation of phase transition is proposed. The continuous kinetic model obtained from the Liouville equation using the mean-field interaction force approach is shown to be consistent with a diffuse interface model based on the Helmholtz free energy. Density profiles, interface thickness, and surface tension are analytically derived for a plane liquid-vapor interface. A discrete form of the kinetic equation is then obtained by applying the quadrature method based on prescribed abscissas together with a third-order scheme for the discretization of the streaming or advection term in the Boltzmann equation. Spatial derivatives in the source terms are approximated with high-order schemes. The numerical validation of the method is performed by measuring the speed of sound as well as by retrieving the coexistence curve and the interface density profiles. The appearance of spurious currents near the interface is investigated. The simulations are performed with the equations of state of Van der Waals, Redlich-Kwong, Redlich-Kwong-Soave, Peng-Robinson, and Carnahan-Starling. PMID:25493907

  2. Self consistency grouping: a stringent clustering method

    PubMed Central

    2012-01-01

    Background Numerous types of clustering like single linkage and K-means have been widely studied and applied to a variety of scientific problems. However, the existing methods are not readily applicable for the problems that demand high stringency. Methods Our method, self consistency grouping, i.e. SCG, yields clusters whose members are closer in rank to each other than to any member outside the cluster. We do not define a distance metric; we use the best known distance metric and presume that it measures the correct distance. SCG does not impose any restriction on the size or the number of the clusters that it finds. The boundaries of clusters are determined by the inconsistencies in the ranks. In addition to the direct implementation that finds the complete structure of the (sub)clusters we implemented two faster versions. The fastest version is guaranteed to find only the clusters that are not subclusters of any other clusters and the other version yields the same output as the direct implementation but does so more efficiently. Results Our tests have demonstrated that SCG yields very few false positives. This was accomplished by introducing errors in the distance measurement. Clustering of protein domain representatives by structural similarity showed that SCG could recover homologous groups with high precision. Conclusions SCG has potential for finding biological relationships under stringent conditions. PMID:23320864

  3. Trisomy 21 consistently activates the interferon response

    PubMed Central

    Sullivan, Kelly D; Lewis, Hannah C; Hill, Amanda A; Pandey, Ahwan; Jackson, Leisa P; Cabral, Joseph M; Smith, Keith P; Liggett, L Alexander; Gomez, Eliana B; Galbraith, Matthew D; DeGregori, James; Espinosa, Joaquín M

    2016-01-01

    Although it is clear that trisomy 21 causes Down syndrome, the molecular events acting downstream of the trisomy remain ill defined. Using complementary genomics analyses, we identified the interferon pathway as the major signaling cascade consistently activated by trisomy 21 in human cells. Transcriptome analysis revealed that trisomy 21 activates the interferon transcriptional response in fibroblast and lymphoblastoid cell lines, as well as circulating monocytes and T cells. Trisomy 21 cells show increased induction of interferon-stimulated genes and decreased expression of ribosomal proteins and translation factors. An shRNA screen determined that the interferon-activated kinases JAK1 and TYK2 suppress proliferation of trisomy 21 fibroblasts, and this defect is rescued by pharmacological JAK inhibition. Therefore, we propose that interferon activation, likely via increased gene dosage of the four interferon receptors encoded on chromosome 21, contributes to many of the clinical impacts of trisomy 21, and that interferon antagonists could have therapeutic benefits. DOI: http://dx.doi.org/10.7554/eLife.16220.001 PMID:27472900

  4. Ciliate communities consistently associated with coral diseases

    NASA Astrophysics Data System (ADS)

    Sweet, M. J.; Séré, M. G.

    2016-07-01

    Incidences of coral disease are increasing. Most studies which focus on diseases in these organisms routinely assess variations in bacterial associates. However, other microorganism groups such as viruses, fungi and protozoa are only recently starting to receive attention. This study aimed at assessing the diversity of ciliates associated with coral diseases over a wide geographical range. Here we show that a wide variety of ciliates are associated with all nine coral diseases assessed. Many of these ciliates such as Trochilia petrani and Glauconema trihymene feed on the bacteria which are likely colonizing the bare skeleton exposed by the advancing disease lesion or the necrotic tissue itself. Others such as Pseudokeronopsis and Licnophora macfarlandi are common predators of other protozoans and will be attracted by the increase in other ciliate species to the lesion interface. However, a few ciliate species (namely Varistrombidium kielum, Philaster lucinda, Philaster guamense, a Euplotes sp., a Trachelotractus sp. and a Condylostoma sp.) appear to harbor symbiotic algae, potentially from the coral themselves, a result which may indicate that they play some role in the disease pathology at the very least. Although, from this study alone we are not able to discern what roles any of these ciliates play in disease causation, the consistent presence of such communities with disease lesion interfaces warrants further investigation.

  5. A Consistent Phylogenetic Backbone for the Fungi

    PubMed Central

    Ebersberger, Ingo; de Matos Simoes, Ricardo; Kupczok, Anne; Gube, Matthias; Kothe, Erika; Voigt, Kerstin; von Haeseler, Arndt

    2012-01-01

    The kingdom of fungi provides model organisms for biotechnology, cell biology, genetics, and life sciences in general. Only when their phylogenetic relationships are stably resolved, can individual results from fungal research be integrated into a holistic picture of biology. However, and despite recent progress, many deep relationships within the fungi remain unclear. Here, we present the first phylogenomic study of an entire eukaryotic kingdom that uses a consistency criterion to strengthen phylogenetic conclusions. We reason that branches (splits) recovered with independent data and different tree reconstruction methods are likely to reflect true evolutionary relationships. Two complementary phylogenomic data sets based on 99 fungal genomes and 109 fungal expressed sequence tag (EST) sets analyzed with four different tree reconstruction methods shed light from different angles on the fungal tree of life. Eleven additional data sets address specifically the phylogenetic position of Blastocladiomycota, Ustilaginomycotina, and Dothideomycetes, respectively. The combined evidence from the resulting trees supports the deep-level stability of the fungal groups toward a comprehensive natural system of the fungi. In addition, our analysis reveals methodologically interesting aspects. Enrichment for EST encoded data—a common practice in phylogenomic analyses—introduces a strong bias toward slowly evolving and functionally correlated genes. Consequently, the generalization of phylogenomic data sets as collections of randomly selected genes cannot be taken for granted. A thorough characterization of the data to assess possible influences on the tree reconstruction should therefore become a standard in phylogenomic analyses. PMID:22114356

  6. A consistent phylogenetic backbone for the fungi.

    PubMed

    Ebersberger, Ingo; de Matos Simoes, Ricardo; Kupczok, Anne; Gube, Matthias; Kothe, Erika; Voigt, Kerstin; von Haeseler, Arndt

    2012-05-01

    The kingdom of fungi provides model organisms for biotechnology, cell biology, genetics, and life sciences in general. Only when their phylogenetic relationships are stably resolved, can individual results from fungal research be integrated into a holistic picture of biology. However, and despite recent progress, many deep relationships within the fungi remain unclear. Here, we present the first phylogenomic study of an entire eukaryotic kingdom that uses a consistency criterion to strengthen phylogenetic conclusions. We reason that branches (splits) recovered with independent data and different tree reconstruction methods are likely to reflect true evolutionary relationships. Two complementary phylogenomic data sets based on 99 fungal genomes and 109 fungal expressed sequence tag (EST) sets analyzed with four different tree reconstruction methods shed light from different angles on the fungal tree of life. Eleven additional data sets address specifically the phylogenetic position of Blastocladiomycota, Ustilaginomycotina, and Dothideomycetes, respectively. The combined evidence from the resulting trees supports the deep-level stability of the fungal groups toward a comprehensive natural system of the fungi. In addition, our analysis reveals methodologically interesting aspects. Enrichment for EST encoded data-a common practice in phylogenomic analyses-introduces a strong bias toward slowly evolving and functionally correlated genes. Consequently, the generalization of phylogenomic data sets as collections of randomly selected genes cannot be taken for granted. A thorough characterization of the data to assess possible influences on the tree reconstruction should therefore become a standard in phylogenomic analyses. PMID:22114356

  7. Classification of urban vegetation patterns from hyperspectral imagery: hybrid algorithm based on genetic algorithm tuned fuzzy support vector machine

    NASA Astrophysics Data System (ADS)

    Zhou, Mandi; Shu, Jiong; Chen, Zhigang; Ji, Minhe

    2012-11-01

    Hyperspectral imagery has been widely used in terrain classification for its high resolution. Urban vegetation, known as an essential part of the urban ecosystem, can be difficult to discern due to the high similarity of spectral signatures among some land-cover classes. In this paper, we investigate a hybrid approach of the genetic-algorithm tuned fuzzy support vector machine (GA-FSVM) technique and apply it to urban vegetation classification from aerial hyperspectral urban imagery. The approach adopts the genetic algorithm to optimize the parameters of the support vector machine, and employs the K-nearest neighbor algorithm to calculate the membership function for each fuzzy parameter, aiming to reduce the effects of isolated and noisy samples. Test data come from a push-broom hyperspectral imager (PHI) remote sensing image that partially covers a corner of the Shanghai World Exposition Park; PHI is a hyperspectral sensor developed by the Shanghai Institute of Technical Physics. Experimental results show the GA-FSVM model generates an overall accuracy of 71.2%, outperforming the maximum likelihood classifier with 49.4% accuracy and the artificial neural network method with 60.8% accuracy. This indicates that GA-FSVM is a promising model for vegetation classification from hyperspectral urban data, with clear advantages for classification problems involving abundant mixed pixels and small samples.

  8. The generalized frequency-domain adaptive filtering algorithm as an approximation of the block recursive least-squares algorithm

    NASA Astrophysics Data System (ADS)

    Schneider, Martin; Kellermann, Walter

    2016-01-01

    Acoustic echo cancellation (AEC) is a well-known application of adaptive filters in communication acoustics. To implement AEC for multichannel reproduction systems, powerful adaptation algorithms like the generalized frequency-domain adaptive filtering (GFDAF) algorithm are required for satisfactory convergence behavior. In this paper, the GFDAF algorithm is rigorously derived as an approximation of the block recursive least-squares (RLS) algorithm. Thereby, the original formulation of the GFDAF algorithm is generalized while correcting an error present in the original derivation. The presented algorithm formulation is applied to pruned transform-domain loudspeaker-enclosure-microphone models in a mathematically consistent manner. Such pruned models have recently been proposed to cope with the tremendous computational demands of massive multichannel AEC. Beyond its generalization, a regularization of the GFDAF is shown to have a close relation to the well-known block least-mean-squares algorithm.

  9. A Simple Calculator Algorithm.

    ERIC Educational Resources Information Center

    Cook, Lyle; McWilliam, James

    1983-01-01

    The problem of finding cube roots when limited to a calculator with only square root capability is discussed. An algorithm is demonstrated and explained which should always produce a good approximation within a few iterations. (MP)
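
    The abstract does not spell the algorithm out; one classical calculator approach consistent with it iterates x <- sqrt(sqrt(a*x)), whose fixed point satisfies x^4 = a*x, i.e. x = a^(1/3), with the error shrinking by roughly a factor of four per step. The sketch below is that classical trick, not necessarily the article's exact procedure.

        def cube_root_sqrt_only(a, iterations=25):
            # Approximate a**(1/3) for a > 0 using only multiplication and two
            # successive square roots per step: x <- sqrt(sqrt(a * x)).
            x = 1.0
            for _ in range(iterations):
                x = (a * x) ** 0.25
            return x

        print(cube_root_sqrt_only(27.0))   # ~3.0
        print(cube_root_sqrt_only(10.0))   # ~2.1544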

  10. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    The bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism for the bats are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425

  11. Line Thinning Algorithm

    NASA Astrophysics Data System (ADS)

    Feigin, G.; Ben-Yosef, N.

    1983-10-01

    A thinning algorithm, of the banana-peel type, is presented. In each iteration pixels are attacked from all directions (there are no sub-iterations), and the deletion criteria depend on the 24 nearest neighbours.

  12. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  13. Algorithmically specialized parallel computers

    SciTech Connect

    Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.

    1985-01-01

    This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.

  14. Iterative algorithms for tridiagonal matrices on a WSI-multiprocessor

    NASA Astrophysics Data System (ADS)

    Gajski, D. D.; Sameh, A. H.; Wisniewski, J. A.

    With the rapid advances in semiconductor technology, the construction of Wafer Scale Integration (WSI)-multiprocessors consisting of a large number of processors is now feasible. The implementation of some basic linear algebra algorithms on such multiprocessors is illustrated.

  15. Interpreting the flock algorithm from a statistical perspective.

    PubMed

    Anderson, Eric C; Barry, Patrick D

    2015-09-01

    We show that the algorithm in the program flock (Duchesne & Turgeon 2009) can be interpreted as an estimation procedure based on a model essentially identical to the structure (Pritchard et al. 2000) model with no admixture and without correlated allele frequency priors. Rather than using MCMC, the flock algorithm searches for the maximum a posteriori estimate of this structure model via a simulated annealing algorithm with a rapid cooling schedule (namely, the exponent on the objective function →∞). We demonstrate the similarities between the two programs in a two-step approach. First, to enable rapid batch processing of many simulated data sets, we modified the source code of structure to use the flock algorithm, producing the program flockture. With simulated data, we confirmed that results obtained with flock and flockture are very similar (though flockture is some 200 times faster). Second, we simulated multiple large data sets under varying levels of population differentiation for both microsatellite and SNP genotypes. We analysed them with flockture and structure and assessed each program on its ability to cluster individuals to their correct subpopulation. We show that flockture yields results similar to structure albeit with greater variability from run to run. flockture did perform better than structure when genotypes were composed of SNPs and differentiation was moderate (FST= 0.022-0.032). When differentiation was low, structure outperformed flockture for both marker types. On large data sets like those we simulated, it appears that flock's reliance on inference rules regarding its 'plateau record' is not helpful. Interpreting flock's algorithm as a special case of the model in structure should aid in understanding the program's output and behaviour. PMID:25913195

  16. OpenEIS Algorithms

    2013-07-29

    The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.

  17. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  18. An Exact Algorithm to Compute the Double-Cut-and-Join Distance for Genomes with Duplicate Genes.

    PubMed

    Shao, Mingfu; Lin, Yu; Moret, Bernard M E

    2015-05-01

    Computing the edit distance between two genomes is a basic problem in the study of genome evolution. The double-cut-and-join (DCJ) model has formed the basis for most algorithmic research on rearrangements over the last few years. The edit distance under the DCJ model can be computed in linear time for genomes without duplicate genes, while the problem becomes NP-hard in the presence of duplicate genes. In this article, we propose an integer linear programming (ILP) formulation to compute the DCJ distance between two genomes with duplicate genes. We also provide an efficient preprocessing approach to simplify the ILP formulation while preserving optimality. Comparison on simulated genomes demonstrates that our method outperforms MSOAR in computing the edit distance, especially when the genomes contain long duplicated segments. We also apply our method to assign orthologous gene pairs among human, mouse, and rat genomes, where once again our method outperforms MSOAR. PMID:25517208
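
    The ILP itself is not sketched here, but the linear-time special case mentioned above (no duplicate genes) is small enough to show. For two circular genomes over the same genes, the DCJ distance is d = N - C, where N is the number of genes and C is the number of cycles in the adjacency graph. The representation below (signed integers, one circular chromosome per genome) is an illustrative simplification.

    def adjacencies(genome):
        """Circular genome as a list of signed ints -> dict mapping each extremity to its neighbour."""
        adj, n = {}, len(genome)
        for i in range(n):
            a, b = genome[i], genome[(i + 1) % n]
            ea = (abs(a), 'h') if a > 0 else (abs(a), 't')   # outgoing extremity of a
            eb = (abs(b), 't') if b > 0 else (abs(b), 'h')   # incoming extremity of b
            adj[ea], adj[eb] = eb, ea
        return adj

    def dcj_distance(genome_a, genome_b):
        """DCJ distance between two circular genomes on the same gene set, no duplicates."""
        A, B = adjacencies(genome_a), adjacencies(genome_b)
        seen, cycles = set(), 0
        for start in A:
            if start in seen:
                continue
            cycles += 1
            cur, use_a = start, True
            while True:                      # walk the cycle, alternating A- and B-adjacencies
                seen.add(cur)
                cur = A[cur] if use_a else B[cur]
                use_a = not use_a
                if cur == start and use_a:
                    break
        return len(genome_a) - cycles

    For example, dcj_distance([1, 2, 3, 4], [1, -3, -2, 4]) returns 1, matching the single inversion that separates the two gene orders.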

  19. Geometrically consistent approach to stochastic DBI inflation

    SciTech Connect

    Lorenz, Larissa; Martin, Jerome; Yokoyama, Jun'ichi

    2010-07-15

    Stochastic effects during inflation can be addressed by averaging the quantum inflaton field over Hubble-patch-sized domains. The averaged field then obeys a Langevin-type equation into which short-scale fluctuations enter as a noise term. We solve the Langevin equation for an inflaton field with a Dirac-Born-Infeld (DBI) kinetic term perturbatively in the noise and use the result to determine the field value's probability density function (PDF). In this calculation, both the shape of the potential and the warp factor are arbitrary functions, and the PDF is obtained with and without volume effects due to the finite size of the averaging domain. DBI kinetic terms typically arise in string-inspired inflationary scenarios in which the scalar field is associated with some distance within the (compact) extra dimensions. The inflaton's accessible range of field values therefore is limited because of the extra dimensions' finite size. We argue that in a consistent stochastic approach the inflaton's PDF must vanish for geometrically forbidden field values. We propose to implement these extra-dimensional spatial restrictions into the PDF by installing absorbing (or reflecting) walls at the respective boundaries in field space. As a toy model, we consider a DBI inflaton between two absorbing walls and use the method of images to determine its most general PDF. The resulting PDF is studied in detail for the example of a quartic warp factor and a chaotic inflaton potential. The presence of the walls is shown to affect the inflaton trajectory for a given set of parameters.
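
    For a field with constant noise amplitude between two absorbing walls at \phi = 0 and \phi = L, the method of images yields the familiar alternating image sum; the DBI case treated in the paper modifies the kernel, so the expression below (with an illustrative diffusion coefficient D) shows only the structure of the construction:

    P(\phi, t \mid \phi_0) = \sum_{n=-\infty}^{\infty} \Big[ G(\phi - \phi_0 + 2nL, t) - G(\phi + \phi_0 + 2nL, t) \Big],
    \qquad G(x, t) = \frac{1}{\sqrt{4\pi D t}} \exp\!\left(-\frac{x^2}{4 D t}\right),

    which vanishes at \phi = 0 and \phi = L, as required for absorbing boundaries.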

  20. System engineering approach to GPM retrieval algorithms

    SciTech Connect

    Rose, C. R.; Chandrasekar, V.

    2004-01-01

    System engineering principles and methods are very useful in large-scale complex systems for developing the engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation (GV) systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and DSD measurement systems and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength surface-reference-technique (SRT) based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be successfully used without the use of the SRT. It uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both No and Do at each range bin. More recently, Liao (2004) proposed a solution to the Do ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting layer model based on stratified spheres. With the No and Do

  1. Multi-modal robust inverse-consistent linear registration.

    PubMed

    Wachinger, Christian; Golland, Polina; Magnain, Caroline; Fischl, Bruce; Reuter, Martin

    2015-04-01

    Registration performance can significantly deteriorate when image regions do not comply with model assumptions. Robust estimation improves registration accuracy by reducing or ignoring the contribution of voxels with large intensity differences, but existing approaches are limited to monomodal registration. In this work, we propose a robust and inverse-consistent technique for cross-modal, affine image registration. The algorithm is derived from a contextual framework of image registration. The key idea is to use a modality invariant representation of images based on local entropy estimation, and to incorporate a heteroskedastic noise model. This noise model allows us to draw the analogy to iteratively reweighted least squares estimation and to leverage existing weighting functions to account for differences in local information content in multimodal registration. Furthermore, we use the nonparametric windows density estimator to reliably calculate entropy of small image patches. Finally, we derive the Gauss-Newton update and show that it is equivalent to the efficient second-order minimization for the fully symmetric registration approach. We illustrate excellent performance of the proposed methods on datasets containing outliers for alignment of brain tumor, full head, and histology images. PMID:25470798
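
    A minimal sketch of the modality-invariant representation described above: map each voxel to the Shannon entropy of the intensity distribution in its local patch, then register the resulting entropy images with a monomodal (e.g. robust least-squares) method. The paper uses a nonparametric-windows density estimator; the plain histogram, patch size, and bin count below are illustrative simplifications.

    import numpy as np
    from scipy.ndimage import generic_filter

    def entropy_image(img, patch=9, bins=32):
        """Replace each pixel by the entropy of the quantized intensities in its patch."""
        lo, hi = float(img.min()), float(img.max())
        q = np.floor((img - lo) / (hi - lo + 1e-12) * (bins - 1))

        def patch_entropy(values):
            counts = np.bincount(values.astype(int), minlength=bins)
            p = counts[counts > 0] / counts.sum()
            return float(-(p * np.log(p)).sum())

        return generic_filter(q, patch_entropy, size=patch, mode='reflect')

    generic_filter is simple but slow; a production implementation would update the patch histograms incrementally.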

  2. Fully consistent CFD methods for incompressible flow computations

    NASA Astrophysics Data System (ADS)

    Kolmogorov, D. K.; Shen, W. Z.; Sørensen, N. N.; Sørensen, J. N.

    2014-06-01

    Collocated-grid CFD methods are nowadays among the most efficient tools for computing flows past wind turbines. To ensure robustness, these methods require special attention to the well-known problem of pressure-velocity coupling. Many commercial codes enforce pressure-velocity coupling on collocated grids with the so-called momentum interpolation method of Rhie and Chow [1]. The method and some of its widespread modifications are known to yield converged solutions that depend on the time step. In this paper the magnitude of that dependence is shown to contribute about 0.5% of the total error in a typical turbulent flow computation. On coarse grids, however, the standard interpolation methods behave far less consistently. To overcome the problem, a recently developed interpolation method that is independent of the time step is used. It is shown that, compared with another time-step-independent method, the method may enhance the convergence rate of the SIMPLEC algorithm by up to 25%. The method is verified using turbulent flow computations around a NACA 64618 airfoil and the roll-up of a shear layer, as may appear in a wind turbine wake.
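
    For reference, the standard Rhie-Chow momentum-interpolated face velocity on a collocated grid can be written (in one dimension, with overbars denoting linear interpolation of cell-centre values to the face) as

    u_f = \bar{u}_f + \bar{d}_f \left[ \overline{\left(\frac{\partial p}{\partial x}\right)}_f - \left(\frac{\partial p}{\partial x}\right)_f \right], \qquad d_P = \frac{V_P}{a_P},

    i.e. the interpolated velocity plus a pressure-smoothing correction. The time-step dependence discussed above enters through additional terms proportional to the time step in transient variants of this interpolation; the specific time-step-independent correction used in the paper is not reproduced here.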

  3. A new mixed self-consistent field procedure

    NASA Astrophysics Data System (ADS)

    Alvarez-Ibarra, A.; Köster, A. M.

    2015-10-01

    A new approach for the calculation of three-centre electronic repulsion integrals (ERIs) is developed, implemented and benchmarked in the framework of auxiliary density functional theory (ADFT). The so-called mixed self-consistent field (mixed SCF) divides the computationally costly ERIs into two sets: far-field and near-field. Far-field ERIs are calculated using the newly developed double asymptotic expansion, as in the direct SCF scheme. Near-field ERIs are calculated only once prior to the SCF procedure and stored in memory, as in the conventional SCF scheme. Hence the name, mixed SCF. The implementation is particularly powerful on parallel architectures, since all available RAM is used for near-field ERI storage. In addition, the efficient distribution algorithm performs minimal intercommunication operations between processors, avoiding a potential bottleneck. One-, two- and three-dimensional systems are used for benchmarking, showing substantial time reduction in the ERI calculation for all of them. A Born-Oppenheimer molecular dynamics calculation for the Na55+ cluster is also shown in order to demonstrate the speed-up achievable for small systems with the mixed SCF. Dedicated to Sourav Pal on the occasion of his 60th birthday.

  4. Multi-Modal Robust Inverse-Consistent Linear Registration

    PubMed Central

    Wachinger, Christian; Golland, Polina; Magnain, Caroline; Fischl, Bruce; Reuter, Martin

    2016-01-01

    Registration performance can significantly deteriorate when image regions do not comply with model assumptions. Robust estimation improves registration accuracy by reducing or ignoring the contribution of voxels with large intensity differences, but existing approaches are limited to monomodal registration. In this work, we propose a robust and inverse-consistent technique for cross-modal, affine image registration. The algorithm is derived from a contextual framework of image registration. The key idea is to use a modality invariant representation of images based on local entropy estimation, and to incorporate a heteroskedastic noise model. This noise model allows us to draw the analogy to iteratively reweighted least squares estimation and to leverage existing weighting functions to account for differences in local information content in multimodal registration. Furthermore, we use the nonparametric windows density estimator to reliably calculate entropy of small image patches. Finally, we derive the Gauss–Newton update and show that it is equivalent to the efficient second-order minimization for the fully symmetric registration approach. We illustrate excellent performance of the proposed methods on datasets containing outliers for alignment of brain tumor, full head, and histology images. PMID:25470798

  5. Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge.

    PubMed

    Litjens, Geert; Toth, Robert; van de Ven, Wendy; Hoeks, Caroline; Kerkstra, Sjoerd; van Ginneken, Bram; Vincent, Graham; Guillard, Gwenael; Birbeck, Neil; Zhang, Jindang; Strand, Robin; Malmberg, Filip; Ou, Yangming; Davatzikos, Christos; Kirschner, Matthias; Jung, Florian; Yuan, Jing; Qiu, Wu; Gao, Qinquan; Edwards, Philip Eddie; Maan, Bianca; van der Heijden, Ferdinand; Ghose, Soumya; Mitra, Jhimli; Dowling, Jason; Barratt, Dean; Huisman, Henkjan; Madabhushi, Anant

    2014-02-01

    Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Especially because we are dealing with MR images, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we will discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety in methods and implementation, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary and volume based metrics which were combined into a single score relating the metrics to human expert performance. The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with scores of 85.72 and 84.29 overall. Both algorithms were significantly better than all other algorithms in the challenge (p<0.05) and had efficient implementations with run times of 8 min and 3 s per case, respectively. Overall, active appearance model based approaches seemed to outperform other approaches like multi
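
    Evaluations of this kind combine volume-overlap and boundary metrics. As one illustrative ingredient (not the full PROMISE12 score, which also uses boundary distances and relates results to expert performance), the Dice similarity coefficient between a candidate and a reference mask can be computed as:

    import numpy as np

    def dice_coefficient(seg, ref):
        """Dice overlap between two binary segmentation masks (1.0 = perfect agreement)."""
        seg, ref = seg.astype(bool), ref.astype(bool)
        denom = seg.sum() + ref.sum()
        return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0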

  6. Sampling Within k-Means Algorithm to Cluster Large Datasets

    SciTech Connect

    Bejarano, Jeremy; Bose, Koushiki; Brannan, Tyler; Thomas, Anita; Adragni, Kofi; Neerchal, Nagaraj; Ostrouchov, George

    2011-08-01

    Due to current data collection technology, our ability to gather data has surpassed our ability to analyze it. In particular, k-means, one of the simplest and fastest clustering algorithms, is ill-equipped to handle extremely large datasets on even the most powerful machines. Our new algorithm uses a sample from a dataset to decrease runtime by reducing the amount of data analyzed. We perform a simulation study to compare our sampling-based k-means to the standard k-means algorithm by analyzing both the speed and accuracy of the two methods. Results show that our algorithm is significantly more efficient than the existing algorithm with comparable accuracy. Further work on this project might include a more comprehensive study on both more varied test datasets and real weather datasets. This is especially important considering that this preliminary study was performed on rather tame datasets. Such a study should also analyze the performance of the algorithm for varied values of k. Lastly, this paper showed that the algorithm was accurate for relatively low sample sizes. We would like to analyze this further to see how accurate the algorithm is for even lower sample sizes. We could find the lowest sample sizes, by manipulating width and confidence level, for which the algorithm would be acceptably accurate. In order for our algorithm to be a success, it needs to meet two benchmarks: match the accuracy of the standard k-means algorithm and significantly reduce runtime. Both goals are accomplished for all six datasets analyzed. However, on datasets of three and four dimensions, as the data becomes more difficult to cluster, both algorithms fail to obtain the correct classifications on some trials. Nevertheless, our algorithm consistently matches the performance of the standard algorithm while running in remarkably less time. Therefore, we conclude that analysts can use our algorithm, expecting accurate results in considerably less time.
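
    A minimal sketch of the idea: fit k-means on a random sample, then assign every point of the full dataset to the nearest sample-fitted centre. The report additionally chooses the sample size from width and confidence-level criteria; the fixed sample fraction and plain Lloyd iterations below are illustrative assumptions.

    import numpy as np

    def sampled_kmeans(X, k, sample_frac=0.1, n_iter=100, seed=None):
        """Cluster a large dataset by running Lloyd's algorithm on a random sample."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(X), size=max(k, int(sample_frac * len(X))), replace=False)
        S = X[idx]
        centers = S[rng.choice(len(S), size=k, replace=False)]
        for _ in range(n_iter):
            labels = ((S[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
            new_centers = np.array([S[labels == j].mean(0) if np.any(labels == j) else centers[j]
                                    for j in range(k)])
            if np.allclose(new_centers, centers):
                break
            centers = new_centers
        # single pass over the full dataset (chunk this step in practice for very large X)
        full_labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        return centers, full_labels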

  7. Algorithm for Finding Similar Shapes in Large Molecular Structures Libraries

    1994-10-19

    The SHAPES software consists of methods and algorithms for representing and rapidly comparing molecular shapes. Molecular shapes algorithms are a class of algorithm derived and applied for recognizing when two three-dimensional shapes share common features. They proceed from the notion that the shapes to be compared are regions in three-dimensional space. The algorithms allow recognition of when localized subregions from two or more different shapes could never be superimposed by any rigid-body motion. Rigid-body motions are arbitrary combinations of translations and rotations.

  8. A fast algorithm for numerical solutions to Fortet's equation

    NASA Astrophysics Data System (ADS)

    Brumen, Gorazd

    2008-10-01

    A fast algorithm for computing default times of multiple firms in a structural model is presented. The algorithm uses a multivariate extension of Fortet's equation and the structure of Toeplitz matrices to significantly reduce the computation time. In a financial market consisting of M (not ≫ 1) firms and N discretization points in every dimension, the algorithm uses O(n log n · M · M! · N^(M(M-1)/2)) operations, where n is the number of discretization points in the time domain. The algorithm is applied to firm survival probability computation and zero-coupon bond pricing.
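
    The Toeplitz structure is what makes the n log n factor possible: a Toeplitz matrix can be embedded in a circulant one and applied to a vector with FFTs instead of a dense O(n^2) product. A minimal sketch of that building block (not the full Fortet solver):

    import numpy as np

    def toeplitz_matvec(c, r, x):
        """Multiply the Toeplitz matrix with first column c and first row r by x, in O(n log n)."""
        n = len(c)
        circ = np.concatenate([c, r[:0:-1]])       # first column of the circulant embedding
        y = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(x, len(circ)))
        return y[:n].real

    The result can be checked against scipy.linalg.toeplitz(c, r) @ x for small n.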

  9. Contrast-Based 3D/2D Registration of the Left Atrium: Fast versus Consistent

    PubMed Central

    Kowalewski, Christopher; Kurzidim, Klaus; Strobel, Norbert; Hornegger, Joachim

    2016-01-01

    For augmented fluoroscopy during cardiac ablation, a preoperatively acquired 3D model of a patient's left atrium (LA) can be registered to X-ray images recorded during a contrast agent (CA) injection. An automatic registration method that works also for small amounts of CA is desired. We propose two similarity measures: The first focuses on edges of the patient anatomy. The second computes a contrast agent distribution estimate (CADE) inside the 3D model and rates its consistency with the CA as seen in biplane fluoroscopic images. Moreover, temporal filtering on the obtained registration results of a sequence is applied using a Markov chain framework. Evaluation was performed on 11 well-contrasted clinical angiographic sequences and 10 additional sequences with less CA. For well-contrasted sequences, the error for all 73 frames was 7.9 ± 6.3 mm and it dropped to 4.6 ± 4.0 mm when registering to an automatically selected, well enhanced frame in each sequence. Temporal filtering reduced the error for all frames from 7.9 ± 6.3 mm to 5.7 ± 4.6 mm. The error was typically higher if less CA was used. A combination of both similarity measures outperforms a previously proposed similarity measure. The mean accuracy for well contrasted sequences is in the range of other proposed manual registration methods. PMID:27051412

  10. Contrast-Based 3D/2D Registration of the Left Atrium: Fast versus Consistent.

    PubMed

    Hoffmann, Matthias; Kowalewski, Christopher; Maier, Andreas; Kurzidim, Klaus; Strobel, Norbert; Hornegger, Joachim

    2016-01-01

    For augmented fluoroscopy during cardiac ablation, a preoperatively acquired 3D model of a patient's left atrium (LA) can be registered to X-ray images recorded during a contrast agent (CA) injection. An automatic registration method that works also for small amounts of CA is desired. We propose two similarity measures: The first focuses on edges of the patient anatomy. The second computes a contrast agent distribution estimate (CADE) inside the 3D model and rates its consistency with the CA as seen in biplane fluoroscopic images. Moreover, temporal filtering on the obtained registration results of a sequence is applied using a Markov chain framework. Evaluation was performed on 11 well-contrasted clinical angiographic sequences and 10 additional sequences with less CA. For well-contrasted sequences, the error for all 73 frames was 7.9 ± 6.3 mm and it dropped to 4.6 ± 4.0 mm when registering to an automatically selected, well enhanced frame in each sequence. Temporal filtering reduced the error for all frames from 7.9 ± 6.3 mm to 5.7 ± 4.6 mm. The error was typically higher if less CA was used. A combination of both similarity measures outperforms a previously proposed similarity measure. The mean accuracy for well contrasted sequences is in the range of other proposed manual registration methods. PMID:27051412

  11. Filtering algorithm for dotted interferences

    NASA Astrophysics Data System (ADS)

    Osterloh, K.; Bücherl, T.; Lierse von Gostomski, Ch.; Zscherpel, U.; Ewert, U.; Bock, S.

    2011-09-01

    An algorithm has been developed to reliably remove dotted interferences that impair the perceptibility of objects within a radiographic image. This is a particular challenge for neutron radiographs collected at the NECTAR facility, Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II): the resulting images are dominated by features resembling a snow flurry. These artefacts are caused by scattered neutrons, gamma radiation, cosmic radiation, etc., all hitting the detector CCD directly in spite of sophisticated shielding. This makes such images rather useless for further direct evaluation. One approach to this problem of random effects would be to collect a vast number of single images, combine them appropriately and process them with common image filtering procedures. However, it has been shown that median filtering, for example, depending on the kernel size in the plane and/or the number of single shots combined, is either insufficient or tends to blur sharply lined structures. This makes visually controlled processing, image by image, unavoidable. Particularly in tomographic studies, it would be far too tedious to treat each single projection this way. Alternatively, it would be not only more convenient but in many cases the only reasonable approach to filter a stack of images in a batch procedure to remove the disturbing interferences. The algorithm presented here meets all these requirements. It reliably frees the images from the snowy pattern described above without loss of fine structures and without general blurring of the image. It is an iterative filtering algorithm, parameter-free within a batch procedure, that aims to eliminate the often complex interfering artefacts while leaving the original information as untouched as possible.
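
    The exact NECTAR batch filter is not reproduced here, but the core idea of removing dots without blurring can be sketched as a selective, iterated median replacement: only pixels that deviate strongly from their local median are overwritten, and the pass is repeated until no outliers remain. The window size and the threshold factor k are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import median_filter

    def despeckle(img, size=3, k=5.0, max_iter=10):
        """Iteratively replace only those pixels that deviate strongly from their local median."""
        out = img.astype(float).copy()
        for _ in range(max_iter):
            med = median_filter(out, size=size)
            resid = out - med
            mad = np.median(np.abs(resid)) + 1e-12          # robust scale of the residuals
            outliers = np.abs(resid) > k * 1.4826 * mad
            if not outliers.any():
                break
            out[outliers] = med[outliers]                    # untouched pixels keep their values
        return out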

  12. An Enhanced Artificial Bee Colony Algorithm with Solution Acceptance Rule and Probabilistic Multisearch

    PubMed Central

    Yurtkuran, Alkın

    2016-01-01

    The artificial bee colony (ABC) algorithm is a popular swarm-based technique inspired by the intelligent foraging behavior of honeybee swarms. This paper proposes a new variant of the ABC algorithm, namely, enhanced ABC with solution acceptance rule and probabilistic multisearch (ABC-SA), to address global optimization problems. A new solution acceptance rule is proposed where, instead of greedy selection between the old solution and the new candidate solution, worse candidate solutions have a probability of being accepted. Additionally, the acceptance probability of worse candidates is nonlinearly decreased throughout the search process adaptively. Moreover, in order to improve the performance of the ABC and balance intensification and diversification, a probabilistic multisearch strategy is presented. Three different search equations with distinct characteristics are employed using predetermined search probabilities. By implementing a new solution acceptance rule and a probabilistic multisearch approach, the intensification and diversification performance of the ABC algorithm is improved. The proposed algorithm has been tested on well-known benchmark functions of varying dimensions by comparing against novel ABC variants, as well as several recent state-of-the-art algorithms. Computational results show that the proposed ABC-SA outperforms other ABC variants and is superior to state-of-the-art algorithms proposed in the literature. PMID:26819591
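
    A minimal sketch of the acceptance rule described above (not the full ABC-SA, whose three search equations and exact schedule are not reproduced): an improving candidate is always kept, while a worse candidate is accepted with a probability that decays nonlinearly over the run. The initial probability p0 and the exponent are illustrative assumptions.

    import numpy as np

    def accept(old_cost, new_cost, iteration, max_iter, p0=0.3, power=3.0, rng=None):
        """Solution acceptance rule: greedy on improvement, probabilistic on deterioration."""
        rng = rng or np.random.default_rng()
        if new_cost <= old_cost:
            return True
        # acceptance probability for worse candidates decays nonlinearly toward zero
        p = p0 * (1.0 - iteration / max_iter) ** power
        return rng.random() < p

    In an ABC loop this test simply replaces the usual greedy comparison between the old food source and its new candidate.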

  13. Algorithm for protecting light-trees in survivable mesh wavelength-division-multiplexing networks

    NASA Astrophysics Data System (ADS)

    Luo, Hongbin; Li, Lemin; Yu, Hongfang

    2006-12-01

    Wavelength-division-multiplexing (WDM) technology is expected to facilitate bandwidth-intensive multicast applications such as high-definition television. A single fiber cut in a WDM mesh network, however, can disrupt the dissemination of information to several destinations on a light-tree based multicast session. Thus it is imperative to protect multicast sessions by reserving redundant resources. We propose a novel and efficient algorithm for protecting light-trees in survivable WDM mesh networks. The algorithm is called segment-based protection with sister node first (SSNF), whose basic idea is to protect a light-tree using a set of backup segments with a higher priority to protect the segments from a branch point to its children (sister nodes). The SSNF algorithm differs from the segment protection scheme proposed in the literature in how the segments are identified and protected. Our objective is to minimize the network resources used for protecting each primary light-tree such that the blocking probability can be minimized. To verify the effectiveness of the SSNF algorithm, we conduct extensive simulation experiments. The simulation results demonstrate that the SSNF algorithm outperforms existing algorithms for the same problem.

  14. A Computationally Efficient Mel-Filter Bank VAD Algorithm for Distributed Speech Recognition Systems

    NASA Astrophysics Data System (ADS)

    Vlaj, Damjan; Kotnik, Bojan; Horvat, Bogomir; Kačič, Zdravko

    2005-12-01

    This paper presents a novel computationally efficient voice activity detection (VAD) algorithm and emphasizes the importance of such algorithms in distributed speech recognition (DSR) systems. When using VAD algorithms in telecommunication systems, the required capacity of the speech transmission channel can be reduced if only the speech parts of the signal are transmitted. A similar objective can be adopted in DSR systems, where the nonspeech parameters are not sent over the transmission channel. A novel approach is proposed for VAD decisions based on mel-filter bank (MFB) outputs with the so-called Hangover criterion. Comparative tests are presented between the proposed MFB VAD algorithm and the three VAD algorithms used in the G.729, G.723.1, and DSR (advanced front-end) standards. These tests were made on the Aurora 2 database at different signal-to-noise ratios (SNRs). In the speech recognition tests, the proposed MFB VAD outperformed all three standard VAD algorithms (G.723.1, G.729, and DSR) at all SNRs, by relative margins reported in the full text.
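
    A minimal sketch of an MFB-energy VAD with a hangover counter, assuming librosa for the mel filter bank; the noise-floor estimate, threshold margin and hangover length are illustrative assumptions rather than the parameters of the standardized or proposed algorithms.

    import numpy as np
    import librosa

    def mfb_vad(y, sr, n_mels=23, n_fft=400, hop=160, margin_db=6.0, hangover=8):
        """Frame-level speech/non-speech decisions from mel-filter-bank energies."""
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft, hop_length=hop, n_mels=n_mels)
        e = 10.0 * np.log10(mel.sum(axis=0) + 1e-12)       # per-frame log energy over the MFB
        noise_floor = np.percentile(e, 10)                  # crude noise-floor estimate
        raw = e > noise_floor + margin_db
        vad, count = np.zeros(len(raw), dtype=bool), 0
        for i, is_speech in enumerate(raw):                 # hangover: hold the decision briefly
            count = hangover if is_speech else max(count - 1, 0)
            vad[i] = count > 0
        return vad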

  15. An Enhanced Artificial Bee Colony Algorithm with Solution Acceptance Rule and Probabilistic Multisearch.

    PubMed

    Yurtkuran, Alkın; Emel, Erdal

    2016-01-01

    The artificial bee colony (ABC) algorithm is a popular swarm-based technique inspired by the intelligent foraging behavior of honeybee swarms. This paper proposes a new variant of the ABC algorithm, namely, enhanced ABC with solution acceptance rule and probabilistic multisearch (ABC-SA), to address global optimization problems. A new solution acceptance rule is proposed where, instead of greedy selection between the old solution and the new candidate solution, worse candidate solutions have a probability of being accepted. Additionally, the acceptance probability of worse candidates is nonlinearly decreased throughout the search process adaptively. Moreover, in order to improve the performance of the ABC and balance intensification and diversification, a probabilistic multisearch strategy is presented. Three different search equations with distinct characteristics are employed using predetermined search probabilities. By implementing a new solution acceptance rule and a probabilistic multisearch approach, the intensification and diversification performance of the ABC algorithm is improved. The proposed algorithm has been tested on well-known benchmark functions of varying dimensions by comparing against novel ABC variants, as well as several recent state-of-the-art algorithms. Computational results show that the proposed ABC-SA outperforms other ABC variants and is superior to state-of-the-art algorithms proposed in the literature. PMID:26819591

  16. Feature weighted naïve Bayes algorithm for information retrieval of enterprise systems

    NASA Astrophysics Data System (ADS)

    Wang, Li; Ji, Ping; Qi, Jing; Shan, Siqing; Bi, Zhuming; Deng, Weiguo; Zhang, Naijing

    2014-01-01

    Automated information retrieval is critical for enterprise information systems to acquire knowledge from vast amounts of data. One challenge in information retrieval is text classification. Current practices rely heavily on the classical naïve Bayes algorithm due to its simplicity and robustness. However, results from this algorithm are not always satisfactory. In this article, the limitations of the naïve Bayes algorithm are discussed, and it is found that the assumption of term independence is the main reason for unsatisfactory classification in many real-world applications. To overcome the limitations, the dependencies between terms are taken into account by integrating a term frequency-inverse document frequency (TF-IDF) weighting algorithm into the naïve Bayes classification. Moreover, the TF-IDF algorithm itself is improved so that both frequencies and distribution information are taken into consideration. To illustrate the effectiveness of the proposed method, two simulation experiments were conducted; comparisons with other classification methods showed that the proposed method outperforms existing algorithms in terms of precision and recall.
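
    As a rough off-the-shelf analogue of the idea (feeding term weights rather than raw counts into naive Bayes), scikit-learn's TF-IDF vectorizer can be chained with a multinomial naive Bayes classifier; the paper's own modified TF-IDF weighting is not reproduced, and the toy documents below are purely illustrative.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    docs = ["invoice overdue payment", "server outage incident report",
            "quarterly revenue forecast", "database backup failed"]
    labels = ["finance", "it", "finance", "it"]

    # TF-IDF weights, not raw term counts, drive the naive Bayes class posteriors
    clf = make_pipeline(TfidfVectorizer(sublinear_tf=True), MultinomialNB())
    clf.fit(docs, labels)
    print(clf.predict(["payment forecast for next quarter"]))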

  17. Classifying scaled and rotated textures using a region-matched algorithm

    NASA Astrophysics Data System (ADS)

    Yao, Chih-Chia; Chen, Yu-Tin

    2012-07-01

    A novel method is discussed for correcting texture variations resulting from scale magnification, narrowing caused by cropping to the original size, or spatial rotation. Such variations usually occur in images captured by a camera using different focal lengths. A representative region-matched algorithm is developed to improve texture classification after magnification, narrowing, and spatial rotation. Using a minimum ellipse, the representative region-matched algorithm encloses a specific region extracted by the J-image segmentation algorithm. After translating the coordinates, the equation of an ellipse in the rotated texture can be formulated as that of an ellipse in the original texture. The rotation-invariant property of the ellipse provides an efficient means of identifying rotated textures. Additionally, the scale-variant representative region can be classified by adopting scale-invariant parameters. Moreover, a hybrid texture filter is developed. In the hybrid texture filter, the texture feature extraction scheme combines the Gabor wavelet with the representative region-matched algorithm. Support vector machines are introduced as the classifier. The proposed hybrid texture filter performs excellently in classifying both stochastic and structural textures. Furthermore, experimental results demonstrate that the proposed algorithm outperforms conventional design algorithms.
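
    The representative region-matching step is not reproduced here; the sketch below only shows the Gabor-feature extraction and SVM classification that the hybrid filter builds on. Filter frequencies, orientations, and the choice of mean/variance statistics are illustrative assumptions.

    import numpy as np
    from skimage.filters import gabor
    from sklearn.svm import SVC

    def gabor_features(img, frequencies=(0.1, 0.2, 0.4), thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
        """Mean and variance of Gabor magnitude responses as a simple texture descriptor."""
        feats = []
        for f in frequencies:
            for t in thetas:
                real, imag = gabor(img, frequency=f, theta=t)
                mag = np.hypot(real, imag)
                feats += [mag.mean(), mag.var()]
        return np.array(feats)

    # X = np.array([gabor_features(p) for p in training_patches]); y = their texture labels
    # clf = SVC(kernel='rbf').fit(X, y); clf.predict(gabor_features(test_patch)[None, :])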

  18. A hybrid algorithm for robust acoustic source localization in noisy and reverberant environments

    NASA Astrophysics Data System (ADS)

    Rajagopalan, Ramesh; Dessonville, Timothy

    2014-09-01

    Acoustic source localization using microphone arrays is widely used in videoconferencing and surveillance systems. However, it remains a challenging task to develop efficient algorithms for accurate estimation of the source location using distributed data processing. In this work, we propose a new hybrid algorithm for efficient localization of a speaker in noisy and reverberant environments such as videoconferencing. The algorithm combines the generalized cross-correlation phase transform method (GCC-PHAT) with Tabu search to obtain a robust and accurate estimate of the speaker location. The Tabu search algorithm iteratively improves the time difference of arrival (TDOA) estimate of GCC-PHAT by examining neighboring solutions until the TDOA value converges. Experiments were performed on real-world data recorded in a meeting room in the presence of noise from sources such as computers and fans. Our results demonstrate that the proposed hybrid algorithm outperforms GCC-PHAT, especially when the noise level is high. This shows the robustness of the proposed algorithm in noisy and realistic videoconferencing systems.
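
    A minimal sketch of the GCC-PHAT building block (the Tabu-search refinement described above is not reproduced): cross-power spectrum, phase-transform whitening, inverse FFT, and a peak search within the physically plausible delay range.

    import numpy as np

    def gcc_phat(sig, ref, fs, max_tau=None, interp=16):
        """Estimate the time difference of arrival (seconds) between two microphone signals."""
        n = len(sig) + len(ref)
        R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
        R /= np.abs(R) + 1e-12                    # PHAT weighting: keep phase, discard magnitude
        cc = np.fft.irfft(R, n=interp * n)
        max_shift = interp * n // 2
        if max_tau is not None:
            max_shift = min(int(interp * fs * max_tau), max_shift)
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        shift = np.argmax(np.abs(cc)) - max_shift
        return shift / float(interp * fs)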

  19. Inferring functional connectivity in MRI using Bayesian network structure learning with a modified PC algorithm.

    PubMed

    Iyer, Swathi P; Shafran, Izhak; Grayson, David; Gates, Kathleen; Nigg, Joel T; Fair, Damien A

    2013-07-15

    Resting state functional connectivity MRI (rs-fcMRI) is a popular technique used to gauge the functional relatedness between regions in the brain for typical and special populations. Most of the work to date determines this relationship by using Pearson's correlation on BOLD fMRI time series. However, it has been recognized that there are at least two key limitations to this method. First, it is not possible to resolve direct and indirect connections/influences. Second, the direction of information flow between the regions cannot be differentiated. In the current paper, we follow up on recent work by Smith et al. (2011) and apply the PC algorithm to both simulated and empirical data to determine whether these two factors can be discerned with group-average, as opposed to single-subject, functional connectivity data. When applied to simulated individual subjects, the algorithm performs well in determining direct and indirect connections but fails to determine directionality. However, when applied at the group level, the PC algorithm gives strong results for both direct and indirect connections and the direction of information flow. Applying the algorithm to empirical data, using a diffusion-weighted imaging (DWI) structural connectivity matrix as the baseline, the PC algorithm outperformed direct correlations. We conclude that, under certain conditions, the PC algorithm leads to an improved estimate of brain network structure compared to the traditional connectivity analysis based on correlations. PMID:23501054
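
    The full PC algorithm adds higher-order conditioning sets and edge-orientation rules; the sketch below shows only the first steps of skeleton estimation on region time series, removing an edge whenever the correlation, or the partial correlation given any single third region, is insignificant by Fisher's z-test. The significance level is an illustrative assumption.

    import numpy as np
    from scipy import stats

    def pc_skeleton_order1(ts, alpha=0.01):
        """ts: (timepoints, regions). Returns a boolean adjacency matrix after 0th/1st-order tests."""
        n_t, p = ts.shape
        C = np.corrcoef(ts, rowvar=False)
        adj = ~np.eye(p, dtype=bool)

        def independent(r, n_cond):
            z = np.arctanh(r) * np.sqrt(n_t - n_cond - 3)    # Fisher z-transform
            return 2 * (1 - stats.norm.cdf(abs(z))) > alpha

        for i in range(p):
            for j in range(i + 1, p):
                if independent(C[i, j], 0):
                    adj[i, j] = adj[j, i] = False
                    continue
                for k in range(p):
                    if k in (i, j):
                        continue
                    r = ((C[i, j] - C[i, k] * C[j, k]) /
                         np.sqrt((1 - C[i, k] ** 2) * (1 - C[j, k] ** 2)))
                    if independent(r, 1):                    # drop edge i-j given region k
                        adj[i, j] = adj[j, i] = False
                        break
        return adj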

  20. A MPR optimization algorithm for FSO communication system with star topology

    NASA Astrophysics Data System (ADS)

    Zhao, Linlin; Chi, Xuefen; Li, Peng; Guan, Lin

    2015-12-01

    In this paper, we introduce the multi-packet reception (MPR) technology to the outdoor free space optical (FSO) communication system to provide excellent throughput gain. Hence, we address two challenges: how to realize the MPR technology in the varying atmospheric turbulence channel and how to adjust the MPR capability to support as many devices transmitting simultaneously as possible in the system with bit error rate (BER) constraints. Firstly, we explore the reliability ordering with minimum mean square error successive interference cancellation (RO-MMSE-SIC) algorithm to realize the MPR technology in the FSO communication system and derive the closed-form BER expression of the RO-MMSE-SIC algorithm. Then, based on the derived BER expression, we propose the adaptive MPR capability optimization algorithm so that the MPR capability is adapted to different turbulence channel states. Consequently, the excellent throughput gain is obtained in the varying atmospheric channel. The simulation results show that our RO-MMSE-SIC algorithm outperforms the conventional MMSE-SIC algorithm. And the derived exact BER expression is verified by Monte Carlo simulations. The validity and the indispensability of the proposed adaptive MPR capability optimization algorithm are verified as well.
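
    The RO-MMSE-SIC detector and the FSO channel model are not reproduced here; the sketch below shows the generic ordered MMSE-SIC loop that such a receiver is built around: filter the not-yet-detected streams with an MMSE filter, detect the one judged most reliable, cancel its contribution, and repeat. The reliability proxy and the BPSK constellation are illustrative assumptions.

    import numpy as np

    def mmse_sic(H, y, noise_var, constellation=np.array([-1.0, 1.0])):
        """Detect K streams from y = H s + n by ordered MMSE filtering and successive cancellation."""
        H, y = H.astype(complex), y.astype(complex).copy()
        remaining = list(range(H.shape[1]))
        s_hat = np.zeros(H.shape[1], dtype=complex)
        while remaining:
            Hr = H[:, remaining]
            W = np.linalg.solve(Hr.conj().T @ Hr + noise_var * np.eye(len(remaining)), Hr.conj().T)
            est = W @ y
            # crude reliability ordering: largest estimate relative to the filter's noise gain
            best = int(np.argmax(np.abs(est) / (np.linalg.norm(W, axis=1) + 1e-12)))
            k = remaining[best]
            s_hat[k] = constellation[np.argmin(np.abs(constellation - est[best]))]
            y -= H[:, k] * s_hat[k]                          # cancel the detected stream
            remaining.pop(best)
        return s_hat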