Matrix product operators, matrix product states, and ab initio density matrix renormalization group algorithms
NASA Astrophysics Data System (ADS)
Chan, Garnet Kin-Lic; Keselman, Anna; Nakatani, Naoki; Li, Zhendong; White, Steven R.
2016-07-01
Current descriptions of the ab initio density matrix renormalization group (DMRG) algorithm use two superficially different languages: an older language of the renormalization group and renormalized operators, and a more recent language of matrix product states and matrix product operators. The same algorithm can appear dramatically different when written in the two different vocabularies. In this work, we carefully describe the translation between the two languages in several contexts. First, we describe how to efficiently implement the ab initio DMRG sweep using a matrix product operator based code, and the equivalence to the original renormalized operator implementation. Next we describe how to implement the general matrix product operator/matrix product state algebra within a pure renormalized operator-based DMRG code. Finally, we discuss two improvements of the ab initio DMRG sweep algorithm motivated by matrix product operator language: Hamiltonian compression, and a sum over operators representation that allows for perfect computational parallelism. The connections and correspondences described here serve to link the future developments with the past and are important in the efficient implementation of continuing advances in ab initio DMRG and related algorithms.
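To make the MPO/MPS vocabulary concrete, here is a minimal numpy sketch (our toy model and invented names, not the paper's ab initio code): the transverse-field Ising Hamiltonian written as an MPO in the standard triangular form, contracted against a random MPS and cross-checked against the dense matrix.

```python
import numpy as np

# Pauli matrices for a spin-1/2 chain
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def ising_mpo(L, J=1.0, g=0.5):
    """MPO tensors W[b_left, b_right, s_out, s_in] for
    H = -J sum_i Z_i Z_{i+1} - g sum_i X_i (standard triangular form)."""
    W = np.zeros((3, 3, 2, 2))
    W[0, 0] = I2
    W[1, 0] = Z
    W[2, 0] = -g * X
    W[2, 1] = -J * Z
    W[2, 2] = I2
    return [W[2:3]] + [W] * (L - 2) + [W[:, 0:1]]   # boundary row/column

def random_mps(L, chi=8, rng=np.random.default_rng(0)):
    """Random (unnormalized) MPS tensors A[a_left, s, a_right]."""
    dims = [1] + [chi] * (L - 1) + [1]
    return [rng.standard_normal((dims[i], 2, dims[i + 1])) for i in range(L)]

def expectation(mps, mpo):
    """<psi|H|psi> by sweeping a three-leg environment E[a_ket, b, a_bra]."""
    E = np.ones((1, 1, 1))
    for A, W in zip(mps, mpo):
        E = np.einsum('abc,asx,bwts,cty->xwy', E, A, W, A.conj())
    return E[0, 0, 0]

def mps_to_vector(mps):
    """Contract the MPS into a dense vector for cross-checking."""
    v = np.ones((1, 1))
    for A in mps:
        v = np.einsum('pa,asb->psb', v, A).reshape(-1, A.shape[2])
    return v[:, 0]

def dense_ising(L, J=1.0, g=0.5):
    def kron_all(ops):
        out = np.array([[1.0]])
        for o in ops:
            out = np.kron(out, o)
        return out
    H = np.zeros((2 ** L, 2 ** L))
    for i in range(L - 1):
        H -= J * kron_all([Z if j in (i, i + 1) else I2 for j in range(L)])
    for i in range(L):
        H -= g * kron_all([X if j == i else I2 for j in range(L)])
    return H

L = 6
mps, mpo = random_mps(L), ising_mpo(L)
psi = mps_to_vector(mps)
print(np.allclose(expectation(mps, mpo), psi @ dense_ising(L) @ psi))  # True
```

The renormalized-operator language of the older DMRG literature corresponds to the boundary environment `E` above: its middle (MPO) leg indexes exactly the set of partially built operators carried across the block boundary.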
Application of the DMRG in two dimensions: a parallel tempering algorithm
NASA Astrophysics Data System (ADS)
Hu, Shijie; Zhao, Jize; Zhang, Xuefeng; Eggert, Sebastian
The Density Matrix Renormalization Group (DMRG) is known to be a powerful algorithm for treating one-dimensional systems. When the DMRG is applied in two dimensions, however, the convergence becomes much less reliable and "metastable states" typically appear, which are unfortunately quite robust even when a very high number of DMRG states is kept. To overcome this problem we have now successfully developed a parallel tempering DMRG algorithm. Similar to parallel tempering in quantum Monte Carlo, this algorithm allows the systematic switching of DMRG states between different model parameters, which is very efficient for solving convergence problems. Using this method we have determined the phase diagram of the xxz model on the anisotropic triangular lattice, which can be realized by hardcore bosons in optical lattices. This work was supported by the SFB Transregio 49 of the Deutsche Forschungsgemeinschaft (DFG) and the Allianz für Hochleistungsrechnen Rheinland-Pfalz (AHRP).
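The swap step at the heart of such a scheme can be sketched generically. The following Python skeleton is entirely our toy illustration: the scalar "states", the quadratic energy, and the greedy swap rule are placeholders for full DMRG wavefunctions, energies, and the authors' actual exchange criterion, which we do not reproduce here.

```python
import random

random.seed(42)

def energy(state, param):
    return (state - param) ** 2           # toy energy landscape per parameter

def dmrg_sweep(state, param):
    return state + 0.5 * (param - state)  # stand-in for one optimization sweep

params = [0.0, 0.5, 1.0, 1.5]             # e.g. points along an anisotropy axis
states = [random.uniform(-2.0, 2.0) for _ in params]

for sweep in range(20):
    states = [dmrg_sweep(s, p) for s, p in zip(states, params)]
    for i in range(len(params) - 1):      # try swapping neighboring replicas
        e_old = energy(states[i], params[i]) + energy(states[i + 1], params[i + 1])
        e_new = energy(states[i + 1], params[i]) + energy(states[i], params[i + 1])
        if e_new < e_old:                 # accept if the total energy decreases
            states[i], states[i + 1] = states[i + 1], states[i]

print(states)                             # each replica relaxed near its parameter
```

The point of the exchange moves is that a replica stuck in a metastable state at one parameter can be replaced by a better-converged state carried over from a neighboring parameter point.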
Implementing the SU(2) Symmetry for the DMRG
NASA Astrophysics Data System (ADS)
Alvarez, Gonzalo
2010-03-01
In the Density Matrix Renormalization Group (DMRG) algorithm (White, 1992), Hamiltonian symmetries play an important role. Using symmetries, the matrix representation of the Hamiltonian can be blocked. Diagonalizing each matrix block is more efficient than diagonalizing the original matrix. This talk will explain how the DMRG++ code (arXiv:0902.3185; Computer Physics Communications 180 (2009) 1572-1578) has been extended to handle the non-local SU(2) symmetry in a model independent way. Improvements in CPU times compared to runs with only local symmetries will be discussed for typical tight-binding models of strongly correlated electronic systems. The computational bottleneck of the algorithm and the use of shared memory parallelization will also be addressed. Finally, a roadmap for future work on DMRG++ will be presented.
Implementation of the SU(2) Hamiltonian Symmetry for the DMRG Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alvarez, Gonzalo
2012-01-01
In the Density Matrix Renormalization Group (DMRG) algorithm (White, 1992, 1993), Hamiltonian symmetries play an important role. Using symmetries, the matrix representation of the Hamiltonian can be blocked. Diagonalizing each matrix block is more efficient than diagonalizing the original matrix. This paper explains how the DMRG++ code (Alvarez, 2009) has been extended to handle the non-local SU(2) symmetry in a model independent way. Improvements in CPU times compared to runs with only local symmetries are discussed for the one-orbital Hubbard model, and for a two-orbital Hubbard model for iron-based superconductors. The computational bottleneck of the algorithm and the use of shared memory parallelization are also addressed.
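The payoff of symmetry blocking is easy to demonstrate for an abelian quantum number. A minimal numpy sketch (our toy hardcore-boson hopping chain with U(1) particle-number conservation, not the non-local SU(2) machinery of the paper): the full spectrum is recovered by diagonalizing one much smaller block per symmetry sector.

```python
import numpy as np

L, t = 6, 1.0

def hop_hamiltonian():
    """Dense H for hardcore bosons hopping on an open chain; conserves N."""
    dim = 2 ** L
    H = np.zeros((dim, dim))
    for s in range(dim):
        for i in range(L - 1):
            if ((s >> i) & 1) != ((s >> (i + 1)) & 1):  # occupied next to empty
                H[s ^ (3 << i), s] -= t                  # hop between i and i+1
    return H

H = hop_hamiltonian()
full_eigs = np.linalg.eigvalsh(H)

# Block the matrix by the conserved particle number and diagonalize each block
block_eigs = []
for N in range(L + 1):
    sector = [s for s in range(2 ** L) if bin(s).count('1') == N]
    block_eigs.extend(np.linalg.eigvalsh(H[np.ix_(sector, sector)]))

print(np.allclose(np.sort(block_eigs), full_eigs))   # True: same spectrum
```

Since dense diagonalization scales as the cube of the matrix dimension, working sector by sector is far cheaper than diagonalizing the full matrix once.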
Variational optimization algorithms for uniform matrix product states
NASA Astrophysics Data System (ADS)
Zauner-Stauber, V.; Vanderstraeten, L.; Fishman, M. T.; Verstraete, F.; Haegeman, J.
2018-01-01
We combine the density matrix renormalization group (DMRG) with matrix product state tangent space concepts to construct a variational algorithm for finding ground states of one-dimensional quantum lattices in the thermodynamic limit. A careful comparison of this variational uniform matrix product state algorithm (VUMPS) with infinite density matrix renormalization group (IDMRG) and with infinite time evolving block decimation (ITEBD) reveals substantial gains in convergence speed and precision. We also demonstrate that VUMPS works very efficiently for Hamiltonians with long-range interactions and also for the simulation of two-dimensional models on infinite cylinders. The new algorithm can be conveniently implemented as an extension of an already existing DMRG implementation.
Obtaining highly excited eigenstates of the localized XX chain via DMRG-X
NASA Astrophysics Data System (ADS)
Devakul, Trithep; Khemani, Vedika; Pollmann, Frank; Huse, David A.; Sondhi, S. L.
2017-10-01
We benchmark a variant of the recently introduced density matrix renormalization group (DMRG)-X algorithm against exact results for the localized random field XX chain. We find that the eigenstates obtained via DMRG-X exhibit a highly accurate l-bit description for system sizes far beyond those accessible to direct many-body exact diagonalization in the spin variables. We take advantage of the underlying free fermion description of the XX model to accurately test the strengths and limitations of this algorithm for large system sizes. We discuss the theoretical constraints on the performance of the algorithm from the entanglement properties of the eigenstates, and its actual performance at different values of disorder. A small but significant improvement to the algorithm is also presented, which helps significantly with convergence. We find that, at high entanglement, DMRG-X shows a bias towards eigenstates with low entanglement, but this can be mitigated with increased bond dimension. This result suggests that one must be careful when applying the algorithm to interacting many-body localized spin models near a transition. This article is part of the themed issue 'Breakdown of ergodicity in quantum systems: from solids to synthetic matter'.
The density-matrix renormalization group: a short introduction.
Schollwöck, Ulrich
2011-07-13
The density-matrix renormalization group (DMRG) method has established itself over the last decade as the leading method for the simulation of the statics and dynamics of one-dimensional strongly correlated quantum lattice systems. The DMRG is a method that shares features of a renormalization group procedure (which here generates a flow in the space of reduced density operators) and of a variational method that operates on a highly interesting class of quantum states, so-called matrix product states (MPSs). The DMRG method is presented here entirely in the MPS language. While the DMRG generally fails in larger two-dimensional systems, the MPS picture suggests a straightforward generalization to higher dimensions in the framework of tensor network states. The resulting algorithms, however, suffer from difficulties absent in one dimension, apart from a much more unfavourable efficiency, such that their ultimate success remains far from clear at the moment.
Matrix-Product-State Algorithm for Finite Fractional Quantum Hall Systems
NASA Astrophysics Data System (ADS)
Liu, Zhao; Bhatt, R. N.
2015-09-01
Exact diagonalization is a powerful tool to study fractional quantum Hall (FQH) systems. However, its capability is limited by the exponentially increasing computational cost. In order to overcome this difficulty, density-matrix-renormalization-group (DMRG) algorithms were developed for much larger system sizes. Very recently, it was realized that some model FQH states have an exact matrix-product-state (MPS) representation. Motivated by this, here we report an MPS code, which is closely related to, but different from, the traditional DMRG language, for finite FQH systems on the cylinder geometry. By representing the many-body Hamiltonian as a matrix-product-operator (MPO) and using single-site updates and density matrix correction, we show that our code can efficiently search for the ground states of various FQH systems. We also compare the performance of our code with that of traditional DMRG. The possible generalization of our code to infinite FQH systems and other physical systems is also discussed.
Unifying time evolution and optimization with matrix product states
NASA Astrophysics Data System (ADS)
Haegeman, Jutho; Lubich, Christian; Oseledets, Ivan; Vandereycken, Bart; Verstraete, Frank
2016-10-01
We show that the time-dependent variational principle provides a unifying framework for time-evolution methods and optimization methods in the context of matrix product states. In particular, we introduce a new integration scheme for studying time evolution, which can cope with arbitrary Hamiltonians, including those with long-range interactions. Rather than a Suzuki-Trotter splitting of the Hamiltonian, which is the idea behind the adaptive time-dependent density matrix renormalization group method or time-evolving block decimation, our method is based on splitting the projector onto the matrix product state tangent space as it appears in the Dirac-Frenkel time-dependent variational principle. We discuss how the resulting algorithm resembles the density matrix renormalization group (DMRG) algorithm for finding ground states so closely that it can be implemented by changing just a few lines of code and it inherits the same stability and efficiency. In particular, our method is compatible with any Hamiltonian for which ground-state DMRG can be implemented efficiently. In fact, DMRG is obtained as a special case of our scheme for imaginary time evolution with infinite time step.
Symmetry-conserving purification of quantum states within the density matrix renormalization group
Nocera, Alberto; Alvarez, Gonzalo
2016-01-28
The density matrix renormalization group (DMRG) algorithm was originally designed to efficiently compute the zero-temperature or ground-state properties of one-dimensional strongly correlated quantum systems. The development of the algorithm at finite temperature has been a topic of much interest, because of the usefulness of thermodynamic quantities in understanding the physics of condensed matter systems, and because of the increased complexity associated with efficiently computing temperature-dependent properties. The ancilla method is a DMRG technique that enables the computation of these thermodynamic quantities. In this paper, we review the ancilla method, and improve its performance by working on reduced Hilbert spaces and using canonical approaches. Furthermore we explore its applicability beyond spin systems to t-J and Hubbard models.
A state interaction spin-orbit coupling density matrix renormalization group method
NASA Astrophysics Data System (ADS)
Sayfutyarova, Elvira R.; Chan, Garnet Kin-Lic
2016-06-01
We describe a state interaction spin-orbit (SISO) coupling method using density matrix renormalization group (DMRG) wavefunctions and the spin-orbit mean-field (SOMF) operator. We implement our DMRG-SISO scheme using a spin-adapted algorithm that computes transition density matrices between arbitrary matrix product states. To demonstrate the potential of the DMRG-SISO scheme we present accurate benchmark calculations for the zero-field splitting of the copper and gold atoms, comparing to earlier complete active space self-consistent-field and second-order complete active space perturbation theory results in the same basis. We also compute the effects of spin-orbit coupling on the spin-ladder of the iron-sulfur dimer complex [Fe2S2(SCH3)4]3-, determining the splitting of the lowest quartet and sextet states. We find that the magnitude of the zero-field splitting for the higher quartet and sextet states approaches a significant fraction of the Heisenberg exchange parameter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roemelt, Michael, E-mail: michael.roemelt@theochem.rub.de
Spin Orbit Coupling (SOC) is introduced to molecular ab initio density matrix renormalization group (DMRG) calculations. In the presented scheme, one first approximates the electronic ground state and a number of excited states of the Born-Oppenheimer (BO) Hamiltonian with the aid of the DMRG algorithm. Owing to the spin-adaptation of the algorithm, the total spin S is a good quantum number for these states. After the non-relativistic DMRG calculation is finished, all magnetic sublevels of the calculated states are constructed explicitly, and the SOC operator is expanded in the resulting basis. To this end, spin orbit coupled energies and wavefunctions are obtained as eigenvalues and eigenfunctions of the full Hamiltonian matrix which is composed of the SOC operator matrix and the BO Hamiltonian matrix. This treatment corresponds to a quasi-degenerate perturbation theory approach and can be regarded as the molecular equivalent to atomic Russell-Saunders coupling. For the evaluation of SOC matrix elements, the full Breit-Pauli SOC Hamiltonian is approximated by the widely used spin-orbit mean field operator. This operator allows for an efficient use of the second quantized triplet replacement operators that are readily generated during the non-relativistic DMRG algorithm, together with the Wigner-Eckart theorem. With a set of spin-orbit coupled wavefunctions at hand, the molecular g-tensors are calculated following the scheme proposed by Gerloch and McMeeking. It interprets the effective molecular g-values as the slope of the energy difference between the lowest Kramers pair with respect to the strength of the applied magnetic field. Test calculations on a chemically relevant Mo complex demonstrate the capabilities of the presented method.
NASA Astrophysics Data System (ADS)
Nemes, Csaba; Barcza, Gergely; Nagy, Zoltán; Legeza, Örs; Szolgay, Péter
2014-06-01
In the numerical analysis of strongly correlated quantum lattice models one of the leading algorithms developed to balance the size of the effective Hilbert space and the accuracy of the simulation is the density matrix renormalization group (DMRG) algorithm, in which the run-time is dominated by the iterative diagonalization of the Hamiltonian operator. As the most time-consuming step of the diagonalization can be expressed as a list of dense matrix operations, the DMRG is an appealing candidate to fully utilize the computing power residing in novel kilo-processor architectures. In this paper a smart hybrid CPU-GPU implementation is presented, which exploits the power of both the CPU and the GPU and tolerates problems exceeding the GPU memory size. Furthermore, a new CUDA kernel has been designed for asymmetric matrix-vector multiplication to accelerate the rest of the diagonalization. Besides the evaluation of the GPU implementation, the practical limits of an FPGA implementation are also discussed.
An efficient matrix product operator representation of the quantum chemical Hamiltonian
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, Sebastian, E-mail: sebastian.keller@phys.chem.ethz.ch; Reiher, Markus, E-mail: markus.reiher@phys.chem.ethz.ch; Dolfi, Michele, E-mail: dolfim@phys.ethz.ch
We describe how to efficiently construct the quantum chemical Hamiltonian operator in matrix product form. We present its implementation as a density matrix renormalization group (DMRG) algorithm for quantum chemical applications. Existing implementations of DMRG for quantum chemistry are based on the traditional formulation of the method, which was developed from the point of view of Hilbert space decimation and attained higher performance compared to straightforward implementations of matrix product based DMRG. The latter variationally optimizes a class of ansatz states known as matrix product states, where operators are correspondingly represented as matrix product operators (MPOs). The MPO construction scheme presented here eliminates the previous performance disadvantages while retaining the additional flexibility provided by a matrix product approach; for example, the specification of expectation values becomes an input parameter. In this way, MPOs for different symmetries — abelian and non-abelian — and different relativistic and non-relativistic models may be solved by an otherwise unmodified program.
Extending the range of real time density matrix renormalization group simulations
NASA Astrophysics Data System (ADS)
Kennes, D. M.; Karrasch, C.
2016-03-01
We discuss a few simple modifications to time-dependent density matrix renormalization group (DMRG) algorithms which allow one to access larger time scales. We specifically aim at beginners and present practical aspects of how to implement these modifications within any standard matrix product state (MPS) based formulation of the method. Most importantly, we show how to 'combine' the Schrödinger and Heisenberg time evolutions of arbitrary pure states |ψ⟩ and operators A in the evaluation of ⟨A⟩ψ(t) = ⟨ψ|A(t)|ψ⟩. This includes quantum quenches. The generalization to (non-)thermal mixed state dynamics ⟨A⟩ρ(t) = Tr[ρA(t)] induced by an initial density matrix ρ is straightforward. In the context of linear response (ground state or finite temperature T > 0) correlation functions, one can extend the simulation time by a factor of two by 'exploiting time translation invariance', which is efficiently implementable within MPS DMRG. We present a simple analytic argument for why a recently introduced disentangler succeeds in reducing the effort of time-dependent simulations at T > 0. Finally, we advocate the python programming language as an elegant option for beginners to set up a DMRG code.
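The factor-of-two trick rests on time translation invariance for eigenstates: ⟨ψ|A(t)B(0)|ψ⟩ = ⟨ψ|A(t/2)B(−t/2)|ψ⟩, so each side of the correlator only needs to be evolved half as long. A dense exact-diagonalization toy check in numpy (our example model; an MPS code would evolve the bra and ket tensors instead):

```python
import numpy as np

# For an eigenstate |0>:  <0|A(t)B(0)|0> = <0|A(t/2)B(-t/2)|0>
L = 6
sz = np.diag([0.5, -0.5])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])
sm = sp.T

def op(o, i):
    m = np.array([[1.0]])
    for j in range(L):
        m = np.kron(m, o if j == i else np.eye(2))
    return m

H = sum(op(sz, i) @ op(sz, i + 1)
        + 0.5 * (op(sp, i) @ op(sm, i + 1) + op(sm, i) @ op(sp, i + 1))
        for i in range(L - 1))                     # Heisenberg chain
E, V = np.linalg.eigh(H)
U = lambda t: V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T
heis = lambda O, t: U(t).conj().T @ O @ U(t)       # Heisenberg picture O(t)

psi0 = V[:, 0]                                     # ground state (an eigenstate)
A, B, t = op(sz, 1), op(sz, 4), 3.0
c_direct = psi0 @ heis(A, t) @ B @ psi0            # evolution over the full time t
c_split = psi0 @ heis(A, t / 2) @ heis(B, -t / 2) @ psi0   # only t/2 on each side
print(np.allclose(c_direct, c_split))              # True
```

In an MPS simulation the gain is real because the entanglement (and thus the bond dimension) grows with the evolved time, so halving the evolution time on each side roughly doubles the reachable time scale.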
The time-dependent density matrix renormalisation group method
NASA Astrophysics Data System (ADS)
Ma, Haibo; Luo, Zhen; Yao, Yao
2018-04-01
Substantial progress of the time-dependent density matrix renormalisation group (t-DMRG) method over the past 15 years is reviewed in this paper. By integrating the time evolution with the sweep procedures in the density matrix renormalisation group (DMRG), t-DMRG provides an efficient tool for real-time simulations of quantum dynamics for one-dimensional (1D) or quasi-1D strongly correlated systems with a large number of degrees of freedom. In the illustrative applications, the t-DMRG approach is applied to investigate the nonadiabatic processes in realistic chemical systems, including exciton dissociation and triplet fission in polymers and molecular aggregates as well as internal conversion in the pyrazine molecule.
NASA Astrophysics Data System (ADS)
Roberts, Brenden; Vidick, Thomas; Motrunich, Olexei I.
2017-12-01
The success of polynomial-time tensor network methods for computing ground states of certain quantum local Hamiltonians has recently been given a sound theoretical basis by Arad et al. [Commun. Math. Phys. 356, 65 (2017), 10.1007/s00220-017-2973-z]. The convergence proof, however, relies on "rigorous renormalization group" (RRG) techniques which differ fundamentally from existing algorithms. We introduce a practical adaptation of the RRG procedure which, while no longer theoretically guaranteed to converge, finds matrix product state ansatz approximations to the ground spaces and low-lying excited spectra of local Hamiltonians in realistic situations. In contrast to other schemes, RRG does not utilize variational methods on tensor networks. Rather, it operates on subsets of the system Hilbert space by constructing approximations to the global ground space in a treelike manner. We evaluate the algorithm numerically, finding similar performance to the density matrix renormalization group (DMRG) in the case of a gapped nondegenerate Hamiltonian. Even in challenging situations of criticality, large ground-state degeneracy, or long-range entanglement, RRG remains able to identify candidate states having large overlap with ground and low-energy eigenstates, outperforming DMRG in some cases.
Lanczos algorithm with matrix product states for dynamical correlation functions
NASA Astrophysics Data System (ADS)
Dargel, P. E.; Wöllert, A.; Honecker, A.; McCulloch, I. P.; Schollwöck, U.; Pruschke, T.
2012-05-01
The density-matrix renormalization group (DMRG) algorithm can be adapted to the calculation of dynamical correlation functions in various ways which all represent compromises between computational efficiency and physical accuracy. In this paper we reconsider the oldest approach based on a suitable Lanczos-generated approximate basis and implement it using matrix product states (MPS) for the representation of the basis states. The direct use of matrix product states combined with an ex post reorthogonalization method allows us to avoid several shortcomings of the original approach, namely the multitargeting and the approximate representation of the Hamiltonian inherent in earlier Lanczos-method implementations in the DMRG framework, and to deal with the ghost problem of Lanczos methods, leading to a much better convergence of the spectral weights and poles. We present results for the dynamic spin structure factor of the spin-1/2 antiferromagnetic Heisenberg chain. A comparison to Bethe ansatz results in the thermodynamic limit reveals that the MPS-based Lanczos approach is much more accurate than earlier approaches at minor additional numerical cost.
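The basic machinery — Lanczos tridiagonalization with explicit reorthogonalization to suppress ghost states, then reading off spectral poles and weights from the tridiagonal matrix — fits in a few lines of dense numpy. A sketch under stated assumptions (random stand-in Hamiltonian and starting vector; the paper instead represents each Lanczos basis vector as an MPS):

```python
import numpy as np

def lanczos_spectrum(H, phi, m=40):
    """Lanczos with full reorthogonalization: poles and weights of the
    spectral density of the starting vector phi with respect to H."""
    norm0 = np.linalg.norm(phi)
    n = len(phi)
    Q = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    Q[:, 0] = phi / norm0
    for k in range(m):
        w = H @ Q[:, k]
        alpha[k] = Q[:, k] @ w
        # explicit reorthogonalization against all previous vectors, applied
        # twice, to suppress the "ghost" eigenvalues of naive Lanczos
        for _ in range(2):
            w -= Q[:, :k + 1] @ (Q[:, :k + 1].T @ w)
        if k < m - 1:
            beta[k] = np.linalg.norm(w)
            Q[:, k + 1] = w / beta[k]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    theta, S = np.linalg.eigh(T)
    return theta, norm0 ** 2 * S[0, :] ** 2        # poles, weights

rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200))
H = (M + M.T) / 2                  # stand-in for a sparse many-body Hamiltonian
phi = rng.standard_normal(200)     # stand-in for A|psi_0>, e.g. S^z_q |0>
poles, weights = lanczos_spectrum(H, phi)
print(np.isclose(weights.sum(), phi @ phi))   # sum rule: total weight = <phi|phi>
```

The printed sum rule check reflects the convergence criterion discussed in the abstract: well-converged weights must exhaust the total spectral weight ⟨φ|φ⟩.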
Construction of CASCI-type wave functions for very large active spaces.
Boguslawski, Katharina; Marti, Konrad H; Reiher, Markus
2011-06-14
We present a procedure to construct a configuration-interaction expansion containing arbitrary excitations from an underlying full-configuration-interaction-type wave function defined for a very large active space. Our procedure is based on the density-matrix renormalization group (DMRG) algorithm that provides the necessary information in terms of the eigenstates of the reduced density matrices to calculate the coefficient of any basis state in the many-particle Hilbert space. Since the dimension of the Hilbert space scales binomially with the size of the active space, a sophisticated Monte Carlo sampling routine is employed. This sampling algorithm can also construct such configuration-interaction-type wave functions from any other type of tensor network states. The configuration-interaction information obtained serves several purposes. It yields a qualitatively correct description of the molecule's electronic structure, it allows us to analyze DMRG wave functions converged for the same molecular system but with different parameter sets (e.g., different numbers of active-system (block) states), and it can be considered a balanced reference for the application of a subsequent standard multi-reference configuration-interaction method.
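The key primitive the sampling relies on is cheap: for an MPS (or a DMRG wave function brought to that form), the CI coefficient of any single occupation-number configuration is an ordered product of small matrices, at cost O(L·χ²) per coefficient. A hedged numpy sketch with invented names:

```python
import numpy as np

def mps_amplitude(mps, config):
    """Coefficient <n1 n2 ... nL | psi> of one occupation-number basis state,
    obtained as an ordered product of the MPS matrices selected by config."""
    M = np.ones((1, 1))
    for A, n in zip(mps, config):       # A has shape (a_left, d, a_right)
        M = M @ A[:, n, :]
    return M[0, 0]

rng = np.random.default_rng(0)
L, chi = 8, 6
dims = [1] + [chi] * (L - 1) + [1]
mps = [rng.standard_normal((dims[i], 2, dims[i + 1])) for i in range(L)]
print(mps_amplitude(mps, [1, 0, 1, 1, 0, 0, 1, 0]))
```

Because each coefficient is this cheap, a Monte Carlo walk over configurations can reconstruct the dominant part of the CI expansion without ever enumerating the binomially large Hilbert space.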
NASA Astrophysics Data System (ADS)
Karrasch, C.; Hauschild, J.; Langer, S.; Heidrich-Meisner, F.
2013-06-01
We revisit the problem of the spin Drude weight D of the integrable spin-1/2 XXZ chain using two complementary approaches, exact diagonalization (ED) and the time-dependent density-matrix renormalization group (tDMRG). We pursue two main goals. First, we present extensive results for the temperature dependence of D. By exploiting time translation invariance within tDMRG, one can extract D for significantly lower temperatures than in previous tDMRG studies. Second, we discuss the numerical quality of the tDMRG data and elaborate on details of the finite-size scaling of the ED results, comparing calculations carried out in the canonical and grand-canonical ensembles. Furthermore, we analyze the behavior of the Drude weight as the point with SU(2)-symmetric exchange is approached and discuss the relative contribution of the Drude weight to the sum rule as a function of temperature.
Kurashige, Yuki; Yanai, Takeshi
2011-09-07
We present a second-order perturbation theory based on a density matrix renormalization group self-consistent field (DMRG-SCF) reference function. The method reproduces the solution of the complete active space with second-order perturbation theory (CASPT2) when the DMRG reference function is represented by a sufficiently large number of renormalized many-body basis states, and is hence named the DMRG-CASPT2 method. The DMRG-SCF is able to describe non-dynamical correlation with a large active space that is insurmountable for the conventional CASSCF method, while the second-order perturbation theory provides an efficient description of dynamical correlation effects. The capability of our implementation is demonstrated for an application to the potential energy curve of the chromium dimer, which is one of the most demanding multireference systems, requiring the best electronic structure treatment of non-dynamical and dynamical correlation as well as large basis sets. The DMRG-CASPT2/cc-pwCV5Z calculations were performed with a large (3d double-shell) active space consisting of 28 orbitals. Our approach using the large DMRG reference addressed the questions of why the dissociation energy is largely overestimated by CASPT2 with the small active space consisting of 12 orbitals (3d4s), and why it is oversensitive to the choice of the zeroth-order Hamiltonian. © 2011 American Institute of Physics
NASA Astrophysics Data System (ADS)
Prodhan, Suryoday; Ramasesha, S.
2018-05-01
The symmetry adapted density matrix renormalization group (SDMRG) technique has been an efficient method for studying low-lying eigenstates in one- and quasi-one-dimensional electronic systems. However, the SDMRG method had bottlenecks involving the construction of linearly independent symmetry adapted basis states as the symmetry matrices in the DMRG basis were not sparse. We have developed a modified algorithm to overcome this bottleneck. The new method incorporates end-to-end interchange symmetry (C2) , electron-hole symmetry (J ) , and parity or spin-flip symmetry (P ) in these calculations. The one-to-one correspondence between direct-product basis states in the DMRG Hilbert space for these symmetry operations renders the symmetry matrices in the new basis with maximum sparseness, just one nonzero matrix element per row. Using methods similar to those employed in the exact diagonalization technique for Pariser-Parr-Pople (PPP) models, developed in the 1980s, it is possible to construct orthogonal SDMRG basis states while bypassing the slow step of the Gram-Schmidt orthonormalization procedure. The method together with the PPP model which incorporates long-range electronic correlations is employed to study the correlated excited-state spectra of 1,12-benzoperylene and a narrow mixed graphene nanoribbon with a chrysene molecule as the building unit, comprising both zigzag and cove-edge structures.
Roemelt, Michael; Krewald, Vera; Pantazis, Dimitrios A
2018-01-09
The accurate description of magnetic level energetics in oligonuclear exchange-coupled transition-metal complexes remains a formidable challenge for quantum chemistry. The density matrix renormalization group (DMRG) brings such systems for the first time easily within reach of multireference wave function methods by enabling the use of unprecedentedly large active spaces. But does this guarantee systematic improvement in predictive ability and, if so, under which conditions? We identify operational parameters in the use of DMRG using as a test system an experimentally characterized mixed-valence bis-μ-oxo/μ-acetato Mn(III,IV) dimer, a model for the oxygen-evolving complex of photosystem II. A complete active space of all metal 3d and bridge 2p orbitals proved to be the smallest meaningful starting point; this is readily accessible with DMRG and greatly improves on the unrealistic metal-only configuration interaction or complete active space self-consistent field (CASSCF) values. Orbital optimization is critical for stabilizing the antiferromagnetic state, while a state-averaged approach over all spin states involved is required to avoid artificial deviations from isotropic behavior that are associated with state-specific calculations. Selective inclusion of localized orbital subspaces enables probing the relative contributions of different ligands and distinct superexchange pathways. Overall, however, full-valence DMRG-CASSCF calculations fall short of providing a quantitative description of the exchange coupling owing to insufficient recovery of dynamic correlation. Quantitatively accurate results can be achieved through a DMRG implementation of second order N-electron valence perturbation theory (NEVPT2) in conjunction with a full-valence metal and ligand active space. Perspectives for future applications of DMRG-CASSCF/NEVPT2 to exchange coupling in oligonuclear clusters are discussed.
Veis, Libor; Antalík, Andrej; Brabec, Jiří; Neese, Frank; Legeza, Örs; Pittner, Jiří
2016-10-03
In the past decade, the quantum chemical version of the density matrix renormalization group (DMRG) method has established itself as the method of choice for calculations of strongly correlated molecular systems. Despite its favorable scaling, it is in practice not suitable for computations of dynamic correlation. We present a novel method for accurate "post-DMRG" treatment of dynamic correlation based on the tailored coupled cluster (CC) theory in which the DMRG method is responsible for the proper description of nondynamic correlation, whereas dynamic correlation is incorporated through the framework of the CC theory. We illustrate the potential of this method on prominent multireference systems, in particular, N2 and Cr2 molecules and also oxo-Mn(Salen), for which we have performed the first post-DMRG computations in order to shed light on the energy ordering of the lowest spin states.
Nocera, Alberto; Wang, Yan; Patel, Niravkumar D.; ...
2018-05-31
Here, we study the magnetic and charge dynamical response of a Hubbard model in a two-leg ladder geometry using the density matrix renormalization group (DMRG) method and the random phase approximation within the fluctuation-exchange approximation (FLEX). Our calculations reveal that FLEX can capture the main features of the magnetic response from weak up to intermediate Hubbard repulsion for doped ladders, when compared with the numerically exact DMRG results. However, while at weak Hubbard repulsion both the spin and charge spectra can be understood in terms of weakly interacting electron-hole excitations across the Fermi surface, at intermediate coupling DMRG shows gapped spin excitations at large momentum transfer that remain gapless within the FLEX approximation. For the charge response, FLEX can only reproduce the main features of the DMRG spectra at weak coupling and high doping levels, while it shows an incoherent character away from this limit. Overall, our analysis shows that FLEX works surprisingly well for spin excitations at weak and intermediate Hubbard U values even in a difficult low-dimensional geometry such as a two-leg ladder. Finally, we discuss the implications of our results for neutron scattering and resonant inelastic x-ray scattering experiments on two-leg ladder cuprate compounds.
Improving the efficiency of the Finite Temperature Density Matrix Renormalization Group method
NASA Astrophysics Data System (ADS)
Nocera, Alberto; Alvarez, Gonzalo
I review the basics of the finite temperature DMRG method, and then show how its efficiency can be improved by working on reduced Hilbert spaces and by using canonical approaches. My talk explains the applicability of the ancilla DMRG method beyond spin systems to t-J and Hubbard models, and addresses the computation of static and dynamical observables at finite temperature. Finally, I discuss the features of and roadmap for our DMRG++ codebase. Work done at CNMS, sponsored by the SUF Division, BES, U.S. DOE under contract with UT-Battelle. Support by the early career research program, DSUF, BES, DOE.
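The core of the ancilla (purification) approach is compact enough to verify densely: entangle the system with a copy of itself, evolve in imaginary time to β/2, and measure on the system factor alone. A numpy/scipy toy check (our stand-in Hamiltonian and observable; a DMRG code would do the same with MPS imaginary-time evolution):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
d = 12                                   # dimension of the toy "system"
M = rng.standard_normal((d, d))
H = (M + M.T) / 2
A = np.diag(rng.standard_normal(d))      # some observable
beta = 1.3

# Maximally entangled system-ancilla state = infinite-temperature purification
phi = np.eye(d).reshape(d * d) / np.sqrt(d)
# Imaginary-time evolution to beta/2 acts on the system factor only
psi = np.kron(expm(-beta * H / 2), np.eye(d)) @ phi
psi /= np.linalg.norm(psi)

thermal = np.trace(expm(-beta * H) @ A) / np.trace(expm(-beta * H))
ancilla = psi @ np.kron(A, np.eye(d)) @ psi
print(np.allclose(thermal, ancilla))     # True: purification reproduces Tr(rho A)
```

The reduced-Hilbert-space and canonical improvements discussed in the talk amount to restricting this doubled system to the symmetry sectors that actually contribute, which is where the efficiency gain comes from.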
Two-leg ladder systems with dipole–dipole Fermion interactions
NASA Astrophysics Data System (ADS)
Mosadeq, Hamid; Asgari, Reza
2018-05-01
The ground-state phase diagram of a two-leg fermionic dipolar ladder with inter-site interactions is studied using density matrix renormalization group (DMRG) techniques. We use a state-of-the-art implementation of the DMRG algorithm and finite size scaling to simulate large system sizes with high accuracy. We also consider two different model systems and explore the stable phases at half and quarter filling. We find that at half filling, charge and spin gaps emerge at finite values of the dipole–dipole and on-site interactions. In the quarter filling case, an s-wave superconducting state, a charge density wave, a homogeneous insulating phase, and phase separation occur, depending on the interaction values. Moreover, with dipole–dipole interactions, the D-Mott phase emerges when the hopping terms along the chain and rung are the same, whereas this phase had previously been proposed only for the anisotropic Hubbard model. At half filling, on the other hand, either a charge-density-wave or a charged Mott ordered phase appears, depending on the orientation of the dipole moments of the particles with respect to the ladder geometry.
Ground states of linear rotor chains via the density matrix renormalization group
NASA Astrophysics Data System (ADS)
Iouchtchenko, Dmitri; Roy, Pierre-Nicholas
2018-04-01
In recent years, experimental techniques have enabled the creation of ultracold optical lattices of molecules and endofullerene peapod nanomolecular assemblies. It was previously suggested that the rotor model resulting from the placement of dipolar linear rotors in one-dimensional lattices at low temperature has a transition between ordered and disordered phases. We use the density matrix renormalization group (DMRG) to compute ground states of chains of up to 100 rotors and provide further evidence of the phase transition in the form of a diverging entanglement entropy. We also propose two methods and present some first steps toward rotational spectra of such molecular assemblies using DMRG. The present work showcases the power of DMRG in this new context of interacting molecular rotors and opens the door to the study of fundamental questions regarding criticality in systems with continuous degrees of freedom.
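The diagnostic used here — a diverging half-chain entanglement entropy — is straightforward to compute from any pure state via an SVD across the bipartition. A dense numpy toy (our random state on a truncated-rotor-like chain; in DMRG the same entropy comes directly from the kept singular values at the bond):

```python
import numpy as np

def entanglement_entropy(psi, dims, cut):
    """Von Neumann entropy of a pure state across a left/right bipartition."""
    dl = int(np.prod(dims[:cut]))
    dr = int(np.prod(dims[cut:]))
    s = np.linalg.svd(psi.reshape(dl, dr), compute_uv=False)
    p = s ** 2
    p = p[p > 1e-14]                  # drop numerically zero Schmidt weights
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(3)
dims = [3] * 8                        # 8 rotors truncated to 3 basis states each
psi = rng.standard_normal(int(np.prod(dims)))
psi /= np.linalg.norm(psi)
print(entanglement_entropy(psi, dims, 4))   # half-chain entropy
```

Tracking how this half-chain entropy grows with system size near the putative critical point is the evidence for the ordered-disordered transition cited in the abstract.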
Saitow, Masaaki; Kurashige, Yuki; Yanai, Takeshi
2013-07-28
We report development of the multireference configuration interaction (MRCI) method that can use active spaces much larger than has previously been possible. The recent development of the density matrix renormalization group (DMRG) method in multireference quantum chemistry offers the ability to describe static correlation in a large active space. The present MRCI method provides a critical correction to the DMRG reference by including high-level dynamic correlation through the CI treatment. When the DMRG and MRCI theories are combined (DMRG-MRCI), the full internal contraction of the reference in the MRCI ansatz, including contraction of semi-internal states, plays a central role. However, it is thought to involve formidable complexity because of the presence of the five-particle rank reduced-density matrix (RDM) in the Hamiltonian matrix elements. To address this complexity, we express the Hamiltonian matrix using commutators, which allows the five-particle rank RDM to be canceled out without any approximation. Then we introduce an approximation to the four-particle rank RDM by using a cumulant reconstruction from lower-particle rank RDMs. A computer-aided approach is employed to derive the exceedingly complex equations of the MRCI in tensor-contracted form and to implement them into an efficient parallel computer code. This approach extends to the size-consistency-corrected variants of MRCI, such as the MRCI+Q, MR-ACPF, and MR-AQCC methods. We demonstrate the capability of the DMRG-MRCI method in several benchmark applications, including the evaluation of the singlet-triplet gap of free-base porphyrin using 24 active orbitals.
Yao, Yao; Sun, Ke-Wei; Luo, Zhen; Ma, Haibo
2018-01-18
The accurate theoretical interpretation of ultrafast time-resolved spectroscopy experiments relies on full quantum dynamics simulations for the investigated system, which are nevertheless computationally prohibitive for realistic molecular systems with a large number of electronic and/or vibrational degrees of freedom. In this work, we propose a unitary transformation approach for realistic vibronic Hamiltonians, which can then be treated with the adaptive time-dependent density matrix renormalization group (t-DMRG) method to efficiently evolve the nonadiabatic dynamics of a large molecular system. We demonstrate the accuracy and efficiency of this approach with an example of simulating the exciton dissociation process within an oligothiophene/fullerene heterojunction, indicating that t-DMRG can be a promising method for full quantum dynamics simulation in large chemical systems. Moreover, it is also shown that the proper vibronic features in the ultrafast electronic process can be obtained by simulating the two-dimensional (2D) electronic spectrum by virtue of the high computational efficiency of the t-DMRG method.
Generic construction of efficient matrix product operators
NASA Astrophysics Data System (ADS)
Hubig, C.; McCulloch, I. P.; Schollwöck, U.
2017-01-01
Matrix product operators (MPOs) are at the heart of the second-generation density matrix renormalization group (DMRG) algorithm formulated in matrix product state language. We first summarize the widely known facts on MPO arithmetic and representations of single-site operators. Second, we introduce three compression methods (rescaled SVD, deparallelization, and delinearization) for MPOs and show that it is possible to construct efficient representations of arbitrary operators using MPO arithmetic and compression. As examples, we construct powers of a short-ranged spin-chain Hamiltonian, a complicated Hamiltonian of a two-dimensional system and, as proof of principle, the long-range four-body Hamiltonian from quantum chemistry.
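Of the three compression primitives named here, deparallelization is the easiest to isolate: columns (or rows) of the matricized MPO tensor that are scalar multiples of one another are merged, with the scalars moved into a transfer matrix on the bond. A small numpy sketch of the generic linear-algebra kernel (our simplification, not the paper's production routine):

```python
import numpy as np

def deparallelize_columns(A, tol=1e-12):
    """Split A = K @ T where K keeps one representative of each set of
    mutually parallel columns of A (zero columns are dropped)."""
    kept = []                        # indices of representative columns
    T = np.zeros((0, A.shape[1]))
    for j in range(A.shape[1]):
        col = A[:, j]
        if np.linalg.norm(col) < tol:
            continue                 # zero column: its row of T stays zero
        for r, i in enumerate(kept):
            ref = A[:, i]
            scale = (ref @ col) / (ref @ ref)
            if np.linalg.norm(col - scale * ref) < tol:
                T[r, j] = scale      # col is parallel to an earlier column
                break
        else:
            kept.append(j)           # genuinely new direction
            T = np.vstack([T, np.zeros((1, A.shape[1]))])
            T[-1, j] = 1.0
    return A[:, kept], T

A = np.array([[1.0, 2.0, 3.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [2.0, 4.0, 1.0, 0.0]])    # column 1 = 2 x column 0; column 3 = 0
K, T = deparallelize_columns(A)
print(K.shape, np.allclose(K @ T, A))   # (3, 2) True
```

Applied alternately from the left and right across every MPO bond, this reduces the bond dimension without any loss of accuracy, unlike an SVD truncation.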
The ab-initio density matrix renormalization group in practice.
Olivares-Amaya, Roberto; Hu, Weifeng; Nakatani, Naoki; Sharma, Sandeep; Yang, Jun; Chan, Garnet Kin-Lic
2015-01-21
The ab-initio density matrix renormalization group (DMRG) is a tool that can be applied to a wide variety of interesting problems in quantum chemistry. Here, we examine the density matrix renormalization group from the vantage point of the quantum chemistry user. What kinds of problems is the DMRG well-suited to? What are the largest systems that can be treated at practical cost? What sort of accuracies can be obtained, and how do we reason about the computational difficulty in different molecules? By examining a diverse benchmark set of molecules: π-electron systems, benchmark main-group and transition metal dimers, and the Mn-oxo-salen and Fe-porphine organometallic compounds, we provide some answers to these questions, and show how the density matrix renormalization group is used in practice.
NASA Astrophysics Data System (ADS)
Kawakami, Takashi; Sano, Shinsuke; Saito, Toru; Sharma, Sandeep; Shoji, Mitsuo; Yamada, Satoru; Takano, Yu; Yamanaka, Shusuke; Okumura, Mitsutaka; Nakajima, Takahito; Yamaguchi, Kizashi
2017-09-01
Theoretical examinations of the ferromagnetic coupling in the m-phenylene-bis-methylene molecule and its oligomer were carried out. These systems are good candidates for exchange-coupled systems in which to investigate strong electronic correlations. We studied the effective exchange integrals (J), which indicate the magnetic coupling between interacting spins in these species. First, theoretical calculations based on a broken-symmetry single-reference (SR) procedure, i.e. the UHF, UMP2, UMP4, UCCSD(T) and UB3LYP methods, were carried out with a GAUSSIAN program code using an SR wave function. From these results, the J value by the UHF method was largely positive because of the strong ferromagnetic spin polarisation effect. The UCCSD(T) and UB3LYP methods corrected this overestimation by accounting for the dynamical electronic correlation. Next, magnetic coupling among these spins was studied using CAS-based symmetry-adapted multireference methods. The UNO DMRG CASCI (UNO, unrestricted natural orbital; DMRG, density matrix renormalisation group; CASCI, complete active space configuration interaction) method was mainly employed with a combination of the ORCA and BLOCK program codes. DMRG CASCI calculations including all valence electrons and orbitals, up to full valence CI, provided the most reliable results, and support the UB3LYP results for extended systems.
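For reference, broken-symmetry studies of this kind commonly extract J with the Yamaguchi spin-projection formula; a standard form is shown below (our addition for context — sign and factor-of-two conventions vary with the choice of Heisenberg Hamiltonian, here taken as Ĥ = −2J Ŝ_A·Ŝ_B):

```latex
% Yamaguchi spin-projected estimate of the effective exchange integral,
% from the broken-symmetry (BS) and high-spin (HS) energies and <S^2> values:
J \;=\; \frac{E_{\mathrm{BS}} - E_{\mathrm{HS}}}
             {\langle \hat{S}^2 \rangle_{\mathrm{HS}} - \langle \hat{S}^2 \rangle_{\mathrm{BS}}}
```

The ⟨S²⟩ denominator is what corrects the spin contamination of the broken-symmetry determinant, which is why the raw UHF J values quoted above come out too large.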
DMRG study of fractional quantum Hall effect and valley skyrmions in graphene
NASA Astrophysics Data System (ADS)
Shibata, Naokazu
2011-12-01
The ground state and low-energy excitations of graphene and its bilayer are investigated by the density matrix renormalization group (DMRG) method. We analyze the effect of the Coulomb interaction between the electrons including the valley degrees of freedom. The obtained results show a finite charge excitation gap at various fractional fillings ν = 1/3, 2/5, 2/3 in the n = 0 and 1 Landau levels of single-layer graphene (SLG) and the n = 2 Landau level of bilayer graphene (BLG). The lowest charge excitations at ν = 1/3 and 1 in SLG are valley skyrmions.
Hybrid Grid and Basis Set Approach to Quantum Chemistry DMRG
NASA Astrophysics Data System (ADS)
Stoudenmire, Edwin Miles; White, Steven
We present a new approach for using DMRG for quantum chemistry that combines the advantages of a basis set with those of a grid approximation. Because DMRG scales linearly for quasi-one-dimensional systems, it is feasible to approximate the continuum with a fine grid in one direction while using a standard basis set approach for the transverse directions. Compared to standard basis set methods, we reach larger systems and achieve better scaling when approaching the basis set limit. The flexibility and reduced costs of our approach even make it feasible to incorporate advanced DMRG techniques such as simulating real-time dynamics. Supported by the Simons Collaboration on the Many-Electron Problem.
NASA Astrophysics Data System (ADS)
Bischoff, Jan-Moritz; Jeckelmann, Eric
2017-11-01
We improve the density-matrix renormalization group (DMRG) evaluation of the Kubo formula for the zero-temperature linear conductance of one-dimensional correlated systems. The dynamical DMRG is used to compute the linear response of a finite system to an applied ac source-drain voltage; then the low-frequency finite-system response is extrapolated to the thermodynamic limit to obtain the dc conductance of an infinite system. The method is demonstrated on the one-dimensional spinless fermion model at half filling. Our method is able to replicate several predictions of the Luttinger liquid theory such as the renormalization of the conductance in a homogeneous conductor, the universal effects of a single barrier, and the resonant tunneling through a double barrier.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nocera, Alberto; Alvarez, Gonzalo
2016-11-21
Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.
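The object being computed is the correction vector |x(ω)⟩ = [ω + E₀ − H + iη]⁻¹ A|ψ₀⟩, from which the spectral function follows as −Im⟨ψ₀|A†|x(ω)⟩/π. A dense numpy toy of this idea (our stand-in Hamiltonian and observable; DMRG codes solve the same linear system iteratively, e.g. by conjugate gradient or, per this paper, a Krylov decomposition):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
M = rng.standard_normal((n, n))
H = (M + M.T) / 2
E, V = np.linalg.eigh(H)
psi0, e0 = V[:, 0], E[0]              # ground state and its energy
A = np.diag(rng.standard_normal(n))   # stand-in observable
phi = A @ psi0
eta = 0.05                            # Lorentzian broadening

for omega in np.linspace(0.0, 2.0, 5):
    # correction vector |x> = [omega + E0 - H + i eta]^(-1) A|psi0>
    x = np.linalg.solve((omega + e0 + 1j * eta) * np.eye(n) - H, phi)
    spectral = -(phi @ x).imag / np.pi    # A(omega) = -Im<phi|x>/pi >= 0
    print(f"{omega:4.2f}  {spectral:.6f}")
```

Solving one shifted linear system per frequency is what makes the method "direct in frequency": each ω is obtained independently, with no long time evolution or analytic continuation.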
One dimensionalization in the spin-1 Heisenberg model on the anisotropic triangular lattice
NASA Astrophysics Data System (ADS)
Gonzalez, M. G.; Ghioldi, E. A.; Gazza, C. J.; Manuel, L. O.; Trumper, A. E.
2017-11-01
We investigate the effect of dimensional crossover in the ground state of the antiferromagnetic spin-1 Heisenberg model on the anisotropic triangular lattice that interpolates between the regime of weakly coupled Haldane chains (J' ≪ J) and the isotropic triangular lattice (J' = J). We use the density-matrix renormalization group (DMRG) and Schwinger boson theory performed at the Gaussian correction level above the saddle-point solution. Our DMRG results show an abrupt transition between decoupled spin chains and the spirally ordered regime at (J'/J)_c ≈ 0.42, signaled by the sudden closing of the spin gap. Coming from the magnetically ordered side, the computation of the spin stiffness within Schwinger boson theory predicts the instability of the spiral magnetic order toward a magnetically disordered phase with one-dimensional features at (J'/J)_c ≈ 0.43. The agreement of these complementary methods, along with the strong difference found between the intra- and the interchain DMRG short spin-spin correlations for sufficiently large values of the interchain coupling, suggests that the interplay between the quantum fluctuations and the dimensional crossover effects gives rise to the one-dimensionalization phenomenon in this frustrated spin-1 Hamiltonian.
Xu, Enhua; Zhao, Dongbo; Li, Shuhua
2015-10-13
A multireference second order perturbation theory based on a complete active space configuration interaction (CASCI) function or density matrix renormalized group (DMRG) function has been proposed. This method may be considered as an approximation to the CAS/A approach with the same reference, in which the dynamical correlation is simplified with blocked correlated second order perturbation theory based on the generalized valence bond (GVB) reference (GVB-BCPT2). This method, denoted as CASCI-BCPT2/GVB or DMRG-BCPT2/GVB, is size consistent and has a similar computational cost as the conventional second order perturbation theory (MP2). We have applied it to investigate a number of problems of chemical interest. These problems include bond-breaking potential energy surfaces in four molecules, the spectroscopic constants of six diatomic molecules, the reaction barrier for the automerization of cyclobutadiene, and the energy difference between the monocyclic and bicyclic forms of 2,6-pyridyne. Our test applications demonstrate that CASCI-BCPT2/GVB can provide comparable results with CASPT2 (second order perturbation theory based on the complete active space self-consistent-field wave function) for systems under study. Furthermore, the DMRG-BCPT2/GVB method is applicable to treat strongly correlated systems with large active spaces, which are beyond the capability of CASPT2.
Spin Andreev-like Reflection in Metal-Mott Insulator Heterostructures
Al-Hassanieh, K. A.; Rincón, Julián; Alvarez, G.; ...
2015-02-09
Here we used the time-dependent density-matrix renormalization group (tDMRG) to study the time evolution of electron wave packets in one-dimensional (1D) metal-superconductor heterostructures. The results show Andreev reflection at the interface, as expected. By combining these results with the well-known single-spin-species electron-hole transformation in the Hubbard model, we predict an analogous spin Andreev reflection in metal-Mott insulator heterostructures. This effect is numerically confirmed using 1D tDMRG, but it is expected to also be present in higher dimensions, as well as in more general Hamiltonians. We present an intuitive picture of the spin reflection, analogous to that of Andreev reflection at metal-superconductor interfaces. This allows us to discuss a novel antiferromagnetic proximity effect. Possible experimental realizations are discussed.
Ren, Jiajun; Yi, Yuanping; Shuai, Zhigang
2016-10-11
We propose an inner space perturbation theory (isPT) to replace the expensive iterative diagonalization in the standard density matrix renormalization group theory (DMRG). The retained reduced density matrix eigenstates are partitioned into an active and a secondary space. The first-order wave function and the second- and third-order energies are easily computed by using one step of the Davidson iteration. Our formulation has several advantages including (i) keeping a balance between efficiency and accuracy, (ii) capturing more entanglement with the same amount of computational time, (iii) recovery of the standard DMRG when all the basis states belong to the active space. Numerical examples for the polyacenes and periacene show that the efficiency gain is considerable and the accuracy loss due to the perturbation treatment is very small, when half of the total basis states belong to the active space. Moreover, the perturbation calculations converge in all our numerical examples.
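The active/secondary split can be illustrated with a dense toy (our construction, not the paper's working equations): diagonalize exactly in the active block of retained states, then add a second-order correction from the secondary block using diagonal (Epstein-Nesbet-like) denominators.

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_act = 60, 30
M = rng.standard_normal((n, n))
H = (M + M.T) / 2 + np.diag(np.arange(n, dtype=float))  # secondary lies higher

act, sec = np.arange(n_act), np.arange(n_act, n)
w, U = np.linalg.eigh(H[np.ix_(act, act)])
e0, c0 = w[0], U[:, 0]                   # zeroth-order state from the active block

coupling = H[np.ix_(sec, act)] @ c0      # <s|H|psi0> for each secondary state
e2 = np.sum(coupling ** 2 / (e0 - np.diag(H)[sec]))   # second-order correction

exact = np.linalg.eigvalsh(H)[0]
print(f"active only: {e0:.4f}   +PT2: {e0 + e2:.4f}   exact: {exact:.4f}")
```

As in the abstract's point (iii), letting the active block grow to the whole space makes the correction vanish and recovers the exact (here, fully diagonalized) result.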
Critical behavior of the extended Hubbard model with bond dimerization
NASA Astrophysics Data System (ADS)
Ejima, Satoshi; Lange, Florian; Essler, Fabian H. L.; Fehske, Holger
2018-05-01
Exploiting the matrix-product-state based density-matrix renormalization group (DMRG) technique we study the one-dimensional extended (U-V) Hubbard model with explicit bond dimerization in the half-filled band sector. In particular we investigate the nature of the quantum phase transition, taking place with growing ratio V/U between the symmetry-protected-topological and charge-density-wave insulating states. The (weak-coupling) critical line of continuous Ising transitions with central charge c = 1/2 terminates at a tricritical point belonging to the universality class of the dilute Ising model with c = 7/10. We demonstrate that our DMRG data perfectly match with (tricritical) Ising exponents, e.g., for the order parameter β = 1/8 (1/24) and correlation length ν = 1 (5/9).
Ran, Shi-Ju
2016-05-01
In this work, a simple and fundamental numeric scheme dubbed the ab initio optimization principle (AOP) is proposed for the ground states of translationally invariant strongly correlated quantum lattice models. The idea is to transform a nondeterministic-polynomial-hard ground-state simulation with infinite degrees of freedom into a single optimization problem of a local function with a finite number of physical and ancillary degrees of freedom. This work contributes mainly in the following aspects: (1) AOP provides a simple and efficient scheme to simulate the ground state by solving a local optimization problem. Its solution contains two kinds of boundary states, one of which plays the role of the entanglement bath that mimics the interactions between a supercell and the infinite environment, while the other gives the ground state in a tensor network (TN) form. (2) In the TN setting, a novel decomposition named the tensor ring decomposition (TRD) is proposed to implement AOP. Instead of following the contraction-truncation scheme used by many existing TN-based algorithms, TRD solves the contraction of a uniform TN in the opposite way, by encoding the contraction in a set of self-consistent equations that automatically reconstruct the whole TN, making the simulation simple and unified. (3) AOP inherits and develops the ideas of different well-established methods, including the density matrix renormalization group (DMRG), infinite time-evolving block decimation (iTEBD), network contractor dynamics, density matrix embedding theory, etc., providing a unified perspective that was previously missing in this field. (4) AOP and TRD have novel implications for existing TN-based algorithms: a modified iTEBD is suggested, and the two-dimensional (2D) AOP is argued to be an intrinsic 2D extension of DMRG based on the infinite projected entangled pair state. This paper focuses on one-dimensional quantum models to present AOP. Benchmarks are given on a transverse Ising chain and the 2D classical Ising model, showing the remarkable efficiency and accuracy of AOP.
Density matrix renormalization group study of Y-junction spin systems
NASA Astrophysics Data System (ADS)
Guo, Haihui
Junction systems are important to understand from both the fundamental and the practical point of view, as they are essential components in existing and future electronic and spintronic devices. With the continuous advance of technology, device size will eventually reach the atomic scale, and some of the most interesting and useful junction systems will be strongly correlated. We chose the Density Matrix Renormalization Group method to study two types of Y-junction systems, the Y and Y-Delta junctions, on strongly correlated spin chains. With new ideas coming from the quantum information field, we have developed a very efficient Y-junction DMRG algorithm, which improves the overall CPU cost from O(m^6) to O(m^4), where m is the number of states kept per block. We studied the ground-state properties and the correlation length, and investigated the degeneracy problem on the Y and Y-Delta junctions. For the excited states, we investigated the existence of magnon bound states under various conditions, and have shown that the bound state exists when the central coupling constant is small.
Reducing Memory Cost of Exact Diagonalization using Singular Value Decomposition
NASA Astrophysics Data System (ADS)
Weinstein, Marvin; Chandra, Ravi; Auerbach, Assa
2012-02-01
We present a modified Lanczos algorithm to diagonalize lattice Hamiltonians with dramatically reduced memory requirements. In contrast to variational approaches and most implementations of DMRG, Lanczos rotations towards the ground state do not involve incremental minimizations (e.g., sweeping procedures) which may get stuck in false local minima. The lattice of size N is partitioned into two subclusters. At each iteration the rotating Lanczos vector is compressed into two sets of n_svd small subcluster vectors using singular value decomposition. For low entanglement entropy S_ee (satisfied by short-range Hamiltonians), the truncation error is bounded by exp(-n_svd^(1/S_ee)). Convergence is tested for the Heisenberg model on Kagome clusters of 24, 30 and 36 sites, with no lattice symmetries exploited, using less than 15 GB of dynamical memory. Generalization of the Lanczos-SVD algorithm to multiple partitioning is discussed, and comparisons to other techniques are given. Reference: arXiv:1105.0007
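A minimal sketch of the compression step described here, assuming a chain of 12 qubits split into two 6-site subclusters (the sizes and the random test vector are illustrative only; a random state is highly entangled, so the printed error is large by construction):

import numpy as np

# Reshape a state vector over NA + NB sites into a (2**NA) x (2**NB) matrix
# and keep only n_svd singular pairs: two sets of subcluster vectors.
def compress(psi, dim_a, n_svd):
    M = psi.reshape(dim_a, -1)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :n_svd], s[:n_svd], Vt[:n_svd]

rng = np.random.default_rng(1)
psi = rng.normal(size=2**12)
psi /= np.linalg.norm(psi)
U, s, Vt = compress(psi, 2**6, n_svd=8)
approx = (U * s) @ Vt                      # rank-8 reconstruction
print("truncation error with n_svd=8:", np.linalg.norm(psi - approx.ravel()))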
Absence of Long-Range Order in a Triangular Spin System with Dipolar Interactions
NASA Astrophysics Data System (ADS)
Keleş, Ahmet; Zhao, Erhai
2018-05-01
The antiferromagnetic Heisenberg model on the triangular lattice is perhaps the best known example of frustrated magnets, but it orders at low temperatures. Recent density matrix renormalization group (DMRG) calculations find that the next nearest neighbor interaction J2 enhances the frustration, and it leads to a spin liquid for J2/J1 ∈ (0.08, 0.15). In addition, a DMRG study of a dipolar Heisenberg model with longer range interactions gives evidence for a spin liquid at a small dipole tilting angle θ ∈ [0°, 10°). In both cases, the putative spin liquid region appears to be small. Here, we show that for the triangular lattice dipolar Heisenberg model, a robust quantum paramagnetic phase exists in a surprisingly wide region, θ ∈ [0°, 54°), for dipoles tilted along the lattice diagonal direction. We obtain the phase diagram of the model by functional renormalization group (RG), which treats all magnetic instabilities on equal footing. The quantum paramagnetic phase is characterized by a smooth continuous flow of vertex functions and spin susceptibility down to the lowest RG scale, in contrast to the apparent breakdown of RG flow in phases with stripe or spiral order. Our finding points to a promising direction to search for quantum spin liquids in ultracold dipolar molecules.
NASA Astrophysics Data System (ADS)
Chien, Chih-Chun; Gruss, Daniel; Di Ventra, Massimiliano; Zwolak, Michael
2013-06-01
The study of time-dependent, many-body transport phenomena is increasingly within reach of ultra-cold atom experiments. We show that the introduction of spatially inhomogeneous interactions, e.g., generated by optically controlled collisions, induces negative differential conductance in the transport of atoms in one-dimensional optical lattices. Specifically, we simulate the dynamics of interacting fermionic atoms via a micro-canonical transport formalism within both a mean-field and a higher-order approximation, as well as with a time-dependent density-matrix renormalization group (DMRG). For weakly repulsive interactions, a quasi-steady-state atomic current develops that is similar to the situation occurring for electronic systems subject to an external voltage bias. At the mean-field level, we find that this atomic current is robust against the details of how the interaction is switched on. Further, both our approximate and time-dependent DMRG simulations show that a conducting-non-conducting transition exists when the interaction imbalance exceeds some threshold. This transition is preceded by the atomic equivalent of the negative differential conductivity observed in transport across solid-state structures.
Orbital currents in a generalized Hubbard ladder
NASA Astrophysics Data System (ADS)
Fjaerestad, John O.
2004-03-01
We study a phase with orbital currents (d-density wave (DDW)/staggered flux phase) in a generalized Hubbard model on the two-leg ladder at zero temperature. Bosonization and perturbative renormalization-group calculations are used to identify a parameter region with long-range DDW order in the weakly interacting half-filled ladder. Finite-size density-matrix renormalization-group (DMRG) studies of ladders with up to 200 rungs, for rational hole dopings δ and intermediate-strength interactions, find that currents remain large in the doped DDW phase, with no evidence of decay.^1,2,3 Motivated by these results, we consider an effective bosonization description of the doped DDW phase in which quantum fluctuations in the total charge mode are neglected.^3 This leads to an analytically solvable Frenkel-Kontorova-like model which predicts that the staggered rung current and the rung electron density show periodic spatial oscillations with wavelengths 2/δ and 1/δ, respectively, with the density minima located at the zeros (domain walls) of the staggered rung current, in good agreement with the DMRG results. We comment on the question of the nature of the asymptotic current correlations in the doped DDW phase. ^1U. Schollwöck, S. Chakravarty, J. O. Fjaerestad, J. B. Marston, and M. Troyer, Phys. Rev. Lett. 90, 186401 (2003). ^2M. Troyer, invited talk at this meeting. ^3J. O. Fjaerestad, J. B. Marston, and U. Schollwöck, unpublished.
Quantum bright solitons in a quasi-one-dimensional optical lattice
NASA Astrophysics Data System (ADS)
Barbiero, Luca; Salasnich, Luca
2014-06-01
We study a quasi-one-dimensional attractive Bose gas confined in an optical lattice with a superimposed harmonic potential by analyzing the one-dimensional Bose-Hubbard Hamiltonian of the system. Starting from the three-dimensional many-body quantum Hamiltonian, we derive strong inequalities involving the transverse degrees of freedom under which the one-dimensional Bose-Hubbard Hamiltonian can be safely used. To have a reliable description of the one-dimensional ground state, which we call a quantum bright soliton, we use the density-matrix-renormalization-group (DMRG) technique. By comparing DMRG results with mean-field (MF) ones, we find that beyond-mean-field effects become relevant by increasing the attraction between bosons or by decreasing the frequency of the harmonic confinement. In particular, we find that, contrary to the MF predictions based on the discrete nonlinear Schrödinger equation, average density profiles of quantum bright solitons are not shape-invariant. We also use the time-evolving-block-decimation method to investigate the dynamical properties of bright solitons when the frequency of the harmonic potential is suddenly increased. This quantum quench induces a breathing mode whose period crucially depends on the final strength of the superimposed harmonic confinement.
Spin-Projected Matrix Product States: Versatile Tool for Strongly Correlated Systems.
Li, Zhendong; Chan, Garnet Kin-Lic
2017-06-13
We present a new wave function ansatz that combines the strengths of spin projection with the language of matrix product states (MPS) and matrix product operators (MPO) as used in the density matrix renormalization group (DMRG). Specifically, spin-projected matrix product states (SP-MPS) are constructed as |Ψ⟩ = P̂_S |Ψ_MPS^(N,M)⟩, where P̂_S is the spin projector for total spin S and |Ψ_MPS^(N,M)⟩ is an MPS wave function with a given particle number N and spin projection M. This new ansatz possesses several attractive features: (1) It provides a much simpler route to achieve spin adaptation (i.e., to create eigenfunctions of Ŝ²) compared to explicitly incorporating the non-Abelian SU(2) symmetry into the MPS. In particular, since the underlying state |Ψ_MPS^(N,M)⟩ in the SP-MPS uses only Abelian symmetries, one does not need the singlet embedding scheme for nonsinglet states, as normally employed in spin-adapted DMRG, to achieve a single consistent variationally optimized state. (2) Due to the use of |Ψ_MPS^(N,M)⟩ as its underlying state, the SP-MPS can be closely connected to broken-symmetry mean-field states. This allows one to straightforwardly generate the large number of broken-symmetry guesses needed to explore complex electronic landscapes in magnetic systems. Further, this connection can be exploited in the future development of quantum embedding theories for open-shell systems. (3) The sum-of-MPOs representation for the Hamiltonian and the spin projector P̂_S naturally leads to an embarrassingly parallel algorithm for computing expectation values and optimizing SP-MPS. (4) Optimizing SP-MPS belongs to the variation-after-projection (VAP) class of spin-projected theories. Unlike usual spin-projected theories based on determinants, the SP-MPS ansatz can be made essentially exact simply by increasing the bond dimensions in |Ψ_MPS^(N,M)⟩. Computing excited states is also simple by imposing orthogonality constraints, which are easy to implement with MPS. To illustrate the versatility of SP-MPS, we formulate algorithms for the optimization of ground and excited states, develop perturbation theory based on SP-MPS, and describe how to evaluate spin-independent and spin-dependent properties such as the reduced density matrices. We demonstrate the numerical performance of SP-MPS with applications to several models typical of strong correlation, including the Hubbard model, and [2Fe-2S] and [4Fe-4S] model complexes.
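One standard (Löwdin) form of a spin projector, given here only for orientation and in a convention that may differ from the paper's, is the product over unwanted total-spin sectors,

\[ \hat{P}_S = \prod_{S' \neq S} \frac{\hat{S}^2 - S'(S'+1)}{S(S+1) - S'(S'+1)}, \]

which annihilates every Ŝ² eigenstate with total spin S' ≠ S and acts as the identity on the S sector.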
A minimal model for the structural energetics of VO2
NASA Astrophysics Data System (ADS)
Kim, Chanul; Marianetti, Chris; The Marianetti Group Team
Resolving the structural, magnetic, and electronic structure of VO2 from the first principles of quantum mechanics is still a forefront problem despite decades of attention. Hybrid functionals have been shown to qualitatively ruin the structural energetics. While density functional theory (DFT) combined with cluster extensions of dynamical mean-field theory (DMFT) has demonstrated promising results in terms of the electronic properties, structural phase stability has not yet been addressed. In order to capture the basic physics of the structural transition, we propose a minimal model of VO2 based on the one-dimensional Peierls-Hubbard model and parameterize it based on DFT calculations of VO2. The total energy versus dimerization in the minimal model is then solved numerically exactly using the density matrix renormalization group (DMRG) and compared to the Hartree-Fock solution. We demonstrate that the Hartree-Fock solution exhibits the same pathologies as DFT+U, and spin density functional theory for that matter, while the DMRG solution is consistent with experimental observation. Our results demonstrate the critical role of non-locality in the total energy, which will need to be accounted for to obtain a complete description of VO2 from first principles. The authors acknowledge support from FAME, one of six centers of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA.
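For reference, a generic one-dimensional Peierls-Hubbard Hamiltonian of the kind invoked here (sign conventions and the precise DFT-derived parameterization are not specified in the abstract) reads

\[ \hat H = -\sum_{i,\sigma} \left[ t - \alpha \,(u_{i+1} - u_i) \right] \left( c^{\dagger}_{i\sigma} c_{i+1\sigma} + \text{h.c.} \right) + U \sum_i n_{i\uparrow} n_{i\downarrow} + \frac{K}{2} \sum_i (u_{i+1} - u_i)^2, \]

where u_i are lattice displacements, α is the electron-lattice coupling, and K the elastic constant; dimerization corresponds to a staggered pattern u_i ∝ (-1)^i.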
Zhu, Wei; Sheng, D. N.; Zhu, Jian -Xin
2017-08-14
Here, we study the magnetic-field-driven metal-to-insulator transition in the half-filled Hubbard model on the Bethe lattice, using dynamical mean-field theory with the quantum impurity problem solved by a density-matrix renormalization group algorithm. The method enables us to obtain high-resolution spectral densities in the presence of a magnetic field. It is found that the Kondo resonance at the Fermi level splits at relatively high magnetic field: the spin-up and -down components move away from the Fermi level and finally form a spin-polarized band insulator. By calculating the magnetization and spin susceptibility, we clarify that an applied magnetic field drives a transition from a paramagnetic metallic phase to a band insulating phase. In the weak interaction regime, the nature of the transition is continuous and captured by the Stoner description, while in the strong interaction regime the transition is very likely to be metamagnetic, as evidenced by the hysteresis curve. Furthermore, we determine the phase boundary by tracking the kink in the magnetic susceptibility, the steplike change of the entanglement entropy, and the closing of the entanglement gap. Interestingly, the phase boundaries determined in these two different ways are largely consistent with each other.
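On the Bethe lattice with hopping t (semicircular density of states), the DMFT self-consistency takes a particularly simple closed form, stated here for context in a standard convention,

\[ \Delta_{\sigma}(\omega) = t^{2}\, G_{\sigma}(\omega), \]

so each iteration reduces to recomputing the impurity Green's function G_σ(ω), here with the DMRG solver, and feeding t²G_σ back as the spin-dependent hybridization Δ_σ.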
Kurashige, Yuki; Saitow, Masaaki; Chalupský, Jakub; Yanai, Takeshi
2014-06-28
The O-O (oxygen-oxygen) bond formation is widely recognized as a key step of the catalytic reaction of dioxygen evolution from water. Recently, the water oxidation catalyzed by potassium ferrate (K2FeO4) was investigated on the basis of experimental kinetic isotope effect analysis assisted by density functional calculations, revealing the intramolecular oxo-coupling mechanism within a di-iron(VI) intermediate, or diferrate [Sarma et al., J. Am. Chem. Soc., 2012, 134, 15371]. Here, we report a detailed examination of this diferrate-mediated O-O bond formation using scalable multireference electronic structure theory. High-dimensional correlated many-electron wave functions beyond the one-electron picture were computed using the ab initio density matrix renormalization group (DMRG) method along the O-O bond formation pathway. The necessity of using a large active space arises from the description of complex electronic interactions and varying redox states, both associated with two-center antiferromagnetic multivalent iron-oxo coupling. Dynamic correlation effects on top of the active-space DMRG wave functions were additively accounted for by complete active space second-order perturbation (CASPT2) and multireference configuration interaction (MRCI) based methods, which were recently introduced by our group. These multireference methods were capable of handling the double-shell effects in the extended active space treatment. The calculations with an active space of 36 electrons in 32 orbitals, which is far beyond the conventional limit, provide a quantitatively reliable prediction of potential energy profiles and confirm the viability of the direct oxo coupling. The bonding nature of Fe-O and the dual bonding character of O-O are discussed using natural orbitals.
NASA Astrophysics Data System (ADS)
Wall, Michael
2014-03-01
Experimental progress in generating and manipulating synthetic quantum systems, such as ultracold atoms and molecules in optical lattices, has revolutionized our understanding of quantum many-body phenomena and posed new challenges for modern numerical techniques. Ultracold molecules, in particular, feature long-range dipole-dipole interactions and a complex and selectively accessible internal structure of rotational and hyperfine states, leading to many-body models with long range interactions and many internal degrees of freedom. Additionally, the many-body physics of ultracold molecules is often probed far from equilibrium, and so algorithms which simulate quantum many-body dynamics are essential. Numerical methods which are to have significant impact in the design and understanding of such synthetic quantum materials must be able to adapt to a variety of different interactions, physical degrees of freedom, and out-of-equilibrium dynamical protocols. Matrix product state (MPS)-based methods, such as the density-matrix renormalization group (DMRG), have become the de facto standard for strongly interacting low-dimensional systems. Moreover, the flexibility of MPS-based methods makes them ideally suited both to generic, open source implementation as well as to studies of the quantum many-body dynamics of ultracold molecules. After introducing MPSs and variational algorithms using MPSs generally, I will discuss my own research using MPSs for many-body dynamics of long-range interacting systems. In addition, I will describe two open source implementations of MPS-based algorithms in which I was involved, as well as educational materials designed to help undergraduates and graduates perform research in computational quantum many-body physics using a variety of numerical methods including exact diagonalization and static and dynamic variational MPS methods. Finally, I will mention present research on ultracold molecules in optical lattices, such as the exploration of many-body physics with polyatomic molecules, and the next generation of open source matrix product state codes. This work was performed in the research group of Prof. Lincoln D. Carr.
NMR relaxation rate in quasi one-dimensional antiferromagnets
NASA Astrophysics Data System (ADS)
Capponi, Sylvain; Dupont, Maxime; Laflorencie, Nicolas; Sengupta, Pinaki; Shao, Hui; Sandvik, Anders W.
We compare results of different numerical approaches to compute the NMR relaxation rate 1/T1 in quasi-one-dimensional (1d) antiferromagnets. In the purely 1d regime, recent numerical simulations using DMRG have provided the full crossover behavior from the classical regime at high temperature to the universal Tomonaga-Luttinger liquid at low energy (in the gapless case) or activated behavior (in the gapped case). For quasi-1d models, we can use mean-field approaches to reduce the problem to a 1d one that can be studied using DMRG. But in some cases, we can also simulate the full microscopic model using quantum Monte Carlo techniques. This allows us to compute dynamical correlations in imaginary time, and we will discuss recent advances in performing stochastic analytic continuation to obtain real-frequency spectra. Finally, we connect our results to experiments on various quasi-1d materials.
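Schematically, and up to hyperfine and prefactor conventions that the abstract does not specify, the quantity being computed is

\[ \frac{1}{T_1} \propto \sum_{\mathbf q} |A(\mathbf q)|^{2}\, S^{+-}(\mathbf q, \omega_0), \qquad S^{+-}(\mathbf q,\omega) = \int_{-\infty}^{\infty} \mathrm{d}t\; e^{i\omega t}\, \langle S^{+}_{\mathbf q}(t)\, S^{-}_{-\mathbf q}(0) \rangle, \]

where A(q) is the hyperfine form factor and ω_0 the nuclear Larmor frequency, effectively ω_0 → 0 on magnetic energy scales; the analytic continuation mentioned above is what converts imaginary-time QMC correlations into the real-frequency S(q, ω).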
Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B; Tamascelli, Dario; Montangero, Simone
2018-01-01
We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: We report linear speedups in the bond or local dimension of up to 24 times in quasi-two-dimensional cylindrical systems.
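A minimal sketch of a randomized low-rank factorization of the Halko-Martinsson-Tropp type (the routine below, with its oversampling and power-iteration choices, is a generic textbook construction, not the authors' implementation):

import numpy as np

# Randomized SVD: sample the range of M with a Gaussian test matrix,
# orthonormalize, then do a small deterministic SVD in the subspace.
def randomized_svd(M, rank, n_oversample=10, n_iter=2, rng=None):
    rng = rng or np.random.default_rng()
    k = rank + n_oversample
    Y = M @ rng.normal(size=(M.shape[1], k))   # range sampling
    for _ in range(n_iter):                    # power iterations sharpen the spectrum
        Y = M @ (M.T @ Y)
    Q, _ = np.linalg.qr(Y)                     # orthonormal basis for the range
    Ub, s, Vt = np.linalg.svd(Q.T @ M, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

rng = np.random.default_rng(0)
M = rng.normal(size=(500, 30)) @ rng.normal(size=(30, 500))  # exactly rank 30
U, s, Vt = randomized_svd(M, rank=30, rng=rng)
print("relative error:", np.linalg.norm(M - (U * s) @ Vt) / np.linalg.norm(M))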
Spin Bose-metal phase in a spin-1/2 model with ring exchange on a two-leg triangular strip
NASA Astrophysics Data System (ADS)
Sheng, D. N.; Motrunich, Olexei I.; Fisher, Matthew P. A.
2009-05-01
Recent experiments on triangular lattice organic Mott insulators have found evidence for a two-dimensional (2D) spin liquid in close proximity to the metal-insulator transition. A Gutzwiller wave function study of the triangular lattice Heisenberg model with a four-spin ring exchange term appropriate in this regime has found that the projected spinon Fermi sea state has a low variational energy. This wave function, together with a slave particle-gauge theory analysis, suggests that this putative spin liquid possesses spin correlations that are singular along surfaces in momentum space, i.e., “Bose surfaces.” Signatures of this state, which we will refer to as a “spin Bose metal” (SBM), are expected to manifest in quasi-one-dimensional (quasi-1D) ladder systems: the discrete transverse momenta cut through the 2D Bose surface leading to a distinct pattern of 1D gapless modes. Here, we search for a quasi-1D descendant of the triangular lattice SBM state by exploring the Heisenberg plus ring model on a two-leg triangular strip (zigzag chain). Using density matrix renormalization group (DMRG) supplemented by variational wave functions and a bosonization analysis, we map out the full phase diagram. In the absence of ring exchange the model is equivalent to the J1-J2 Heisenberg chain, and we find the expected Bethe-chain and dimerized phases. Remarkably, moderate ring exchange reveals a new gapless phase over a large swath of the phase diagram. Spin and dimer correlations possess singular wave vectors at particular “Bose points” (remnants of the 2D Bose surface) and allow us to identify this phase as the hoped for quasi-1D descendant of the triangular lattice SBM state. We use bosonization to derive a low-energy effective theory for the zigzag spin Bose metal and find three gapless modes and one Luttinger parameter controlling all power law correlations. Potential instabilities out of the zigzag SBM give rise to other interesting phases such as a period-3 valence bond solid or a period-4 chirality order, which we discover in the DMRG. Another interesting instability is into a spin Bose-metal phase with partial ferromagnetism (spin polarization of one spinon band), which we also find numerically using the DMRG.
Theoretical and computational studies of excitons in conjugated polymers
NASA Astrophysics Data System (ADS)
Barford, William; Bursill, Robert J.; Smith, Richard W.
2002-09-01
We present a theoretical and computational analysis of excitons in conjugated polymers. We use a tight-binding model of π-conjugated electrons, with 1/r interactions for large r. In both the weak-coupling limit (defined by W >> U) and the strong-coupling limit (defined by W << U) …
Theory of optical transitions in π-conjugated macrocycles
NASA Astrophysics Data System (ADS)
Marcus, Max; Coonjobeeharry, Jaymee; Barford, William
2016-04-01
We describe a theoretical and computational investigation of the optical properties of π-conjugated macrocycles. Since the low-energy excitations of these systems are Frenkel excitons that couple to high-frequency dispersionless phonons, we employ the quantized Frenkel-Holstein model and solve it via the density matrix renormalization group (DMRG) method. First we consider optical emission from perfectly circular systems. Owing to optical selection rules, such systems radiate via two mechanisms: (i) within the Condon approximation, by thermally induced emission from the optically allowed j = ±1 states, and (ii) beyond the Condon approximation, by emission from the j = 0 state via coupling with a totally non-symmetric phonon (namely, the Herzberg-Teller effect). Using perturbation theory, we derive an expression for the Herzberg-Teller correction and show via DMRG calculations that this expression soon fails as ħω/J and the size of the macrocycle increase. Next, we consider the role of broken symmetry caused by torsional disorder. In this case the quantum number j no longer labels eigenstates of angular momentum, but instead labels localized local exciton ground states (LEGSs) or quasi-extended states (QEESs). As for linear polymers, LEGSs define chromophores, with the higher-energy QEESs being extended over numerous LEGSs. Within the Condon approximation (i.e., neglecting the Herzberg-Teller correction) we show that increased disorder increases the emissive optical intensity, because all the LEGSs are optically active. We next consider the combined role of broken symmetry and curvature, by explicitly evaluating the Herzberg-Teller correction in disordered systems via the DMRG method. The Herzberg-Teller correction is most evident in the emission intensity ratio, I00/I01. In the Condon approximation I00/I01 is a constant function of curvature, whereas in practice it vanishes for closed rings and only approaches a constant in the limit of vanishing curvature. We calculate the optical spectra of a model system, cyclo-poly(para-phenylene ethynylene), for different amounts of torsional disorder within and beyond the Condon approximation. We show how broken symmetry and the Herzberg-Teller effect explain the spectral features. The Herzberg-Teller correction to the 0-1 emission vibronic peak is always significant. Finally, we note the qualitative similarities between the optical properties of conformationally disordered linear polymers and macrocycles in the limit of sufficiently large disorder, because in both cases they are determined by the optical properties of curved chromophores.
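For orientation, a generic form of the quantized Frenkel-Holstein model used here (couplings and sign conventions are assumptions; in the paper J is additionally modulated by torsional disorder) is

\[ \hat H = \sum_{n} J_{n} \left( |n\rangle\langle n+1| + \text{h.c.} \right) + \hbar\omega \sum_{n} b^{\dagger}_{n} b_{n} + g\,\hbar\omega \sum_{n} \left( b^{\dagger}_{n} + b_{n} \right) |n\rangle\langle n|, \]

where |n⟩ is a Frenkel exciton on monomer n, ω is the dispersionless phonon frequency, and g is the exciton-phonon coupling, related to the monomer Huang-Rhys parameter by S₁ = g² in a common convention; for a macrocycle of N monomers the sum is periodic, with n + 1 taken mod N.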
Numerical simulations of strongly correlated electron and spin systems
NASA Astrophysics Data System (ADS)
Changlani, Hitesh Jaiprakash
Developing analytical and numerical tools for strongly correlated systems is a central challenge for the condensed matter physics community. In the absence of exact solutions and controlled analytical approximations, numerical techniques have often contributed to our understanding of these systems. Exact Diagonalization (ED) requires the storage of at least two vectors the size of the Hilbert space under consideration (which grows exponentially with system size) which makes it affordable only for small systems. The Density Matrix Renormalization Group (DMRG) uses an intelligent Hilbert space truncation procedure to significantly reduce this cost, but in its present formulation is limited to quasi-1D systems. Quantum Monte Carlo (QMC) maps the Schrodinger equation to the diffusion equation (in imaginary time) and only samples the eigenvector over time, thereby avoiding the memory limitation. However, the stochasticity involved in the method gives rise to the "sign problem" characteristic of fermion and frustrated spin systems. The first part of this thesis is an effort to make progress in the development of a numerical technique which overcomes the above mentioned problems. We consider novel variational wavefunctions, christened "Correlator Product States" (CPS), that have a general functional form which hopes to capture essential correlations in the ground states of spin and fermion systems in any dimension. We also consider a recent proposal to modify projector (Green's Function) Quantum Monte Carlo to ameliorate the sign problem for realistic and model Hamiltonians (such as the Hubbard model). This exploration led to our own set of improvements, primarily a semistochastic formulation of projector Quantum Monte Carlo. Despite their limitations, existing numerical techniques can yield physical insights into a wide variety of problems. The second part of this thesis considers one such numerical technique - DMRG - and adapts it to study the Heisenberg antiferromagnet on a generic tree graph. Our attention turns to a systematic numerical and semi-analytical study of the effect of local even/odd sublattice imbalance on the low energy spectrum of antiferromagnets on regular Cayley trees. Finally, motivated by previous experiments and theories of randomly diluted antiferromagnets (where an even/odd sublattice imbalance naturally occurs), we present our study of the Heisenberg antiferromagnet on the Cayley tree at the percolation threshold. Our work shows how to detect "emergent" low energy degrees of freedom and compute the effective interactions between them by using data from DMRG calculations.
Yanai, Takeshi; Kurashige, Yuki; Neuscamman, Eric; Chan, Garnet Kin-Lic
2010-01-14
We describe the joint application of the density matrix renormalization group and canonical transformation theory to multireference quantum chemistry. The density matrix renormalization group provides the ability to describe static correlation in large active spaces, while the canonical transformation theory provides a high-order description of the dynamic correlation effects. We demonstrate the joint theory in two benchmark systems designed to test the dynamic and static correlation capabilities of the methods, namely, (i) total correlation energies in long polyenes and (ii) the isomerization curve of the [Cu2O2]2+ core. The largest complete active spaces and atomic orbital basis sets treated by the joint DMRG-CT theory in these systems correspond to a (24e,24o) active space and 268 atomic orbitals in the polyenes and a (28e,32o) active space and 278 atomic orbitals in [Cu2O2]2+.
DMRG study of the Kagome Antiferromagnetic Heisenberg Model
NASA Astrophysics Data System (ADS)
Yan, Simeng; White, Steven
2010-03-01
We have used DMRG to study the S=1/2 Heisenberg model on the Kagome lattice, using cylindrical boundary conditions and large clusters. We have focused on the spin gap and the presence or absence of the Valence Bond Crystal (VBC) order with a 36-site unit cell, as studied by Marston and Zeng, Singh and Huse, and others. Our results are probably the highest-accuracy results for large clusters to date. Our extrapolated results find a finite spin gap with a value of about 0.05 J. To determine whether VBC order occurs, we calculated the ground states of a variety of clusters, some of which allow the 36-site VBC order and others which do not. For narrower cylinders (width < 12), the VBC patterns are found to vanish as the number of kept states increases. For wider systems, we do observe VBC ground states, but it is not always clear that the calculations have converged. The extrapolated energies of the two types of states are very close, within about 1%.
Staggered Orbital Currents in the Half-Filled Two-Leg Ladder
NASA Astrophysics Data System (ADS)
Fjaerestad, J. O.; Marston, Brad; Sudbo, A.
2002-03-01
We present strong analytical and numerical evidence for the existence of a staggered flux (SF) phase in the half-filled two-leg ladder, with true long-range order in the counter-circulating currents. Using abelian bosonization with a careful treatment of the Klein factors, we show that a certain phase of the half-filled ladder, previously identified as having spin-Peierls order, instead exhibits staggered orbital currents with no dimerization (J. O. Fjærestad and J. B. Marston, cond-mat/0107094). This result, combined with a weak-coupling renormalization-group analysis, implies that the SF phase exists in a region of the phase diagram of the half-filled t-U-V-J ladder. Using the density-matrix renormalization-group (DMRG) approach generalized to complex-valued wavefunctions, we demonstrate that the SF phase exhibits robust currents at intermediate values of the interaction strengths.
Direct Solution of the Chemical Master Equation Using Quantized Tensor Trains
Kazeev, Vladimir; Khammash, Mustafa; Nip, Michael; Schwab, Christoph
2014-01-01
The Chemical Master Equation (CME) is a cornerstone of stochastic analysis and simulation of models of biochemical reaction networks. Yet direct solutions of the CME have remained elusive. Although several approaches overcome the infinite-dimensional nature of the CME through projections or other means, a common feature of proposed approaches is their susceptibility to the curse of dimensionality, i.e., the exponential growth in memory and computational requirements in the number of problem dimensions. We present a novel approach that has the potential to "lift" this curse of dimensionality. The approach is based on the use of the recently proposed Quantized Tensor Train (QTT) formatted numerical linear algebra for the low-parametric numerical representation of tensors. The QTT decomposition admits both algorithms for basic tensor arithmetic with complexity scaling linearly in the dimension (number of species) and sub-linearly in the mode size (maximum copy number), and a numerical tensor rounding procedure which is stable and quasi-optimal. We show how the CME can be represented in QTT format, then use the exponentially converging hp-discontinuous Galerkin discretization in time to reduce the CME evolution problem to a set of QTT-structured linear equations to be solved at each time step using an algorithm based on Density Matrix Renormalization Group (DMRG) methods from quantum chemistry. Our method automatically adapts the "basis" of the solution at every time step, guaranteeing that it is large enough to capture the dynamics of interest but no larger than necessary, as this would increase the computational complexity. Our approach is demonstrated by applying it to three different examples from systems biology: an independent birth-death process, an example of an enzymatic futile cycle, and a stochastic switch model. The numerical results on these examples demonstrate that the proposed QTT method achieves dramatic speedups and several orders of magnitude storage savings over direct approaches.
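A bare-bones sketch of the quantization-plus-tensor-train idea (the TT-SVD sweep below is a generic textbook construction with illustrative sizes, not the authors' QTT/DMRG solver): a vector of length 2^d is treated as a d-dimensional binary tensor and factored into a train of small cores by successive truncated SVDs.

import numpy as np

# Decompose a length-2**d vector into tensor-train cores of shape (r, 2, r'),
# truncating every bond to at most chi singular values.
def tt_decompose(v, d, chi):
    cores, r = [], 1
    M = v.reshape(r * 2, -1)
    for _ in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        k = min(chi, len(s))
        cores.append(U[:, :k].reshape(r, 2, k))
        r = k
        M = (s[:k, None] * Vt[:k]).reshape(r * 2, -1)
    cores.append(M.reshape(r, 2, 1))
    return cores

v = np.exp(-0.5 * np.linspace(0, 10, 2**10))   # smooth vector: tiny QTT ranks
cores = tt_decompose(v, d=10, chi=4)
w = np.ones((1, 1))                             # contract the train back together
for G in cores:
    w = np.tensordot(w, G, axes=([1], [0])).reshape(w.shape[0] * 2, -1)
print("reconstruction error:", np.linalg.norm(w.ravel() - v))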
Quantum Dynamics of Solitons in Strongly Interacting Systems on Optical Lattices
NASA Astrophysics Data System (ADS)
Rubbo, Chester; Balakrishnan, Radha; Reinhardt, William; Satija, Indubala; Rey, Ana; Manmana, Salvatore
2012-06-01
We present results on the quantum dynamics of solitons in XXZ spin-1/2 systems, which in general can be derived from a system of spinless fermions or hard-core bosons (HCB) with nearest-neighbor interaction on a lattice. A mean-field treatment using spin-coherent states revealed analytic solutions for both bright and dark solitons [1]. We take these solutions and apply a full quantum evolution using the adaptive time-dependent density matrix renormalization group method (adaptive t-DMRG), which takes into account the effect of strong correlations. We use local spin observables, correlation functions, and entanglement entropies as measures of the stability of these soliton solutions over the simulation times. [1] R. Balakrishnan, I.I. Satija, and C.W. Clark, Phys. Rev. Lett. 103, 230403 (2009).
Selection of active spaces for multiconfigurational wavefunctions
NASA Astrophysics Data System (ADS)
Keller, Sebastian; Boguslawski, Katharina; Janowski, Tomasz; Reiher, Markus; Pulay, Peter
2015-06-01
The efficient and accurate description of the electronic structure of strongly correlated systems is still a largely unsolved problem. The usual procedures start with a multiconfigurational (usually a Complete Active Space, CAS) wavefunction which accounts for static correlation and add dynamical correlation by perturbation theory, configuration interaction, or coupled cluster expansion. This procedure requires the correct selection of the active space. Intuitive methods are unreliable for complex systems. The inexpensive black-box unrestricted natural orbital (UNO) criterion postulates that the Unrestricted Hartree-Fock (UHF) charge natural orbitals with fractional occupancy (e.g., between 0.02 and 1.98) constitute the active space. UNOs generally approximate the CAS orbitals so well that the orbital optimization in CAS Self-Consistent Field (CASSCF) may be omitted, resulting in the inexpensive UNO-CAS method. A rigorous testing of the UNO criterion requires comparison with approximate full configuration interaction wavefunctions. This became feasible with the advent of Density Matrix Renormalization Group (DMRG) methods which can approximate highly correlated wavefunctions at affordable cost. We have compared active orbital occupancies in UNO-CAS and CASSCF calculations with DMRG in a number of strongly correlated molecules: compounds of electronegative atoms (F2, ozone, and NO2), polyenes, aromatic molecules (naphthalene, azulene, anthracene, and nitrobenzene), radicals (phenoxy and benzyl), diradicals (o-, m-, and p-benzyne), and transition metal compounds (nickel-acetylene and Cr2). The UNO criterion works well in these cases. Other symmetry breaking solutions, with the possible exception of spatial symmetry, do not appear to be essential to generate the correct active space. In the case of multiple UHF solutions, the natural orbitals of the average UHF density should be used. The problems of the UNO criterion and their potential solutions are discussed: finding the UHF solutions, discontinuities on potential energy surfaces, and inclusion of dynamical electron correlation and generalization to excited states.
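A toy illustration of the UNO criterion as stated (the occupation numbers are invented for the example; the thresholds 0.02 and 1.98 come from the abstract):

import numpy as np

# Keep UHF charge natural orbitals with fractional occupation numbers.
occ = np.array([2.0, 1.999, 1.97, 1.85, 1.02, 0.98, 0.15, 0.03, 0.01, 0.0])
lo, hi = 0.02, 1.98
active = np.where((occ > lo) & (occ < hi))[0]
n_elec = round(occ[active].sum())               # electrons in the active space
print("active orbital indices:", active.tolist())
print(f"suggested CAS({n_elec}e,{len(active)}o)")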
Stripe order from the perspective of the Hubbard model
Huang, Edwin W.; Mendl, Christian B.; Jiang, Hong-Chen; ...
2018-04-20
A microscopic understanding of the strongly correlated physics of the cuprates must account for the translational and rotational symmetry breaking that is present across all cuprate families, commonly in the form of stripes. Here we investigate the emergence of stripes in the Hubbard model, a minimal model believed to be relevant to the cuprate superconductors, using determinant quantum Monte Carlo (DQMC) simulations at finite temperatures and density matrix renormalization group (DMRG) ground-state calculations. By varying temperature, doping, and model parameters, we characterize the extent of stripes throughout the phase diagram of the Hubbard model. Our results show that including the often neglected next-nearest-neighbor hopping leads to the absence of spin incommensurability upon electron doping and nearly half-filled stripes upon hole doping. The similarities of these findings to experimental results on both electron- and hole-doped cuprate families support a unified description across a large portion of the cuprate phase diagram.
Magnetoelectric effects in the spin-1/2 XXZ model with Dzyaloshinskii-Moriya interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thakur, Pradeep; Durganandini, P., E-mail: pdn@physics.unipune.ac.in
2015-06-24
We study the 1D spin-1/2 XXZ chain in the presence of the Dzyaloshinskii-Moriya (D-M) interaction and with longitudinal and transverse magnetic fields. We assume the spin-current mechanism of Katsura-Nagaosa-Balatsky at play and interpret the D-M interaction as a coupling between the local electric polarization and an external electric field. We study the interplay of electric and magnetic order in the ground state using the numerical density matrix renormalization group (DMRG) method. Specifically, we investigate the dependence of the magnetization and electric polarization on the external electric and magnetic fields. We find that for transverse magnetic fields there are two different regimes of polarization, while for longitudinal magnetic fields there are three different regimes of polarization. The different regimes can be tuned by the external magnetic fields.
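In the Katsura-Nagaosa-Balatsky picture referenced here, the local polarization on a bond is tied to the spin current (a schematic statement; the bond-vector convention is an assumption),

\[ \mathbf P_{i,i+1} \propto \hat{\mathbf e}_{i,i+1} \times \left( \mathbf S_i \times \mathbf S_{i+1} \right), \]

with ê the bond direction, so an external electric field E couples as -E·P, which has exactly the form of a Dzyaloshinskii-Moriya term D·(S_i × S_{i+1}) added to the XXZ exchange.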
Pairing versus phase coherence of doped holes in distinct quantum spin backgrounds
NASA Astrophysics Data System (ADS)
Zhu, Zheng; Sheng, D. N.; Weng, Zheng-Yu
2018-03-01
We examine the pairing structure of holes injected into two distinct spin backgrounds: a short-range antiferromagnetic phase versus a symmetry-protected topological phase. Based on density matrix renormalization group (DMRG) simulation, we find that although there is a strong binding between two holes in both phases, phase fluctuations can significantly influence the pair-pair correlation depending on the spin-spin correlation in the background. Here the phase fluctuation is identified as an intrinsic string operator nonlocally controlled by the spins. We show that while the pairing amplitude is generally large, the coherent Cooper pairing can be substantially weakened by the phase fluctuation in the symmetry-protected topological phase, in contrast to the short-range antiferromagnetic phase. It provides an example of a non-BCS mechanism for pairing, in which the pairing phase coherence is determined by the underlying spin state self-consistently, bearing an interesting resemblance to the pseudogap physics in the cuprates.
NASA Astrophysics Data System (ADS)
Varjas, Daniel; Zaletel, Michael; Moore, Joel
2014-03-01
We use bosonic field theories and the infinite system density matrix renormalization group (iDMRG) method to study infinite strips of fractional quantum Hall (FQH) states starting from microscopic Hamiltonians. Finite-entanglement scaling allows us to accurately measure chiral central charge, edge mode exponents and momenta without finite-size errors. We analyze states in the first and second level of the standard hierarchy and compare our results to predictions of the chiral Luttinger liquid (χLL) theory. The results confirm the universality of scaling exponents in chiral edges and demonstrate that renormalization is subject to universal relations in the non-chiral case. We prove a generalized Luttinger's theorem involving all singularities in the momentum-resolved density, which naturally arises when mapping Landau levels on a cylinder to a fermion chain and deepens our understanding of non-Fermi liquids in 1D.
NASA Astrophysics Data System (ADS)
Yao, K. L.; Li, Y. C.; Sun, X. Z.; Liu, Q. M.; Qin, Y.; Fu, H. H.; Gao, G. Y.
2005-10-01
By using the density matrix renormalization group (DMRG) method for the one-dimensional (1D) Hubbard model, we have studied the von Neumann entropy of a quantum system, which describes the entanglement of the system block and the rest of the chain. It is found that there is a close relation between the entanglement entropy and properties of the system. Hole doping can alter the charge-charge and spin-spin interactions, resulting in charge polarization along the chain. By comparing the results before and after the doping, we find that doping favors an increase of the von Neumann entropy and thus also favors the exchange of information along the chain. Furthermore, we calculated the spin and entropy distribution in an external magnetic field. It is confirmed that both the charge-charge and the spin-spin interactions affect the exchange of information along the chain, making the entanglement entropy redistribute.
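A minimal sketch of the block-entropy computation itself, for a generic pure state of a small chain (the random test state and qubit local dimension are illustrative assumptions, not the Hubbard-model setup of the paper):

import numpy as np

# Von Neumann entropy of a block of l sites in a chain of n qubits,
# from the singular values of the bipartitioned state vector.
def block_entropy(psi, l, n):
    s = np.linalg.svd(psi.reshape(2**l, 2**(n - l)), compute_uv=False)
    p = s**2                       # Schmidt weights = reduced density matrix spectrum
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

n = 10
rng = np.random.default_rng(0)
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)
print([round(block_entropy(psi, l, n), 3) for l in range(1, n)])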
Emergence of chiral spin liquids via quantum melting of noncoplanar magnetic orders
Hickey, Ciarán; Cincio, Lukasz; Papić, Zlatko; ...
2017-09-11
Quantum spin liquids (QSLs) are highly entangled states of quantum magnets which lie beyond the Landau paradigm of classifying phases of matter via broken symmetries. A physical route to arriving at QSLs is via frustration-induced quantum melting of ordered states such as valence bond crystals or magnetic orders. Using extensive exact diagonalization (ED) and density-matrix renormalization group (DMRG) studies of concrete SU(2)-invariant spin models on honeycomb, triangular, and square lattices, we show that chiral spin liquids (CSLs) emerge as descendants of triple-Q spin crystals with tetrahedral magnetic order and a large scalar spin chirality. Such ordered-to-CSL melting transitions may yield lattice realizations of effective Chern-Simons-Higgs field theories. Our work provides a distinct unifying perspective on the emergence of CSLs and suggests that materials with certain noncoplanar magnetic orders might provide a good starting point to search for CSLs.
Relaxation of photoexcitations in polaron-induced magnetic microstructures
NASA Astrophysics Data System (ADS)
Köhler, Thomas; Rajpurohit, Sangeeta; Schumann, Ole; Paeckel, Sebastian; Biebl, Fabian R. A.; Sotoudeh, Mohsen; Kramer, Stephan C.; Blöchl, Peter E.; Kehrein, Stefan; Manmana, Salvatore R.
2018-06-01
We investigate the evolution of a photoexcitation in correlated materials over a wide range of time scales. The system studied is a one-dimensional model of a manganite with correlated electron, spin, orbital, and lattice degrees of freedom, which we relate to the three-dimensional material Pr1-xCaxMnO3. The ground-state phases for the entire composition range are determined and rationalized by a coarse-grained polaron model. At half doping a pattern of antiferromagnetically coupled Zener polarons is realized. Using time-dependent density-matrix renormalization group (tDMRG), we treat the electronic quantum dynamics following the excitation. The emergence of quasiparticles is addressed, and the relaxation of the nonequilibrium quasiparticle distribution is investigated via a linearized quantum-Boltzmann equation. Our approach shows that the magnetic microstructure caused by the Zener polarons leads to an increase of the relaxation times of the excitation.
Computational studies of model disordered and strongly correlated electronic systems
NASA Astrophysics Data System (ADS)
Johri, Sonika
The theory of non-interacting electrons in perfect crystals was completed soon after the advent of quantum mechanics. Though capable of describing electron behaviour in most simple solid state physics systems, this approach falls woefully short of describing the condensed matter systems of interest today, and of designing the quantum devices of the future. The reason is that nature is never free of disorder, and emergent properties arising from interactions can be clearly seen in the pure, low-dimensional materials that can be engineered today. In this thesis, I address some salient problems in disordered and correlated electronic systems using modern numerical techniques like sparse matrix diagonalization, density matrix renormalization group (DMRG), and large disorder renormalization group (LDRG) methods. The pioneering work of P. W. Anderson, in 1958, led to an understanding of how an electron can stop diffusing and become localized in a region of space when a crystal is sufficiently disordered. Thus disorder can lead to metal-insulator transitions, for instance, in doped semiconductors. Theoretical research on the Anderson disorder model since then has mostly focused on the localization-delocalization phase transition. The localized phase in itself was not thought to exhibit any interesting physics. Our work has uncovered a new singularity in the disorder-averaged inverse participation ratio of wavefunctions within the localized phase, arising from resonant states. The effects of system size, dimension and disorder distribution on the singularity have been studied. A novel wavefunction-based LDRG technique has been designed for the Anderson model which captures the singular behaviour. While localization is well established for a single electron in a disordered potential, the situation is less clear in the case of many interacting particles. Most studies of a many-body localized phase are restricted to a system which is isolated from its environment. Such a condition cannot be achieved perfectly in experiments. A chapter of this thesis is devoted to studying signatures of incomplete localization in a disordered system with interacting particles which is coupled to a bath. Strongly interacting particles can also give rise to topological phases of matter that have exotic emergent properties, such as quasiparticles with fractional charges and anyonic, or perhaps even non-Abelian, statistics. In addition to their intrinsic novelty, these particles (e.g. Majorana fermions) may be the building blocks of future quantum computers. The third part of my thesis focuses on the best experimentally known realizations of such systems - the fractional quantum Hall effect (FQHE), which occurs in two-dimensional electron gases in a strong perpendicular magnetic field. It has been observed in systems such as semiconductor heterostructures and, more recently, graphene. I have developed software for exact diagonalization of the many-body FQHE problem on the surface of a cylinder, a hitherto unstudied type of geometry. This geometry turns out to be optimal for the DMRG algorithm. Using this new geometry, I have studied properties of various fractionally-filled states, computing the overlap between exact ground states and model wavefunctions, their edge excitations, and entanglement spectra. I have calculated the sizes and tunneling amplitudes of quasiparticles, information which is needed to design the interferometers used to experimentally measure their Aharonov-Bohm phase.
I have also designed numerical probes of the recently discovered geometric degree of freedom of FQHE states.
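The singular structure described above lives in the disorder-averaged inverse participation ratio (IPR). As an editorial illustration, here is a minimal sketch of that quantity for a 1D Anderson chain; the lattice size, uniform box disorder, and averaging scheme are illustrative assumptions, not the thesis's actual setup.

```python
import numpy as np

def anderson_ipr(n_sites=200, disorder_w=2.0, n_realizations=100, seed=0):
    """Disorder-averaged IPR, state by state, for a 1D Anderson chain."""
    rng = np.random.default_rng(seed)
    iprs = []
    for _ in range(n_realizations):
        # Tight-binding chain: random on-site energies, unit hopping.
        eps = rng.uniform(-disorder_w / 2, disorder_w / 2, n_sites)
        h = (np.diag(eps)
             - np.diag(np.ones(n_sites - 1), 1)
             - np.diag(np.ones(n_sites - 1), -1))
        _, vecs = np.linalg.eigh(h)
        # IPR of each normalized eigenvector: sum_i |psi_i|^4.
        iprs.append(np.sum(np.abs(vecs) ** 4, axis=0))
    return np.mean(iprs, axis=0)

# Larger IPR means stronger localization; extended states would scale as 1/N.
print(anderson_ipr(n_sites=100, n_realizations=20)[:5])
```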
Theory of optical transitions in conjugated polymers. II. Real systems
NASA Astrophysics Data System (ADS)
Marcus, Max; Tozer, Oliver Robert; Barford, William
2014-10-01
The theory of optical transitions developed in Barford and Marcus ["Theory of optical transitions in conjugated polymers. I. Ideal systems," J. Chem. Phys. 141, 164101 (2014)] for linear, ordered polymer chains is extended in this paper to model conformationally disordered systems. Our key result is that in the Born-Oppenheimer regime the emission intensities are proportional to S_1/⟨IPR⟩, where S_1 is the Huang-Rhys parameter for a monomer and ⟨IPR⟩ is the average inverse participation ratio for the emitting species, i.e., local exciton ground states (LEGSs). Since the spatial coherence of LEGSs determines the spatial extent of chromophores, the significance of this result is that it directly relates experimental observables to chromophore sizes (where ⟨IPR⟩ is half the mean chromophore size in monomer units). This result is independent of the chromophore shape, because of the Born-Oppenheimer factorization of the many-body wavefunction. We verify this prediction by density matrix renormalization group (DMRG) calculations of the Frenkel-Holstein model in the adiabatic limit for both linear, disordered chains and for coiled, ordered chains. We also model optical spectra for poly(p-phenylene) and poly(p-phenylene-vinylene) oligomers and polymers. For oligomers, we solve the fully quantized Frenkel-Holstein model via the DMRG method. For polymers, we use the much simpler method of solving the one-particle Frenkel model and employ the Born-Oppenheimer expressions relating the effective Franck-Condon factor of a chromophore to its inverse participation ratio. We show that increased disorder decreases chromophore sizes and increases the inhomogeneous broadening, but has a non-monotonic effect on transition energies. We also show that as planarizing the polymer chain increases the exciton band width, it causes the chromophore sizes to increase, the transition energies to decrease, and the broadening to decrease. Finally, we show that the absorption spectra are more broadened than the emission spectra and that the broadening of the absorption spectra increases as the chains become more coiled. This is primarily because absorption occurs to both LEGSs and quasi-extended exciton states (QEESs), and QEESs acquire increased intensity as chromophores bend, while emission only occurs from LEGSs.
A Non-Perturbative Treatment of Quantum Impurity Problems in Real Lattices
NASA Astrophysics Data System (ADS)
Allerdt, Andrew C.
Historically, the RKKY, or indirect exchange, interaction has been assumed to be well described by second-order perturbation theory, and a universal expression is usually given in this context. This approach, however, fails to incorporate many-body effects, quantum fluctuations, and other important details. In Chapter 2, a novel numerical approach is developed to tackle these problems in a quasi-exact, non-perturbative manner. The central idea behind the method is that an n-dimensional lattice problem can be mapped exactly onto a 1-dimensional chain. The density matrix renormalization group algorithm is then employed to solve the newly cast Hamiltonian. In the following chapters, it is demonstrated that conventional RKKY theory does not capture the crucial physics. It is found that the Kondo effect, i.e. the screening of an impurity spin, tends to dominate over a ferromagnetic interaction between impurity spins. Furthermore, it is found that the indirect exchange interaction does not decay algebraically. Instead, there is a crossover upon increasing J_K, where impurities favor forming their own independent Kondo states after just a few lattice spacings. This is not a trivial result, as one may naively expect impurities to interact when their conventional Kondo clouds overlap. The spin structure around impurities coupled to the edge of a 2D topological insulator is investigated in Chapter 7. Modeled after materials such as silicene, germanene, and stanene, it is shown with spatial resolution of the lattice that the specific impurity placement plays a key role. Effects of spin-orbit interactions are also discussed. Finally, in the last chapter, transition metal complexes are studied. This really shows the power and versatility of the method developed throughout the work. The spin states of an iron atom in the molecule FeN4C10 are calculated and compared to DFT, showing the importance of inter-orbital Coulomb interactions. Using dynamical DMRG, the density of states for the 3d-orbitals can also be obtained.
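The mapping step at the heart of the method can be caricatured as a Lanczos tridiagonalization of the single-particle hopping matrix, seeded on the orbital the impurity couples to; the output is the on-site energies and hoppings of an equivalent chain that a DMRG code can consume. The square-lattice geometry, seed site, and chain length below are illustrative assumptions, not the thesis's models.

```python
import numpy as np

def square_lattice_h(lx, ly, t=1.0):
    """Hopping matrix of an lx-by-ly square lattice with open boundaries."""
    n = lx * ly
    h = np.zeros((n, n))
    for x in range(lx):
        for y in range(ly):
            i = x * ly + y
            if x + 1 < lx:
                j = (x + 1) * ly + y
                h[i, j] = h[j, i] = -t
            if y + 1 < ly:
                j = x * ly + y + 1
                h[i, j] = h[j, i] = -t
    return h

def lanczos_chain(h, seed_site, n_steps):
    """On-site energies a_n and hoppings b_n of the mapped 1D chain."""
    n = h.shape[0]
    q = np.zeros(n)
    q[seed_site] = 1.0  # start on the orbital the impurity couples to
    basis, a, b = [q], [], []
    for _ in range(n_steps):
        w = h @ basis[-1]
        a.append(basis[-1] @ w)
        w -= a[-1] * basis[-1]
        if len(basis) > 1:
            w -= b[-1] * basis[-2]
        # Full reorthogonalization against all previous chain orbitals.
        w -= np.array(basis).T @ (np.array(basis) @ w)
        norm = np.linalg.norm(w)
        if norm < 1e-12:
            break
        b.append(norm)
        basis.append(w / norm)
    return np.array(a), np.array(b)

a, b = lanczos_chain(square_lattice_h(6, 6), seed_site=0, n_steps=20)
print(a[:5], b[:5])
```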
Broken Time-Reversal Symmetry in Strongly Correlated Ladder Structures
NASA Astrophysics Data System (ADS)
Troyer, Matthias
2004-03-01
A decade after the first detailed numerical investigations of strongly correlated ladder models, exotic and interesting phases are still being discovered. Besides charge and spin density wave states with broken translational symmetry, and resonating valence bond (RVB) type superconductivity, a time-reversal symmetry broken phase was recently found at half filling [J.B. Marston et al., Phys. Rev. Lett. 89, 056404 (2002)]. In this talk I will present our recent results of density matrix renormalization group (DMRG) calculations [Phys. Rev. Lett. 90, 186401 (2003)], where we provide, for the first time, in a doped strongly correlated system (two-leg ladder), a controlled theoretical demonstration of the existence of this state in which long-range ordered orbital currents are arranged in a staggered pattern. This phase, which we found to coexist with a charge density wave, is known in the literature under the names ``staggered flux phase'', ``orbital antiferromagnetism'' or ``d-density wave (DDW)''. This brings us closer to recent proposals that this order might be realized in the enigmatic pseudogap phase of the cuprate high temperature superconductors.
NASA Astrophysics Data System (ADS)
Eliëns, I. S.; Ramos, F. B.; Xavier, J. C.; Pereira, R. G.
2016-05-01
We study the influence of reflective boundaries on time-dependent responses of one-dimensional quantum fluids at zero temperature beyond the low-energy approximation. Our analysis is based on an extension of effective mobile impurity models for nonlinear Luttinger liquids to the case of open boundary conditions. For integrable models, we show that boundary autocorrelations oscillate as a function of time with the same frequency as the corresponding bulk autocorrelations. This frequency can be identified as the band edge of elementary excitations. The amplitude of the oscillations decays as a power law with distinct exponents at the boundary and in the bulk, but boundary and bulk exponents are determined by the same coupling constant in the mobile impurity model. For nonintegrable models, we argue that the power-law decay of the oscillations is generic for autocorrelations in the bulk, but turns into an exponential decay at the boundary. Moreover, there is in general a nonuniversal shift of the boundary frequency in comparison with the band edge of bulk excitations. The predictions of our effective field theory are compared with numerical results obtained by time-dependent density matrix renormalization group (tDMRG) for both integrable and nonintegrable critical spin-S chains with S = 1/2, 1, and 3/2.
Quantum Hall ferroelectrics and nematics in multivalley systems
NASA Astrophysics Data System (ADS)
Sodemann, I.; Zhu, Zheng; Fu, Liang
We study broken symmetry states in multivalley quantum Hall systems whose low energy dispersions are anisotropic. Interactions tend to select states that are maximally valley polarized and have nematic character. Interestingly, in certain systems like the recently studied Bismuth (111) surfaces, the formation of these nematic states can be accompanied by the appearance of a spontaneous dipole moment, leading to the formation of a quantum Hall ferroelectric state. We study these states combining mean field calculations with a state-of-the-art DMRG numerical approach, and demonstrate that skyrmion-type charged excitations are extremely robust to the presence of nematic anisotropy. Supported by DOE Office of Basic Energy Sciences, Division of Materials Sciences and Engineering Award DE-SC0010526. I.S. supported by a Pappalardo Fellowship. We used the Extreme Science and Engineering Discovery Environment (XSEDE) under NSF Grant ACI-1053575.
Encoding the structure of many-body localization with matrix product operators
NASA Astrophysics Data System (ADS)
Pekker, David; Clark, Bryan K.
2015-03-01
Anderson insulators are non-interacting disordered systems which have localized single particle eigenstates. The interacting analogues of Anderson insulators are the Many-Body Localized (MBL) phases. The natural language for representing the spectrum of the Anderson insulator is that of product states over the single-particle modes. We show that product states over Matrix Product Operators of small bond dimension are the corresponding natural language for describing the MBL phases. In this language all of the many-body eigenstates are encoded by Matrix Product States (i.e., DMRG wave functions) consisting of only two sets of low bond-dimension matrices per site: the G_i matrix corresponding to the local ground state on site i and the E_i matrix corresponding to the local excited state. All 2^n eigenstates can be generated from all possible combinations of these matrices.
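To make the encoding concrete, here is a toy sketch in which random low-bond-dimension tensors stand in for the G_i and E_i matrices; a bitstring of G/E choices labels one of the 2^n eigenstates, and amplitudes are read off by contracting the resulting matrix product state. The shapes, boundary vectors, and random tensors are all illustrative assumptions, not the paper's construction.

```python
import numpy as np

def mps_amplitude(tensors, config):
    """Amplitude <config|psi> of an open-boundary MPS; tensors[i][s] is chi x chi."""
    v = np.ones(tensors[0][config[0]].shape[0])  # illustrative boundary vector
    for site, s in enumerate(config):
        v = v @ tensors[site][s]
    return v.sum()

n_sites, chi = 6, 2
rng = np.random.default_rng(1)
# Per site: a "ground" tensor G[i] and an "excited" tensor E[i],
# each with shape (physical dim 2, chi, chi).
G = [rng.normal(size=(2, chi, chi)) for _ in range(n_sites)]
E = [rng.normal(size=(2, chi, chi)) for _ in range(n_sites)]

# One eigenstate is labeled by a bitstring over the {G, E} choices per site.
label = (0, 1, 1, 0, 0, 1)
eigenstate_tensors = [E[i] if b else G[i] for i, b in enumerate(label)]
config = (0, 0, 1, 0, 1, 1)  # a spin configuration whose amplitude we read off
print(mps_amplitude(eigenstate_tensors, config))
```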
Expansion Potentials for Exact Far-from-Equilibrium Spreading of Particles and Energy
Vasseur, Romain; Karrasch, Christoph; Moore, Joel E.
2015-12-01
We report that the rates at which energy and particle densities move to equalize arbitrarily large temperature and chemical potential differences in an isolated quantum system have an emergent thermodynamical description whenever energy or particle current commutes with the Hamiltonian. Concrete examples include the energy current in the 1D spinless fermion model with nearest-neighbor interactions (XXZ spin chain), energy current in Lorentz-invariant theories or particle current in interacting Bose gases in arbitrary dimension. Even far from equilibrium, these rates are controlled by state functions, which we call "expansion potentials", expressed as integrals of equilibrium Drude weights. This relation between nonequilibrium quantities and linear response implies non-equilibrium Maxwell relations for the Drude weights. Lastly, we verify our results via DMRG calculations for the XXZ chain.
Testing the accuracy of redshift-space group-finding algorithms
NASA Astrophysics Data System (ADS)
Frederic, James J.
1995-04-01
Using simulated redshift surveys generated from a high-resolution N-body cosmological structure simulation, we study algorithms used to identify groups of galaxies in redshift space. Two algorithms are investigated; both are friends-of-friends schemes with variable linking lengths in the radial and transverse dimensions. The chief difference between the algorithms is in the redshift linking length. The algorithm proposed by Huchra & Geller (1982) uses a generous linking length designed to find 'fingers of god,' while that of Nolthenius & White (1987) uses a smaller linking length to minimize contamination by projection. We find that neither of the algorithms studied is intrinsically superior to the other; rather, the ideal algorithm as well as the ideal algorithm parameters depend on the purpose for which groups are to be studied. The Huchra & Geller algorithm misses few real groups, at the cost of including some spurious groups and members, while the Nolthenius & White algorithm misses high velocity dispersion groups and members but is less likely to include interlopers in its group assignments. Adjusting the parameters of either algorithm results in a trade-off between group accuracy and completeness. In a companion paper we investigate the accuracy of virial mass estimates and clustering properties of groups identified using these algorithms.
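Both schemes are variants of the friends-of-friends idea sketched below: link any two galaxies closer than a transverse linking length and a line-of-sight linking length, then read groups off as connected components. The constant linking lengths here are a simplifying assumption; the published algorithms scale them with survey depth.

```python
import numpy as np

def fof_groups(xy, v_los, d_perp=1.0, d_los=350.0):
    """xy: (n, 2) projected positions; v_los: (n,) radial velocities (km/s)."""
    n = len(v_los)
    parent = list(range(n))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            perp = np.linalg.norm(xy[i] - xy[j])
            los = abs(v_los[i] - v_los[j])
            if perp < d_perp and los < d_los:  # friends: link them
                parent[find(i)] = find(j)
    # Groups are the connected components of the "friendship" graph.
    return np.array([find(i) for i in range(n)])

rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(50, 2))
v = rng.normal(0, 300, size=50)
print(np.unique(fof_groups(xy, v)).size, "groups found")
```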
Physics of Resonating Valence Bond Spin Liquids
NASA Astrophysics Data System (ADS)
Wildeboer, Julia Saskia
This thesis will investigate various aspects of the physics of resonating valence bond spin liquids. After giving an introduction to the world that lies beyond Landau's principle of symmetry breaking, e.g. giving an overview of exotic magnetic phases and how they can be described and (possibly) found, we will study a spin-rotationally invariant model system with a known parent Hamiltonian, and argue its ground state to lie within a highly sought after exotic phase, namely the Z2 quantum spin liquid phase. A newly developed numerical procedure --Pfaffian Monte Carlo-- will be introduced to amass evidence that our model Hamiltonian indeed exhibits a Z2 quantum spin liquid phase. Subsequently, we will prove a useful mathematical property of the resonating valence bond states: these states are shown to be linearly independent. Various lattices are investigated concerning this property, and its applications and usefulness are discussed. Eventually, we present a simplified model system describing the interplay of the well known Heisenberg interaction and the Dzyaloshinskii-Moriya (DM) interaction term acting on a sawtooth chain. The effect of the interplay between the two interaction couplings on the phase diagram is investigated. To do so, we employ modern techniques such as the density matrix renormalization group (DMRG) scheme. We find that for weak DM interaction the system exhibits valence bond order. However, a strong enough DM coupling destroys this order.
The artificial-free technique along the objective direction for the simplex algorithm
NASA Astrophysics Data System (ADS)
Boonperm, Aua-aree; Sinapiromsaran, Krung
2014-03-01
The simplex algorithm is a popular algorithm for solving linear programming problems. If the origin satisfies all constraints then the simplex algorithm can be started immediately. Otherwise, artificial variables must be introduced, and avoiding them makes the simplex iterations cheaper. In this paper, we present an artificial-free technique for the simplex algorithm that maps the problem into the objective plane and splits the constraints into three groups. In the objective plane, one of the variables with a nonzero objective coefficient is expressed in terms of another variable. The constraints then split into three groups: the positive coefficient group, the negative coefficient group and the zero coefficient group. Along the objective direction, some constraints from the positive coefficient group will form the optimal solution. If the positive coefficient group is nonempty, the algorithm starts by relaxing the constraints from the negative coefficient group and the zero coefficient group; the feasible region obtained from the positive coefficient group is guaranteed to be nonempty. The transformed problem is solved using the simplex algorithm. The constraints from the negative coefficient group and the zero coefficient group are then added back to the solved problem, and the dual simplex method determines the new optimal solution. An example shows the effectiveness of our algorithm.
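A toy sketch of the splitting step alone, under the simplifying assumption that constraints a_i · x <= b_i are grouped by the sign of their component along the objective direction c; the tolerance and this particular grouping rule are illustrative, not the paper's exact transformation.

```python
import numpy as np

def split_constraints(A, c, tol=1e-9):
    """Group constraint rows of A by their component along the objective c."""
    proj = A @ c / np.linalg.norm(c)       # component of each row along c
    positive = np.where(proj > tol)[0]     # these shape the optimum along c
    negative = np.where(proj < -tol)[0]    # relaxed first, re-added via dual simplex
    zero = np.where(np.abs(proj) <= tol)[0]
    return positive, negative, zero

A = np.array([[1.0, 2.0], [-1.0, 0.5], [2.0, -4.0]])
c = np.array([1.0, 2.0])
print(split_constraints(A, c))  # (array([0]), array([2]), array([1]))
```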
An efficient group multicast routing for multimedia communication
NASA Astrophysics Data System (ADS)
Wang, Yanlin; Sun, Yugen; Yan, Xinfang
2004-04-01
Group multicasting is a communication mechanism whereby each member of a group sends messages to all the other members of the same group. Group multicast routing algorithms capable of satisfying the quality of service (QoS) requirements of multimedia applications are essential for high-speed networks. We present a heuristic algorithm for group multicast routing with an end-to-end delay constraint. Source-specific routing trees for each member are generated in our algorithm, which satisfy members' bandwidth and end-to-end delay requirements. Simulations over random networks were carried out to compare the proposed algorithm's performance with that of Low and Song's. The experimental results show that our proposed algorithm performs better in terms of network cost and ability to construct feasible multicast trees for group members. Moreover, our algorithm achieves good performance in balancing traffic, which can avoid link blocking and enhance network behavior efficiently.
Trees, bialgebras and intrinsic numerical algorithms
NASA Technical Reports Server (NTRS)
Crouch, Peter; Grossman, Robert; Larson, Richard
1990-01-01
Preliminary work about intrinsic numerical integrators evolving on groups is described. Fix a finite dimensional Lie group G; let g denote its Lie algebra, and let Y_1, ..., Y_N denote a basis of g. A class of numerical algorithms is presented that approximate solutions to differential equations evolving on G of the form: x'(t) = F(x(t)), x(0) = p ∈ G. The algorithms depend upon constants c_i and c_ij, for i = 1, ..., k and j < i. The algorithms have the property that if the algorithm starts on the group, then it remains on the group. In addition, they also have the property that if G is the abelian group R^N, then the algorithm becomes the classical Runge-Kutta algorithm. The Cayley algebra generated by labeled, ordered trees is used to generate the equations that the coefficients c_i and c_ij must satisfy in order for the algorithm to yield an rth order numerical integrator and to analyze the resulting algorithms.
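The stay-on-the-group property can be seen in the simplest member of this family, a Lie-Euler step that advances by group multiplication with an exponential. The sketch below is an illustration rather than the paper's scheme; it assumes G = SO(3), with Rodrigues' formula supplying the exponential map.

```python
import numpy as np

def hat(w):
    """Map a vector in R^3 to the corresponding skew matrix in so(3)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(w):
    """exp(hat(w)) via Rodrigues' rotation formula."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = hat(w / th)
    return np.eye(3) + np.sin(th) * k + (1 - np.cos(th)) * (k @ k)

def lie_euler(x0, omega_of, h, n_steps):
    """Integrate x'(t) = hat(omega(t)) x(t) with x(t) in SO(3)."""
    x, t = x0.copy(), 0.0
    for _ in range(n_steps):
        x = expm_so3(h * omega_of(t)) @ x  # group multiplication: stays in SO(3)
        t += h
    return x

x = lie_euler(np.eye(3), lambda t: np.array([0.0, 0.0, 1.0]), 1e-3, 1000)
print(np.allclose(x.T @ x, np.eye(3)))  # orthogonality preserved
```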
Qin, Jiahu; Fu, Weiming; Gao, Huijun; Zheng, Wei Xing
2016-03-03
This paper is concerned with developing a distributed k-means algorithm and a distributed fuzzy c-means algorithm for wireless sensor networks (WSNs) where each node is equipped with sensors. The underlying topology of the WSN is assumed to be strongly connected. The consensus algorithm in multiagent consensus theory is utilized to exchange the measurement information of the sensors in the WSN. To obtain a faster convergence speed as well as a higher likelihood of reaching the global optimum, a distributed k-means++ algorithm is first proposed to find the initial centroids before executing the distributed k-means algorithm and the distributed fuzzy c-means algorithm. The proposed distributed k-means algorithm is capable of partitioning the data observed by the nodes into measure-dependent groups which have small in-group and large out-group distances, while the proposed distributed fuzzy c-means algorithm is capable of partitioning the data observed by the nodes into different measure-dependent groups with degrees of membership values ranging from 0 to 1. Simulation results show that the proposed distributed algorithms can achieve almost the same results as those given by the centralized clustering algorithms.
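The seeding step the paper builds on is standard k-means++: each new centroid is picked with probability proportional to its squared distance from the nearest existing centroid. The centralized sketch below is a stand-in that omits the consensus-based information exchange making the paper's version distributed; data, cluster count, and iteration budget are illustrative.

```python
import numpy as np

def kmeans_pp_init(x, k, rng):
    """k-means++ seeding: favor points far from the chosen centroids."""
    centers = [x[rng.integers(len(x))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((x - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(x[rng.choice(len(x), p=d2 / d2.sum())])
    return np.array(centers)

def kmeans(x, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = kmeans_pp_init(x, k, rng)
    for _ in range(n_iter):  # Lloyd iterations
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels

rng = np.random.default_rng(1)
x = np.vstack([rng.normal(m, 0.3, (40, 2)) for m in (0, 3, 6)])
print(kmeans(x, 3)[0])
```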
An affine projection algorithm using grouping selection of input vectors
NASA Astrophysics Data System (ADS)
Shin, JaeWook; Kong, NamWoong; Park, PooGyeon
2011-10-01
This paper presents an affine projection algorithm (APA) using grouping-based selection of input vectors. To improve the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, the few input vectors that carry enough information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm has smaller steady-state estimation errors compared with existing algorithms.
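For reference, the conventional APA update that the grouping and selection procedures would feed into looks like the following sketch (a system-identification setting with a regularized pseudo-inverse); the filter order, projection order, and step size are illustrative assumptions.

```python
import numpy as np

def apa_identify(x, d, order=8, proj=4, mu=0.5, delta=1e-4):
    """Identify an FIR system from input x and desired output d via APA."""
    w = np.zeros(order)
    for k in range(order + proj, len(x)):
        # Columns are the last `proj` input regressor vectors.
        x_mat = np.column_stack([x[k - p - order + 1:k - p + 1][::-1]
                                 for p in range(proj)])
        e = d[k - proj + 1:k + 1][::-1] - x_mat.T @ w  # error over the window
        # w <- w + mu * X (X^T X + delta I)^{-1} e
        w += mu * x_mat @ np.linalg.solve(
            x_mat.T @ x_mat + delta * np.eye(proj), e)
    return w

rng = np.random.default_rng(0)
h_true = rng.normal(size=8)
x = rng.normal(size=5000)
d = np.convolve(x, h_true)[:len(x)]  # noiseless unknown system
print(np.allclose(apa_identify(x, d), h_true, atol=1e-2))
```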
Han, Zhaoying; Thornton-Wells, Tricia A.; Dykens, Elisabeth M.; Gore, John C.; Dawant, Benoit M.
2014-01-01
Deformation Based Morphometry (DBM) is a widely used method for characterizing anatomical differences across groups. DBM is based on the analysis of the deformation fields generated by non-rigid registration algorithms, which warp the individual volumes to a DBM atlas. Although several studies have compared non-rigid registration algorithms for segmentation tasks, few studies have compared the effect of the registration algorithms on group differences that may be uncovered through DBM. In this study, we compared group atlas creation and DBM results obtained with five well-established non-rigid registration algorithms using thirteen subjects with Williams Syndrome (WS) and thirteen Normal Control (NC) subjects. The five non-rigid registration algorithms include: (1) The Adaptive Bases Algorithm (ABA); (2) The Image Registration Toolkit (IRTK); (3) The FSL Nonlinear Image Registration Tool (FSL); (4) The Automatic Registration Tool (ART); and (5) the normalization algorithm available in SPM8. Results indicate that the choice of algorithm has little effect on the creation of group atlases. However, regions of differences between groups detected with DBM vary from algorithm to algorithm both qualitatively and quantitatively. The unique nature of the data set used in this study also permits comparison of visible anatomical differences between the groups and regions of difference detected by each algorithm. Results show that the interpretation of DBM results is difficult. Four out of the five algorithms we have evaluated detect bilateral differences between the two groups in the insular cortex, the basal ganglia, orbitofrontal cortex, as well as in the cerebellum. These correspond to differences that have been reported in the literature and that are visible in our samples. But our results also show that some algorithms detect regions that are not detected by the others and that the extent of the detected regions varies from algorithm to algorithm. These results suggest that using more than one algorithm when performing DBM studies would increase confidence in the results. Properties of the algorithms such as the similarity measure they maximize and the regularity of the deformation fields, as well as the location of differences detected with DBM, also need to be taken into account in the interpretation process. PMID:22459439
NASA Astrophysics Data System (ADS)
Motruk, Johannes; Pollmann, Frank
2017-10-01
We investigate the fate of hardcore bosons in a Harper-Hofstadter model which was experimentally realized by Aidelsburger et al. [Nat. Phys. 11, 162 (2015), 10.1038/nphys3171] at half-filling of the lowest band. We discuss the stability of an emergent fractional Chern insulator (FCI) state in a finite region of the phase diagram that is separated from a superfluid state by a first-order transition when tuning the band topology following the protocol used in the experiment. Since crossing a first-order transition is unfavorable for adiabatically preparing the FCI state, we extend the model to stabilize a featureless insulating state. The transition between this phase and the topological state proves to be continuous, providing a path in parameter space along which an FCI state could be adiabatically prepared. To further corroborate this statement, we perform time-dependent DMRG calculations which demonstrate that the FCI state may indeed be reached by adiabatically tuning a simple product state.
An algorithm to identify functional groups in organic molecules.
Ertl, Peter
2017-06-07
The concept of functional groups forms a basis of organic chemistry, medicinal chemistry, toxicity assessment, spectroscopy and also chemical nomenclature. All current software systems to identify functional groups are based on a predefined list of substructures. We are not aware of any program that can identify all functional groups in a molecule automatically. The algorithm presented in this article is an attempt to solve this scientific challenge. An algorithm to identify functional groups in a molecule based on iterative marching through its atoms is described. The procedure is illustrated by extracting functional groups from the bioactive portion of the ChEMBL database, resulting in identification of 3080 unique functional groups. A new algorithm to identify all functional groups in organic molecules is presented. The algorithm is relatively simple and full details with examples are provided, therefore implementation in any cheminformatics toolkit should be relatively easy. The new method allows the analysis of functional groups in large chemical databases in a way that was not possible using previous approaches.
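As a rough illustration of the marking-and-marching idea, here is a much-simplified sketch using RDKit (assumed available); the two marking rules below stand in for Ertl's considerably more extensive rule set, so treat them as illustrative only.

```python
from rdkit import Chem

def simple_functional_groups(smiles):
    mol = Chem.MolFromSmiles(smiles)
    marked = set()
    for atom in mol.GetAtoms():
        if atom.GetSymbol() not in ("C", "H"):  # toy rule 1: heteroatoms
            marked.add(atom.GetIdx())
    for bond in mol.GetBonds():
        # Toy rule 2: atoms joined by non-aromatic multiple bonds.
        if bond.GetBondType() != Chem.BondType.SINGLE and not bond.GetIsAromatic():
            marked.update({bond.GetBeginAtomIdx(), bond.GetEndAtomIdx()})
    # March through marked atoms, merging neighbors into connected groups.
    groups, seen = [], set()
    for idx in marked:
        if idx in seen:
            continue
        stack, group = [idx], set()
        while stack:
            i = stack.pop()
            if i in group:
                continue
            group.add(i)
            seen.add(i)
            stack.extend(n.GetIdx()
                         for n in mol.GetAtomWithIdx(i).GetNeighbors()
                         if n.GetIdx() in marked)
        groups.append(group)
    return groups

print(simple_functional_groups("CC(=O)OC1=CC=CC=C1C(=O)O"))  # aspirin
```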
PCA-LBG-based algorithms for VQ codebook generation
NASA Astrophysics Data System (ADS)
Tsai, Jinn-Tsong; Yang, Po-Yuan
2015-04-01
Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of their principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector of each group. The LBG algorithm then refines a codebook starting from the initial vectors that the PCA step supplies. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of variables that are not linearly correlated. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithms are expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results than existing methods reported in the literature.
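A sketch of the PCA-LBG-Median variant as described: sort training vectors by their first-principal-component projection, split them into equal groups, seed the codebook with each group's median, then refine with standard LBG (Lloyd) iterations. Data shapes and iteration counts are illustrative.

```python
import numpy as np

def pca_lbg_median(train, k, n_iter=30):
    centered = train - train.mean(axis=0)
    # First principal component from the SVD of the centered data.
    pc1 = np.linalg.svd(centered, full_matrices=False)[2][0]
    order = np.argsort(centered @ pc1)
    groups = np.array_split(order, k)
    # Seed: the median vector of each projection-sorted group.
    codebook = np.array([np.median(train[g], axis=0) for g in groups])
    for _ in range(n_iter):  # LBG refinement (Lloyd iterations)
        labels = np.argmin(((train[:, None] - codebook) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = train[labels == j].mean(axis=0)
    return codebook

train = np.random.default_rng(0).normal(size=(500, 16))  # stand-in image vectors
print(pca_lbg_median(train, k=8).shape)
```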
Hirose, Hitoshi; Sarosiek, Konrad; Cavarocchi, Nicholas C
2014-01-01
Gastrointestinal bleed (GIB) is a known complication in patients receiving nonpulsatile ventricular assist devices (VAD). Previously, we reported a new algorithm for the workup of GIB in VAD patients using deep bowel enteroscopy. In this new algorithm, patients underwent fewer procedures, received fewer transfusions, and took less time to reach a diagnosis than the traditional GIB algorithm group. Concurrently, we reviewed the cost-effectiveness of this new algorithm compared with the traditional workup. The procedure charges for the diagnosis and treatment of each episode of GIB were ~$2,902 in the new algorithm group versus ~$9,013 in the traditional algorithm group (p < 0.0001). Following the new algorithm in VAD patients with GIB resulted in fewer transfusions and diagnostic tests while attaining a substantial cost savings per episode of bleeding.
The serial message-passing schedule for LDPC decoding algorithms
NASA Astrophysics Data System (ADS)
Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue
2015-12-01
The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. It has the disadvantage that updated messages cannot be used until the next iteration, thus reducing the convergence speed. To address this, the layered decoding algorithm (LBP), based on a serial message-passing schedule, was proposed. In this paper the decoding principle of the LBP algorithm is briefly introduced, and two improved algorithms are then proposed: the grouped serial decoding algorithm (Grouped LBP) and the semi-serial decoding algorithm. They improve the LBP algorithm's decoding speed while maintaining good decoding performance.
Koltz, Peter F; Frey, Jordan D; Bell, Derek E; Girotto, John A; Christiano, Jose G; Langstein, Howard N
2013-11-01
Ventral hernia repair (VHR) continues to evolve and now frequently includes some form of component separation (CS) for large defects. To determine the optimal technique for VHR, we evaluated our outcomes before and after we refined and simplified our algorithm for repair. One hundred five consecutive patients undergoing VHR for large midline hernias over 9 years were examined. Patients were divided into those operated on after (group 1) and before (group 2) the institution of our simplified algorithm. Our algorithm emphasizes careful patient selection and a stepwise approach including, but not limited to, bilateral CS if appropriate, preservation of large perforators, retrorectus mesh placement as appropriate, linea alba or midline fascial closure, and vertical panniculectomy. Primary outcomes evaluated included wound infection, dehiscence, and hernia recurrence. Seventy-eight (74.3%) patients underwent repair using our algorithm (group 1), whereas 27 (25.7%) underwent repair before utilization of this algorithm (group 2). Ninety-eight (93.3%) underwent CS, whereas 7 (6.7%) underwent another form of VHR. There was no significant difference in patient age or defect size. The mean follow-up periods for patients in groups 1 and 2 were 184.02 and 526.06 days, respectively (P < 0.001). Hernia recurrence in group 1 was 2.6% versus 29.6% in group 2 (P < 0.001). The incidence of wound infection in group 1 was 10.3%, whereas that in group 2 was 33.3% (P < 0.001). The rate of wound dehiscence in group 1 was 17.9% versus 25.9% in group 2 (P < 0.001). Simplifying and unifying our algorithm for VHR, notably with utilization of CS, has yielded improved results. Recurrence and wound healing complications using this approach are favorable compared with published outcomes.
NASA Astrophysics Data System (ADS)
Huang, Ding-jiang; Ivanova, Nataliya M.
2016-02-01
In this paper, we explain in more detail the modern treatment of the problem of group classification of (systems of) partial differential equations (PDEs) from the algorithmic point of view. More precisely, we revise the classical Lie algorithm for constructing symmetries of differential equations, describe the group classification algorithm and discuss the process of reducing (systems of) PDEs to (systems of) equations with a smaller number of independent variables in order to construct invariant solutions. The group classification algorithm and reduction process are illustrated by the example of the generalized Zakharov-Kuznetsov (GZK) equations of the form u_t + (F(u))_xxx + (G(u))_xyy + (H(u))_x = 0. As a result, a complete group classification of the GZK equations is performed and a number of new interesting nonlinear invariant models which have non-trivial invariance algebras are obtained. Lie symmetry reductions and exact solutions for two important invariant models, i.e., the classical and modified Zakharov-Kuznetsov equations, are constructed. The algorithmic framework for group analysis of differential equations presented in this paper can also be applied to other nonlinear PDEs.
NASA Astrophysics Data System (ADS)
Liu, Chong-xin; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Tian, Qing-hua; Tian, Feng; Wang, Yong-jun; Rao, Lan; Mao, Yaya; Li, Deng-ao
2018-01-01
During the last decade, the orthogonal frequency division multiplexing radio-over-fiber (OFDM-ROF) system with adaptive modulation technology has attracted great interest due to its capability of raising the spectral efficiency dramatically, reducing the effects of the fiber link or wireless channel, and improving the communication quality. In this study, based on a theoretical analysis of nonlinear distortion and frequency-selective fading on the transmitted signal, a low-complexity adaptive modulation algorithm is proposed in combination with sub-carrier grouping technology. This algorithm achieves the optimal performance of the system by calculating the average combined signal-to-noise ratio of each group and dynamically adjusting the modulation format according to the preset threshold and the user's requirements. At the same time, the algorithm takes the sub-carrier group as the smallest unit in both the initial bit allocation and the subsequent bit adjustment, so its complexity is only 1/M (where M is the number of sub-carriers in each group) of that of the Fischer algorithm, which is much smaller than many classic adaptive modulation algorithms, such as the Hughes-Hartogs and Chow algorithms, and is in line with the development direction of green, high-speed communication. Simulation results show that the performance of the OFDM-ROF system with the improved algorithm is much better than that without adaptive modulation, and the BER of the former is 10 to 100 times lower than that of the latter as SNR values get larger. This low-complexity adaptive modulation algorithm is thus extremely useful for the OFDM-ROF system.
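The grouping idea reduces to one modulation decision per block of sub-carriers rather than one per carrier. A toy sketch, with invented SNR thresholds and group size standing in for the paper's preset values:

```python
import numpy as np

def assign_modulation(snr_db, group_size=8,
                      thresholds=((22.0, "64QAM"), (16.0, "16QAM"),
                                  (9.0, "QPSK"), (3.0, "BPSK"))):
    """Pick one modulation format per sub-carrier group from its average SNR."""
    groups = np.array_split(snr_db, len(snr_db) // group_size)
    plan = []
    for g in groups:
        avg = g.mean()  # one decision per group, not per carrier
        fmt = next((m for t, m in thresholds if avg >= t), "off")
        plan.append((round(avg, 1), fmt))
    return plan

rng = np.random.default_rng(0)
snr = 18 + 8 * np.sin(np.linspace(0, 3, 64)) + rng.normal(0, 2, 64)
print(assign_modulation(snr))
```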
Double-Group Particle Swarm Optimization and Its Application in Remote Sensing Image Segmentation
Shen, Liang; Huang, Xiaotao; Fan, Chongyi
2018-01-01
Particle Swarm Optimization (PSO) is a well-known meta-heuristic. It has been widely used in both research and engineering fields. However, the original PSO generally suffers from premature convergence, especially in multimodal problems. In this paper, we propose a double-group PSO (DG-PSO) algorithm to improve the performance. DG-PSO uses a double-group based evolution framework. The individuals are divided into two groups: an advantaged group and a disadvantaged group. The advantaged group works according to the original PSO, while two new strategies are developed for the disadvantaged group. The proposed algorithm is firstly evaluated by comparing it with the other five popular PSO variants and two state-of-the-art meta-heuristics on various benchmark functions. The results demonstrate that DG-PSO shows a remarkable performance in terms of accuracy and stability. Then, we apply DG-PSO to multilevel thresholding for remote sensing image segmentation. The results show that the proposed algorithm outperforms five other popular algorithms in meta-heuristic-based multilevel thresholding, which verifies the effectiveness of the proposed algorithm. PMID:29724013
A pragmatic evidence-based clinical management algorithm for burning mouth syndrome.
Kim, Yohanan; Yoo, Timothy; Han, Peter; Liu, Yuan; Inman, Jared C
2018-04-01
Burning mouth syndrome is a poorly understood disease process with no current standard of treatment. The goal of this article is to provide an evidence-based, practical, clinical algorithm as a guideline for the treatment of burning mouth syndrome. Using available evidence and clinical experience, a multi-step management algorithm was developed. A retrospective cohort study was then performed, following STROBE statement guidelines, comparing outcomes of patients who were managed using the algorithm and those who were managed without. Forty-seven patients were included in the study, with 21 (45%) managed using the algorithm and 26 (55%) managed without. The mean age overall was 60.4 ± 16.5 years, and most patients (39, 83%) were female. Cohorts showed no statistical difference in age, sex, overall follow-up time, dysgeusia, geographic tongue, or psychiatric disorder; xerostomia, however, was significantly different, skewed toward the algorithm group. Significantly more non-algorithm patients did not continue care (69% vs. 29%, p = 0.001). The odds ratio of not continuing care for the non-algorithm group compared to the algorithm group was 5.6 [1.6, 19.8]. Improvement in pain was significantly more likely in the algorithm group (p = 0.001), with an odds ratio of 27.5 [3.1, 242.0]. We present a basic clinical management algorithm for burning mouth syndrome which may increase the likelihood of pain improvement and patient follow-up. Key words: Burning mouth syndrome, burning tongue, glossodynia, oral pain, oral burning, therapy, treatment.
Blooming Trees: Substructures and Surrounding Groups of Galaxy Clusters
NASA Astrophysics Data System (ADS)
Yu, Heng; Diaferio, Antonaldo; Serra, Ana Laura; Baldi, Marco
2018-06-01
We develop the Blooming Tree Algorithm, a new technique that uses spectroscopic redshift data alone to identify the substructures and the surrounding groups of galaxy clusters, along with their member galaxies. Based on the estimated binding energy of galaxy pairs, the algorithm builds a binary tree that hierarchically arranges all of the galaxies in the field of view. The algorithm searches for buds, corresponding to gravitational potential minima on the binary tree branches; for each bud, the algorithm combines the number of galaxies, their velocity dispersion, and their average pairwise distance into a parameter that discriminates between the buds that do not correspond to any substructure or group, and thus eventually die, and the buds that correspond to substructures and groups, and thus bloom into the identified structures. We test our new algorithm with a sample of 300 mock redshift surveys of clusters in different dynamical states; the clusters are extracted from a large cosmological N-body simulation of a ΛCDM model. We limit our analysis to substructures and surrounding groups identified in the simulation with mass larger than 10^13 h^-1 M_⊙. With mock redshift surveys with 200 galaxies within 6 h^-1 Mpc from the cluster center, the technique recovers 80% of the real substructures and 60% of the surrounding groups; in 57% of the identified structures, at least 60% of the member galaxies of the substructures and groups belong to the same real structure. These results improve by roughly a factor of two the performance of the best substructure identification algorithm currently available, the σ plateau algorithm, and suggest that our Blooming Tree Algorithm can be an invaluable tool for detecting substructures of galaxy clusters and investigating their complex dynamics.
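The tree-building step can be caricatured by feeding pairwise binding-energy estimates into a single-linkage hierarchy, as below; the mass scale, the energy formula, the energy-to-distance shift, and the flat cut that replaces the bud search are all illustrative assumptions, not the published algorithm.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def blooming_tree_sketch(xy_mpc, v_kms, mass=1e12, g=4.30e-9):
    """g is Newton's constant in Mpc (km/s)^2 / M_sun."""
    n = len(v_kms)
    energy = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            r = max(np.linalg.norm(xy_mpc[i] - xy_mpc[j]), 1e-3)
            dv = v_kms[i] - v_kms[j]
            # Toy pairwise energy: relative kinetic term minus potential.
            energy[i, j] = energy[j, i] = 0.25 * dv ** 2 - g * mass / r
    dist = energy - energy.min() + 1e-12  # shift so "most bound" = smallest
    np.fill_diagonal(dist, 0.0)
    tree = linkage(squareform(dist, checks=False), method="single")
    return fcluster(tree, t=5, criterion="maxclust")  # flat cut, not a bud search

rng = np.random.default_rng(0)
xy = np.vstack([rng.normal(c, 0.3, (30, 2)) for c in (0.0, 2.0)])
v = np.concatenate([rng.normal(0, 200, 30), rng.normal(500, 200, 30)])
print(blooming_tree_sketch(xy, v))
```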
Liu, Haorui; Yi, Fengyan; Yang, Heli
2016-01-01
The shuffled frog leaping algorithm (SFLA) easily falls into a local optimum when solving multimodal function optimization problems, which impacts its accuracy and convergence speed. Therefore, this paper presents a grouped SFLA for solving continuous optimization problems, combining it with the cloud model's favorable ability to transform between qualitative and quantitative representations. The algorithm divides the definition domain into several groups and gives each group a set of frogs. Frogs of each region search in their memeplex, and in the search process the algorithm uses the "elite strategy" to update the location information of existing elite frogs through the cloud model algorithm. This method narrows the search space and can effectively escape local optima; thus convergence speed and accuracy can be significantly improved. The results of computer simulation confirm this conclusion. PMID:26819584
Terra, Ricardo Mingarini; Waisberg, Daniel Reis; de Almeida, José Luiz Jesus; Devido, Marcela Santana; Pêgo-Fernandes, Paulo Manuel; Jatene, Fabio Biscegli
2012-01-01
OBJECTIVE: We aimed to evaluate whether the inclusion of videothoracoscopy in a pleural empyema treatment algorithm would change the clinical outcome of such patients. METHODS: This study performed quality-improvement research. We conducted a retrospective review of patients who underwent pleural decortication for pleural empyema at our institution from 2002 to 2008. With the old algorithm (January 2002 to September 2005), open decortication was the procedure of choice, and videothoracoscopy was only performed in certain sporadic mid-stage cases. With the new algorithm (October 2005 to December 2008), videothoracoscopy became the first-line treatment option, whereas open decortication was only performed in patients with a thick pleural peel (>2 cm) observed by chest scan. The patients were divided into an old algorithm (n = 93) and new algorithm (n = 113) group and compared. The main outcome variables assessed included treatment failure (pleural space reintervention or death up to 60 days after medical discharge) and the occurrence of complications. RESULTS: Videothoracoscopy and open decortication were performed in 13 and 80 patients from the old algorithm group and in 81 and 32 patients from the new algorithm group, respectively (p<0.01). The patients in the new algorithm group were older (41±1 vs. 46.3±16.7 years, p = 0.014) and had higher Charlson Comorbidity Index scores [0(0-3) vs. 2(0-4), p = 0.032]. The occurrence of treatment failure was similar in both groups (19.35% vs. 24.77%, p = 0.35), although the complication rate was lower in the new algorithm group (48.3% vs. 33.6%, p = 0.04). CONCLUSIONS: The wider use of videothoracoscopy in pleural empyema treatment was associated with fewer complications and unaltered rates of mortality and reoperation even though more severely ill patients were subjected to videothoracoscopic surgery. PMID:22760892
A pragmatic evidence-based clinical management algorithm for burning mouth syndrome
Yoo, Timothy; Han, Peter; Liu, Yuan; Inman, Jared C.
2018-01-01
Background Burning mouth syndrome is a poorly understood disease process with no current standard of treatment. The goal of this article is to provide an evidence-based, practical, clinical algorithm as a guideline for the treatment of burning mouth syndrome. Material and Methods Using available evidence and clinical experience, a multi-step management algorithm was developed. A retrospective cohort study was then performed, following STROBE statement guidelines, comparing outcomes of patients who were managed using the algorithm and those who were managed without. Results Forty-seven patients were included in the study, with 21 (45%) managed using the algorithm and 26 (55%) managed without. The mean age overall was 60.4 ±16.5 years, and most patients (39, 83%) were female. Cohorts showed no statistical difference in age, sex, overall follow-up time, dysgeusia, geographic tongue, or psychiatric disorder; xerostomia, however, was significantly different, skewed toward the algorithm group. Significantly more non-algorithm patients did not continue care (69% vs. 29%, p=0.001). The odds ratio of not continuing care for the non-algorithm group compared to the algorithm group was 5.6 [1.6, 19.8]. Improvement in pain was significantly more likely in the algorithm group (p=0.001), with an odds ratio of 27.5 [3.1, 242.0]. Conclusions We present a basic clinical management algorithm for burning mouth syndrome which may increase the likelihood of pain improvement and patient follow-up. Key words:Burning mouth syndrome, burning tongue, glossodynia, oral pain, oral burning, therapy, treatment. PMID:29750091
Computer Aided Synthesis or Measurement Schemes for Telemetry applications
1997-09-02
5.2.5. Frame structure generation The algorithm generating the frame structure should take as inputs the sampling frequency requirements of the channels...these channels into the frame structure. Generally there can be a lot of ways to divide channels among groups. The algorithm implemented in...groups) first. The algorithm uses the function "try_permutation" recursively to distribute channels among the groups, and the function "try_subtable
Group Counseling Optimization: A Novel Approach
NASA Astrophysics Data System (ADS)
Eita, M. A.; Fahmy, M. M.
A new population-based search algorithm, which we call Group Counseling Optimizer (GCO), is presented. It mimics the group counseling behavior of humans in solving their problems. The algorithm is tested using seven known benchmark functions: Sphere, Rosenbrock, Griewank, Rastrigin, Ackley, Weierstrass, and Schwefel functions. A comparison is made with the recently published comprehensive learning particle swarm optimizer (CLPSO). The results demonstrate the efficiency and robustness of the proposed algorithm.
Xu, Hang; Su, Shi; Tang, Wuji; Wei, Meng; Wang, Tao; Wang, Dongjin; Ge, Weihong
2015-09-01
A large number of warfarin pharmacogenetics algorithms have been published. Our research aimed to evaluate the performance of selected pharmacogenetic algorithms in patients undergoing heart valve replacement and heart valvuloplasty during the initial and stable phases of anticoagulation treatment. Ten pharmacogenetic algorithms were selected by searching PubMed. We compared the performance of the selected algorithms in a cohort of 193 patients during the initial and stable phases of anticoagulation therapy. Predicted dose was compared to therapeutic dose using the percentage of predicted doses that fall within a 20% threshold of the actual dose (percentage within 20%) and the mean absolute error (MAE). The average warfarin dose was 3.05 ± 1.23 mg/day for initial treatment and 3.45 ± 1.18 mg/day for stable treatment. The percentages of the predicted dose within 20% of the therapeutic dose were 44.0 ± 8.8% and 44.6 ± 9.7% for the initial and stable phases, respectively. The MAEs of the selected algorithms were 0.85 ± 0.18 mg/day and 0.93 ± 0.19 mg/day, respectively. All algorithms had better performance in the ideal group than in the low-dose and high-dose groups. The only exception is the Wadelius et al. algorithm, which had better performance in the high-dose group. The algorithms had similar performance except for the Wadelius et al. and Miao et al. algorithms, which had poor accuracy in our study cohort. The Gage et al. algorithm had better performance in both the initial and stable phases of treatment. Algorithms had relatively higher accuracy in the >50 years group of patients in the stable phase.
Yue, Lei; Guan, Zailin; Saif, Ullah; Zhang, Fei; Wang, Hao
2016-01-01
Group scheduling is significant for efficient and cost-effective production systems. However, there exist setup times between groups, which need to be reduced by sequencing the groups efficiently. The current research focuses on a sequence-dependent group scheduling problem with the aim of simultaneously minimizing the makespan and the total weighted tardiness. In most production scheduling problems, the processing time of jobs is assumed to be fixed. However, the actual processing time of jobs may be reduced due to the "learning effect". The integration of sequence-dependent group scheduling problems with learning effects has rarely been considered in the literature. Therefore, the current research considers a single machine group scheduling problem with sequence-dependent setup times and learning effects simultaneously. A novel hybrid Pareto artificial bee colony algorithm (HPABC), incorporating some steps of the genetic algorithm, is proposed for the current problem to obtain Pareto solutions. Furthermore, five different sizes of test problems (small, small medium, medium, large medium, large) are tested using the proposed HPABC. The Taguchi method is used to tune the effective parameters of the proposed HPABC for each problem category. The performance of HPABC is compared with three famous multi-objective optimization algorithms: the improved strength Pareto evolutionary algorithm (SPEA2), the non-dominated sorting genetic algorithm II (NSGAII) and the particle swarm optimization algorithm (PSO). Results indicate that HPABC outperforms SPEA2, NSGAII and PSO and gives better Pareto optimal solutions in terms of diversity and quality for almost all the instances of the different sizes of problems.
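Whatever the search heuristic, the returned set is the nondominated front over the two objectives. A tiny dominance filter over assumed (makespan, total weighted tardiness) pairs, independent of how the candidate schedules were generated:

```python
def pareto_front(points):
    """Keep points not dominated by any other (minimize both objectives)."""
    front = []
    for p in points:
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return sorted(front)

# Hypothetical (makespan, tardiness) pairs for candidate schedules.
candidates = [(120, 45), (110, 60), (130, 30), (115, 50), (110, 55), (125, 30)]
print(pareto_front(candidates))  # [(110, 55), (115, 50), (120, 45), (125, 30)]
```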
Procedure of Partitioning Data Into Number of Data Sets or Data Group - A Review
NASA Astrophysics Data System (ADS)
Kim, Tai-Hoon
The goal of clustering is to decompose a dataset into similar groups based on an objective function. Several well-established algorithms exist for data clustering. The objective of these data clustering algorithms is to divide the data points of the feature space into a number of groups (or classes) so that a predefined set of criteria is satisfied. The article presents a comparative study of the effectiveness and efficiency of traditional data clustering algorithms. For evaluating the performance of the clustering algorithms, the Minkowski score is used here for different data sets.
Dynamic Group Formation Based on a Natural Phenomenon
ERIC Educational Resources Information Center
Zedadra, Amina; Lafifi, Yacine; Zedadra, Ouarda
2016-01-01
This paper presents a new approach to grouping learners in collaborative learning systems. The grouping process is based on traces left by learners. The goal is circular dynamic grouping for carrying out collaborative projects. The proposed approach consists of two main algorithms: (1) the circular grouping algorithm and (2) the dynamic grouping…
Inter-method Performance Study of Tumor Volumetry Assessment on Computed Tomography Test-retest Data
Buckler, Andrew J.; Danagoulian, Jovanna; Johnson, Kjell; Peskin, Adele; Gavrielides, Marios A.; Petrick, Nicholas; Obuchowski, Nancy A.; Beaumont, Hubert; Hadjiiski, Lubomir; Jarecha, Rudresh; Kuhnigk, Jan-Martin; Mantri, Ninad; McNitt-Gray, Michael; Moltz, Jan Hendrik; Nyiri, Gergely; Peterson, Sam; Tervé, Pierre; Tietjen, Christian; von Lavante, Etienne; Ma, Xiaonan; Pierre, Samantha St.; Athelogou, Maria
2015-01-01
Rationale and objectives Tumor volume change has potential as a biomarker for diagnosis, therapy planning, and treatment response. Precision was evaluated and compared among semi-automated lung tumor volume measurement algorithms from clinical thoracic CT datasets. The results inform approaches and testing requirements for establishing conformance with the Quantitative Imaging Biomarker Alliance (QIBA) CT Volumetry Profile. Materials and Methods Industry and academic groups participated in a challenge study. Intra-algorithm repeatability and inter-algorithm reproducibility were estimated. Relative magnitudes of various sources of variability were estimated using a linear mixed effects model. Segmentation boundaries were compared to provide a basis on which to optimize algorithm performance for developers. Results Intra-algorithm repeatability ranged from 13% (best performing) to 100% (least performing), with most algorithms demonstrating improved repeatability as the tumor size increased. Inter-algorithm reproducibility was determined in three partitions and found to be 58% for the four best performing groups, 70% for the set of groups meeting repeatability requirements, and 84% when all groups but the least performer were included. The best performing partition performed markedly better on tumors with equivalent diameters above 40 mm. Larger tumors benefited from human editing but smaller tumors did not. One-fifth to one-half of the total variability came from sources independent of the algorithms. Segmentation boundaries differed substantially, not just in overall volume but in detail. Conclusions Nine of the twelve participating algorithms passed precision requirements similar to those indicated in the QIBA Profile, with the caveat that the current study was not designed to explicitly evaluate algorithm Profile conformance. Change in tumor volume can be measured with confidence to within ±14% using any of these nine algorithms on tumor sizes above 10 mm. No partition of the algorithms was able to meet the QIBA requirements for interchangeability down to 10 mm, though the partition comprised of the best performing algorithms did meet this requirement above a tumor size of approximately 40 mm. PMID:26376841
NASA Astrophysics Data System (ADS)
Lu, Yuan-Yuan; Wang, Ji-Bo; Ji, Ping; He, Hongyu
2017-09-01
In this article, single-machine group scheduling with learning effects and convex resource allocation is studied. The goal is to find the optimal job schedule, the optimal group schedule, and resource allocations of jobs and groups. For the problem of minimizing the makespan subject to limited resource availability, it is proved that the problem can be solved in polynomial time under the condition that the setup times of groups are independent. For the general setup times of groups, a heuristic algorithm and a branch-and-bound algorithm are proposed, respectively. Computational experiments show that the performance of the heuristic algorithm is fairly accurate in obtaining near-optimal solutions.
Harju, Inka; Lange, Christoph; Kostrzewa, Markus; Maier, Thomas; Rantakokko-Jalava, Kaisu; Haanperä, Marjo
2017-03-01
Reliable distinction of Streptococcus pneumoniae and viridans group streptococci is important because of the different pathogenic properties of these organisms. Differentiation between S. pneumoniae and closely related Streptococcus mitis species group streptococci has always been challenging, even when using such modern methods as 16S rRNA gene sequencing or matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry. In this study, a novel algorithm combined with an enhanced database was evaluated for differentiation between S. pneumoniae and S. mitis species group streptococci. One hundred one clinical S. mitis species group streptococcal strains and 188 clinical S. pneumoniae strains were identified by both the standard MALDI Biotyper database alone and that combined with a novel algorithm. The database update from 4,613 strains to 5,627 strains drastically improved the differentiation of S. pneumoniae and S. mitis species group streptococci: when the new database version containing 5,627 strains was used, only one of the 101 S. mitis species group isolates was misidentified as S. pneumoniae, whereas 66 of them were misidentified as S. pneumoniae when the earlier 4,613-strain MALDI Biotyper database version was used. The updated MALDI Biotyper database combined with the novel algorithm showed even better performance, producing no misidentifications of the S. mitis species group strains as S. pneumoniae. All S. pneumoniae strains were correctly identified as S. pneumoniae with both the standard MALDI Biotyper database and the standard MALDI Biotyper database combined with the novel algorithm. This new algorithm thus enables reliable differentiation between pneumococci and other S. mitis species group streptococci with the MALDI Biotyper.
Effects of rooting via out-groups on in-group topology in phylogeny.
Ackerman, Margareta; Brown, Daniel G; Loker, David
2014-01-01
Users of phylogenetic methods require rooted trees, because the direction of time depends on the placement of the root. While phylogenetic trees are typically rooted by using an out-group, this mechanism is inappropriate when the addition of an out-group changes the in-group topology. We perform a formal analysis of phylogenetic algorithms under the inclusion of distant out-groups. It turns out that linkage-based algorithms (including UPGMA) and a class of bisecting methods do not modify the topology of the in-group when an out-group is included. By contrast, the popular neighbour joining algorithm fails this property in a strong sense: every data set can have its structure destroyed by some arbitrarily distant outlier. Furthermore, including multiple outliers can lead to an arbitrary topology on the in-group. The standard rooting approach that uses out-groups may be fundamentally unsuited for neighbour joining.
An extended affinity propagation clustering method based on different data density types.
Zhao, XiuLi; Xu, WeiXiang
2015-01-01
The affinity propagation (AP) algorithm, a novel clustering method, does not require users to specify the initial cluster centers in advance; it regards all data points equally as potential exemplars (cluster centers) and forms the clusters solely from the degree of similarity among the data points. But in many cases there exist areas of different density within the same data set, which means that the data set is not distributed homogeneously. In such situations the AP algorithm cannot group the data points into ideal clusters. In this paper, we propose an extended AP clustering algorithm to deal with this problem. There are two steps in our method: first, the data set is partitioned into several data density types according to the nearest-neighbour distance of each data point; then the AP clustering method is used to group the data points into clusters within each data density type. Two experiments are carried out to evaluate the performance of our algorithm: one utilizes an artificial data set and the other uses a real seismic data set. The experimental results show that groups are obtained more accurately by our algorithm than by OPTICS and the AP clustering algorithm itself.
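A condensed sketch of the two-step scheme: split points into density types by nearest-neighbour distance, then run standard affinity propagation (scikit-learn's implementation, assumed available) within each type. The single median-based threshold below is an illustrative stand-in for the paper's partitioning rule.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.neighbors import NearestNeighbors

def density_typed_ap(x):
    # Nearest-neighbour distance of each point (column 0 is the point itself).
    nn_dist = NearestNeighbors(n_neighbors=2).fit(x).kneighbors(x)[0][:, 1]
    dense = nn_dist <= np.median(nn_dist)  # two density types
    labels = np.full(len(x), -1)
    offset = 0
    for mask in (dense, ~dense):
        ap = AffinityPropagation(random_state=0).fit(x[mask])
        labels[mask] = ap.labels_ + offset
        offset = labels.max() + 1  # keep cluster ids disjoint across types
    return labels

rng = np.random.default_rng(0)
tight = rng.normal(0, 0.1, (60, 2))  # a dense region
loose = rng.normal(3, 1.0, (60, 2))  # a sparse region
print(np.unique(density_typed_ap(np.vstack([tight, loose]))))
```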
Grude, Nils; Lindbaek, Morten
2015-01-01
Objective. To compare the clinical outcome of patients presenting with symptoms of uncomplicated cystitis who were seen by a doctor, with patients who were given treatment following a diagnostic algorithm. Design. Randomized controlled trial. Setting. Out-of-hours service, Oslo, Norway. Intervention. Women with typical symptoms of uncomplicated cystitis were included in the trial in the time period September 2010–November 2011. They were randomized into two groups. One group received standard treatment according to the diagnostic algorithm, the other group received treatment after a regular consultation by a doctor. Subjects. Women (n = 441) aged 16–55 years. Mean age in both groups 27 years. Main outcome measures. Number of days until symptomatic resolution. Results. No significant differences were found between the groups in the basic patient demographics, severity of symptoms, or percentage of urine samples with single culture growth. A median of three days until symptomatic resolution was found in both groups. By day four 79% in the algorithm group and 72% in the regular consultation group were free of symptoms (p = 0.09). The number of patients who contacted a doctor again in the follow-up period and received alternative antibiotic treatment was not significantly higher (p = 0.08) after regular consultation than after treatment according to the diagnostic algorithm. There were no cases of severe pyelonephritis or hospital admissions during the follow-up period. Conclusion. Using a diagnostic algorithm is a safe and efficient method for treating women with symptoms of uncomplicated cystitis at an out-of-hours service. This simplification of treatment strategy can lead to a more rational use of consultation time and a stricter adherence to National Antibiotic Guidelines for a common disorder. PMID:25961367
Bollestad, Marianne; Grude, Nils; Lindbaek, Morten
2015-06-01
To compare the clinical outcome of patients presenting with symptoms of uncomplicated cystitis who were seen by a doctor, with patients who were given treatment following a diagnostic algorithm. Randomized controlled trial. Out-of-hours service, Oslo, Norway. Women with typical symptoms of uncomplicated cystitis were included in the trial in the time period September 2010-November 2011. They were randomized into two groups. One group received standard treatment according to the diagnostic algorithm, the other group received treatment after a regular consultation by a doctor. Women (n = 441) aged 16-55 years. Mean age in both groups 27 years. Number of days until symptomatic resolution. No significant differences were found between the groups in the basic patient demographics, severity of symptoms, or percentage of urine samples with single culture growth. A median of three days until symptomatic resolution was found in both groups. By day four 79% in the algorithm group and 72% in the regular consultation group were free of symptoms (p = 0.09). The number of patients who contacted a doctor again in the follow-up period and received alternative antibiotic treatment was not significantly higher (p = 0.08) after regular consultation than after treatment according to the diagnostic algorithm. There were no cases of severe pyelonephritis or hospital admissions during the follow-up period. Using a diagnostic algorithm is a safe and efficient method for treating women with symptoms of uncomplicated cystitis at an out-of-hours service. This simplification of treatment strategy can lead to a more rational use of consultation time and a stricter adherence to National Antibiotic Guidelines for a common disorder.
NASA Technical Reports Server (NTRS)
Falkowski, Paul G.; Behrenfeld, Michael J.; Esaias, Wayne E.; Balch, William; Campbell, Janet W.; Iverson, Richard L.; Kiefer, Dale A.; Morel, Andre; Yoder, James A.; Hooker, Stanford B. (Editor);
1998-01-01
Two issues regarding primary productivity, as it pertains to the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Program and the National Aeronautics and Space Administration (NASA) Mission to Planet Earth (MTPE) are presented in this volume. Chapter 1 describes the development of a science plan for deriving primary production for the world ocean using satellite measurements, by the Ocean Primary Productivity Working Group (OPPWG). Chapter 2 presents discussions by the same group, of algorithm classification, algorithm parameterization and data availability, algorithm testing and validation, and the benefits of a consensus primary productivity algorithm.
A new approach of data clustering using a flock of agents.
Picarougne, Fabien; Azzag, Hanene; Venturini, Gilles; Guinot, Christiane
2007-01-01
This paper presents a new bio-inspired algorithm (FClust) that dynamically creates and visualizes groups of data. This algorithm uses the concept of a flock of agents that move together in a complex manner following simple local rules. Each agent represents one data item. The agents move together in a 2D environment with the aim of creating homogeneous groups of data. These groups are visualized in real time, and help the domain expert to understand the underlying structure of the data set, such as a realistic number of classes, clusters of similar data, or isolated data items. We also present several extensions of this algorithm, which reduce its computational cost and make use of a 3D display. The algorithm is then tested on artificial and real-world data, and a heuristic algorithm is used to evaluate the relevance of the obtained partitioning.
ERIC Educational Resources Information Center
Stanford Univ., CA. School Mathematics Study Group.
This is the second unit of a 15-unit School Mathematics Study Group (SMSG) mathematics text for high school students. Topics presented in the first chapter (Informal Algorithms and Flow Charts) include: changing a flat tire; algorithms, flow charts, and computers; assignment and variables; input and output; using a variable as a counter; decisions…
NASA Astrophysics Data System (ADS)
Xie, ChengJun; Xu, Lin
2008-03-01
This paper presents an algorithm based on a mixing transform with wave-band grouping to eliminate spectral redundancy. The algorithm adapts to the varying correlation between different spectral-band images, and it still works well when the band number is not a power of 2. It uses a non-boundary-extension CDF(2,2) DWT and a subtraction mixing transform to eliminate spectral redundancy, a CDF(2,2) DWT to eliminate spatial redundancy, and SPIHT+CABAC for compression coding; the experiments show that a satisfactory lossless compression result can be achieved. Using the hyperspectral image Canal from the American JPL laboratory as the data set for the lossless compression test, when the band number is not a power of 2 the lossless compression results of this algorithm are much better than those acquired by JPEG-LS, WinZip, ARJ, DPCM, the research achievements of a research team of the Chinese Academy of Sciences, Minimum Spanning Tree, and Near Minimum Spanning Tree; on average, the compression ratio of this algorithm exceeds the above algorithms by 41%, 37%, 35%, 29%, 16%, 10%, and 8%, respectively. When the band number is a power of 2, for 128 frames of the image Canal we tested groupings of 8, 16, and 32 bands per group; considering factors such as compression storage complexity, the type of wave band, and the compression effect, we suggest using 8 bands per group to achieve a better compression effect. The algorithm also has advantages in operation speed and convenience of hardware realization.
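The spectral and spatial decorrelation steps above are built on the CDF(2,2) wavelet. Below is a minimal sketch, assuming a 1-D even-length integer signal and simple boundary replication rather than the paper's non-boundary-extension scheme, of one integer lifting level of the CDF(2,2) (5/3) DWT.

```python
import numpy as np

def cdf22_forward(x):
    """One lifting level of the integer CDF(2,2) DWT on an even-length 1-D signal."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: detail = odd sample minus rounded average of neighbouring evens.
    right = np.roll(even, -1)
    right[-1] = even[-1]                  # simplified boundary replication
    d = odd - ((even + right) >> 1)
    # Update step: smooth = even sample plus rounded average of neighbouring details.
    left = np.roll(d, 1)
    left[0] = d[0]
    s = even + ((left + d + 2) >> 2)
    return s, d                           # low-pass and high-pass subbands
```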
Efficient methods for overlapping group lasso.
Yuan, Lei; Liu, Jun; Ye, Jieping
2013-09-01
The group Lasso is an extension of the Lasso for feature selection on (predefined) nonoverlapping groups of features. The nonoverlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving the smooth and convex dual problem, which allows the use of the gradient descent type of algorithms for the optimization. Our methods and theoretical results are then generalized to tackle the general overlapping group Lasso formulation based on the l(q) norm. We further extend our algorithm to solve a nonconvex overlapping group Lasso formulation based on the capped norm regularization, which reduces the estimation bias introduced by the convex penalty. We have performed empirical evaluations using both a synthetic and the breast cancer gene expression dataset, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results show that the proposed algorithm is more efficient than existing state-of-the-art algorithms. Results also demonstrate the effectiveness of the nonconvex formulation for overlapping group Lasso.
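As a concrete illustration of the group-shrinkage idea, here is a minimal sketch, not the authors' solver, of the proximal operator for the simpler non-overlapping group Lasso via block soft-thresholding; the overlapping case treated in the paper is instead computed by solving the smooth convex dual problem.

```python
import numpy as np

def prox_group_lasso(v, groups, lam):
    """argmin_x 0.5*||x - v||^2 + lam * sum_g ||x_g||_2 for disjoint groups."""
    x = np.zeros_like(v, dtype=float)
    for g in groups:                           # g: index array of one group
        norm = np.linalg.norm(v[g])
        if norm > lam:
            x[g] = (1.0 - lam / norm) * v[g]   # shrink the whole block at once
    return x

# Example: the weaker group is zeroed out entirely, the stronger one is shrunk.
v = np.array([3.0, 4.0, 0.1, -0.2])
print(prox_group_lasso(v, [np.array([0, 1]), np.array([2, 3])], lam=1.0))
```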
The effects of implementing a nutritional support algorithm in critically ill medical patients.
Sungur, Gonul; Sahin, Habibe; Tasci, Sultan
2015-08-01
To determine the effect of an enteral nutrition algorithm on nutritional support in critically ill medical patients. The quasi-experimental study was conducted at a medical Intensive Care Unit of a university hospital in the central Anatolia region of Turkey from June to December 2008. The patients were divided into two equal groups: the historical group was fed according to routine clinical practice, while the study group was fed according to the enteral nutrition algorithm. Prior to collecting data, nurses were trained interactively about enteral nutrition and the nutritional support algorithm. The nutrition of the study group was directed by the nurses. Data were recorded during 3 days of care. SPSS 22 was used for statistical analysis. The 40 patients in the study were divided into two equal groups of 20 (50%) each. The energy intake of the study group was 62% of the prescribed energy requirement on the 1st day, 68.5% on the 2nd and 63% on the 3rd day, whereas in the historical group 38%, 56.5% and 60% of the prescribed energy requirement were met. The consumed energy of the historical group on the 1st, 2nd and 3rd days was significantly different (p=0.020). In the study group, serum total protein and albumin levels decreased significantly (p<0.05), but pre-albumin and fasting blood glucose levels did not change between the 1st and 4th days. In the historical group, none of the serum parameters changed. Enteral nutrition-induced complications and the duration of stay in the intensive care unit were not significantly different between the groups (p>0.05). The use of standard algorithms for enteral nutrition may be an effective way to meet the nutritional requirements of patients.
The effect of explanations on mathematical reasoning tasks
NASA Astrophysics Data System (ADS)
Norqvist, Mathias
2018-01-01
Studies in mathematics education often point to the necessity for students to engage in more cognitively demanding activities than just solving tasks by applying given solution methods. Previous studies have shown that students who engage in creative mathematically founded reasoning to construct a solution method perform significantly better in follow-up tests than students who are given a solution method and engage in algorithmic reasoning. However, teachers and textbooks, at least occasionally, provide explanations together with an algorithmic method, and this could possibly be more efficient than creative reasoning. In this study, three matched groups practiced with either creative, algorithmic, or explained algorithmic tasks. The main finding was that students who practiced with creative tasks outperformed the students who practiced with explained algorithmic tasks in a post-test, despite a much lower practice score. The two groups that were given a solution method performed similarly in both practice and post-test, even though one group received an explanation of the given solution method. Additionally, there were some differences between the groups in which variables predicted the post-test score.
The Texas medication algorithm project: clinical results for schizophrenia.
Miller, Alexander L; Crismon, M Lynn; Rush, A John; Chiles, John; Kashner, T Michael; Toprac, Marcia; Carmody, Thomas; Biggs, Melanie; Shores-Wilson, Kathy; Chiles, Judith; Witte, Brad; Bow-Thomas, Christine; Velligan, Dawn I; Trivedi, Madhukar; Suppes, Trisha; Shon, Steven
2004-01-01
In the Texas Medication Algorithm Project (TMAP), patients were given algorithm-guided treatment (ALGO) or treatment as usual (TAU). The ALGO intervention included a clinical coordinator to assist the physicians and administer a patient and family education program. The primary comparison in the schizophrenia module of TMAP was between patients seen in clinics in which ALGO was used (n = 165) and patients seen in clinics in which no algorithms were used (n = 144). A third group of patients, seen in clinics using an algorithm for bipolar or major depressive disorder but not for schizophrenia, was also studied (n = 156). The ALGO group had modestly greater improvement in symptoms (Brief Psychiatric Rating Scale) during the first quarter of treatment. The TAU group caught up by the end of 12 months. Cognitive functions were more improved in ALGO than in TAU at 3 months, and this difference was greater at 9 months (the final cognitive assessment). In secondary comparisons of ALGO with the second TAU group, the greater improvement in cognitive functioning was again noted, but the initial symptom difference was not significant.
Research on fully distributed optical fiber sensing security system localization algorithm
NASA Astrophysics Data System (ADS)
Wu, Xu; Hou, Jiacheng; Liu, Kun; Liu, Tiegen
2013-12-01
A new fully distributed optical fiber sensing and location technology based on Mach-Zehnder interferometers is studied. For this security system, a new climbing-point locating algorithm based on the short-time average zero-crossing rate is presented. By calculating the zero-crossing rates of multiple groups of data separately, it not only exploits the advantages of frequency-analysis methods to determine the most effective data group more accurately, but also meets the requirements of a real-time monitoring system. Supplemented by a short-term energy calculation for each group of signals, the most effective data group can be quickly picked out. Finally, the accurate location of the climbing point is obtained through a cross-correlation localization algorithm. The experimental results show that the proposed algorithm can accurately locate the climbing point while effectively filtering out external interference noise from non-climbing behavior.
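A minimal sketch of the three signal-processing ingredients named above, assuming a synthetic sampled signal and an illustrative window size (this is not the system's code):

```python
import numpy as np

def short_time_zcr(x, win):
    """Average zero-crossing rate of each non-overlapping window of x."""
    n = len(x) // win
    frames = x[:n * win].reshape(n, win)
    signs = np.sign(frames)
    return np.abs(np.diff(signs, axis=1)).sum(axis=1) / (2.0 * win)

def short_time_energy(x, win):
    """Mean energy of each non-overlapping window, used to confirm the pick."""
    n = len(x) // win
    return (x[:n * win].reshape(n, win) ** 2).mean(axis=1)

def delay_by_xcorr(a, b):
    """Relative sample delay between two interferometer outputs."""
    c = np.correlate(a - a.mean(), b - b.mean(), mode="full")
    return np.argmax(c) - (len(b) - 1)
```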
Smelter, Andrey; Rouchka, Eric C; Moseley, Hunter N B
2017-08-01
Peak lists derived from nuclear magnetic resonance (NMR) spectra are commonly used as input data for a variety of computer-assisted and automated analyses. These include automated protein resonance assignment and protein structure calculation software tools. Prior to these analyses, peak lists must be aligned to each other and sets of related peaks must be grouped based on common chemical shift dimensions. Even when programs can perform peak grouping, they require the user to provide uniform match tolerances or use default values. However, peak grouping is further complicated by multiple sources of variance in peak position, limiting the effectiveness of grouping methods that utilize uniform match tolerances. In addition, no method currently exists for deriving peak positional variances from single peak lists for grouping peaks into spin systems, i.e. spin system grouping within a single peak list. Therefore, we developed a complementary pair of peak list registration analysis and spin system grouping algorithms designed to overcome these limitations. We have implemented these algorithms into an approach that can identify multiple dimension-specific positional variances that exist in a single peak list and group peaks from a single peak list into spin systems. The resulting software tools generate a variety of useful statistics on both a single peak list and pairwise peak list alignment, especially for quality assessment of peak list datasets. We used a range of low- and high-quality experimental solution NMR and solid-state NMR peak lists to assess the performance of our registration analysis and grouping algorithms. Analyses show that an algorithm using a single iteration and uniform match tolerances is only able to recover from 50 to 80% of the spin systems due to the presence of multiple sources of variance. Our algorithm recovers additional spin systems by reevaluating match tolerances in multiple iterations. To facilitate evaluation of the algorithms, we developed a peak list simulator within our nmrstarlib package that generates user-defined assigned peak lists from a given BMRB entry or database of entries. In addition, over 100,000 simulated peak lists with one or two sources of variance were generated to evaluate the performance and robustness of these new registration analysis and peak grouping algorithms.
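A single-pass, tolerance-based grouping step conveys the core operation. The sketch below is not the published tool and omits the iterative tolerance reevaluation; it merges peaks whose shifts agree within dimension-specific match tolerances.

```python
import numpy as np

def group_peaks(peaks, tol):
    """peaks: (n, d) array of chemical shifts; tol: length-d match tolerances."""
    groups = []                     # each group: list of peak indices
    centers = []                    # running mean shift of each group
    for i, p in enumerate(peaks):
        for g, c in zip(groups, centers):
            if np.all(np.abs(p - c) <= tol):
                g.append(i)
                c[:] = peaks[g].mean(axis=0)    # update the group center
                break
        else:                       # no existing group matched: start a new one
            groups.append([i])
            centers.append(np.array(p, dtype=float))
    return groups
```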
Time-aware service-classified spectrum defragmentation algorithm for flex-grid optical networks
NASA Astrophysics Data System (ADS)
Qiu, Yang; Xu, Jing
2018-01-01
By employing sophisticated routing and spectrum assignment (RSA) algorithms together with a finer spectrum granularity (namely, the frequency slot) in resource allocation procedures, flex-grid optical networks can accommodate diverse kinds of services with high spectrum-allocation flexibility and resource-utilization efficiency. However, the continuity and contiguity constraints in spectrum allocation procedures may induce isolated, small-sized, and unoccupied spectral blocks (known as spectrum fragments) in flex-grid optical networks. Although these spectrum fragments are left unoccupied, they can hardly be utilized directly by subsequent service requests because of their spectral characteristics and the constraints in spectrum allocation. In this way, the existence of spectrum fragments may exhaust the available spectrum resources for a coming service request and thus worsen the networking performance. Therefore, many reactive defragmentation algorithms have been proposed to handle the fragmented spectrum resources by re-optimizing the routing paths and the spectrum resources of existing services. But the routing-path and spectrum-resource re-optimization in reactive defragmentation algorithms may disrupt the traffic of existing services and require extra components. By comparison, some proactive defragmentation algorithms (e.g. fragmentation-aware algorithms) were proposed to suppress spectrum fragments at their generation instead of handling the fragmented spectrum resources afterwards. Although these proactive defragmentation algorithms induce no traffic disruption and require no extra components, they leave the generated spectrum fragments unhandled, which greatly limits their efficiency in spectrum defragmentation. In this paper, by comprehensively considering the characteristics of both the reactive and the proactive defragmentation algorithms, we propose a time-aware service-classified (TASC) spectrum defragmentation algorithm, which simultaneously employs proactive and reactive mechanisms to suppress spectrum fragments with awareness of the services' types and duration times. By dividing the spectrum resources into several flexible groups according to service type, and limiting both the spectrum allocation and the spectrum re-tuning for a certain service to one specific spectrum group according to its type, the proposed TASC defragmentation algorithm can not only suppress spectrum fragments from generation inside each spectrum group, but also handle the fragments generated between two adjacent groups. In this way, the proposed TASC algorithm achieves higher efficiency in suppressing spectrum fragments than both the reactive and the proactive defragmentation algorithms. Additionally, as the generation of spectrum fragments is restrained between spectrum groups and the defragmentation procedure is limited to each spectrum group, the traffic disruption induced for existing services can be reduced. Besides, the proposed TASC defragmentation algorithm always re-tunes the spectrum resources of the service with the maximum duration time first in the defragmentation procedure, which further reduces spectrum fragments, because services with longer duration times are more likely to induce spectrum fragments than services with shorter duration times.
The simulation results show that the proposed TASC defragmentation algorithm can significantly reduce the number of the generated spectrum fragments while improving the service blocking performance.
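A minimal sketch of the service-classified allocation at the heart of TASC; the two-class slot layout and the first-fit policy inside each group are illustrative assumptions, not the paper's simulator.

```python
from typing import List, Optional

def first_fit_in_group(occupied: List[bool], group: range, size: int) -> Optional[int]:
    """First start slot inside `group` with `size` contiguous free slots, else None."""
    run, start = 0, None
    for s in group:
        if not occupied[s]:
            if run == 0:
                start = s
            run += 1
            if run == size:
                return start
        else:
            run = 0
    return None

# Example: 16 slots, class A confined to slots 0-7, class B to slots 8-15.
slots = [False] * 16
groups = {"A": range(0, 8), "B": range(8, 16)}
start = first_fit_in_group(slots, groups["A"], size=3)
if start is not None:
    for s in range(start, start + 3):
        slots[s] = True            # allocation never crosses into B's group
```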
A survey on keeler’s theorem and application of symmetric group for swapping game
NASA Astrophysics Data System (ADS)
Pratama, Yohanssen; Prakasa, Yohenry
2017-01-01
An episode of Futurama features a two-body mind-switching machine that will not work more than once on the same pair of bodies. The problem is: can the switching be undone so as to restore all minds to their original bodies? Ken Keeler found an algorithm that undoes any mind-scrambling permutation, and Lihua Huang found a refinement of it. We look at how the puzzle can be modeled in terms of group theory, use the symmetric group to solve it, and look for the most efficient solution. We then build algorithms implementing these constructions as computer programs and examine how the notion of transpositions affects algorithmic complexity. The number of steps given by the algorithms differs, and one of the algorithms has an advantage in terms of efficiency. We compare the Ken Keeler and Lihua Huang algorithms to see whether there is any difference when they are run as computer programs, although their complexity may remain the same.
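For concreteness, here is a minimal sketch of a Keeler-style unscrambler, assuming the cycle-by-cycle variant of the construction with two fresh helper bodies x and y; the dict representation and helper names are illustrative, and the machine's no-repeat rule is enforced on every swap.

```python
def keeler_unscramble(state, x, y):
    """state: dict body -> mind currently in that body; x, y: fresh helpers."""
    state = dict(state)
    state[x], state[y] = x, y
    swaps, seen = [], set()

    def swap(a, b):
        assert frozenset((a, b)) not in seen, "machine refuses a repeated pair"
        seen.add(frozenset((a, b)))
        state[a], state[b] = state[b], state[a]
        swaps.append((a, b))

    done = set()
    for b in [k for k in state if k not in (x, y)]:
        if b in done or state[b] == b:
            continue
        cycle, cur = [], b             # body cycle[j] holds mind cycle[j+1]
        while cur not in cycle:
            cycle.append(cur)
            done.add(cur)
            cur = state[cur]
        swap(x, cycle[0])              # pull the first stray mind onto x
        for j in range(1, len(cycle)):
            swap(y, cycle[j])          # walk y along the rest of the cycle
        swap(x, cycle[1])
        swap(y, cycle[0])              # cycle fixed; x and y traded contents
    if state[x] != x:                  # after an odd number of cycles
        swap(x, y)
    assert all(state[b] == b for b in state)
    return swaps

# Example: undoing a 3-cycle takes five swaps plus the final (x, y) swap.
print(keeler_unscramble({1: 2, 2: 3, 3: 1}, "x", "y"))
```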
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, Garnet Kin-Lic
2017-04-30
This is the final technical report. We briefly describe some selected results below. Developments in density matrix embedding. DMET is a quantum embedding theory that we introduced at the beginning of the last funding period, around 2012-2013. Since the first DMET papers, which demonstrated proof-of-principle calculations on the Hubbard model and hydrogen rings, we have carried out a number of different developments, including: Extending the DMET technology to compute broken symmetry phases, including magnetic phases and superconductivity (Pub. 13); Calibrating the accuracy of DMET and its cluster size convergence against other methods, and formulation of a dynamical cluster analog (Pubs. 4, 10) (see Fig. 1); Implementing DMET for ab-initio molecular calculations, and exploring different self-consistency criteria (Pubs. 9, 14); Using embedding to define quantum-classical interfaces (Pub. 2); Formulating DMET for spectral functions (Pub. 7) (see Fig. 1); Extending DMET to coupled fermion-boson problems (Pub. 12). Together with these embedding developments, we have also implemented a wide variety of impurity solvers within our DMET framework, including DMRG (Pub. 3), AFQMC (Pub. 10), and coupled cluster theory (CC) (Pub. 9).
Inui, Hiroshi; Taketomi, Shuji; Nakamura, Kensuke; Sanada, Takaki; Tanaka, Sakae; Nakagawa, Takumi
2013-05-01
Few studies have demonstrated improvement in the accuracy of rotational alignment using image-free navigation systems, mainly due to the inconsistent registration of anatomical landmarks. We have used an image-free navigation system for total knee arthroplasty that adopts an average algorithm between two reference axes (the transepicondylar axis and the axis perpendicular to the Whiteside axis) for femoral component rotation control. We hypothesized that the addition of another axis (the condylar twisting axis measured on a preoperative radiograph) would improve the accuracy. One group using the two-axis average algorithm (double-axis group) was compared with another group in which the additional axis was used to check the average algorithm (triple-axis group). Femoral components were more accurately implanted for rotational alignment in the triple-axis group (ideal: triple-axis group 100%, double-axis group 82%, P<0.05). Copyright © 2013 Elsevier Inc. All rights reserved.
Effect of registration on corpus callosum population differences found with DBM analysis
NASA Astrophysics Data System (ADS)
Han, Zhaoying; Thornton-Wells, Tricia A.; Gore, John C.; Dawant, Benoit M.
2011-03-01
Deformation Based Morphometry (DBM) is a relatively new method used for characterizing anatomical differences among populations. DBM is based on the analysis of the deformation fields generated by non-rigid registration algorithms, which warp the individual volumes to one standard coordinate system. Although several studies have compared non-rigid registration algorithms for segmentation tasks, few studies have compared the effect of the registration algorithm on population differences that may be uncovered through DBM. In this study, we compared DBM results obtained with five well established non-rigid registration algorithms on the corpus callosum (CC) in thirteen subjects with Williams Syndrome (WS) and thirteen Normal Control (NC) subjects. The five non-rigid registration algorithms include: (1) The Adaptive Basis Algorithm (ABA); (2) Image Registration Toolkit (IRTK); (3) FSL Nonlinear Image Registration Tool (FSL); (4) Automatic Registration Tools (ART); and (5) the normalization algorithm available in SPM8. For each algorithm, the 3D deformation fields from all subjects to the atlas were obtained and used to calculate the Jacobian determinant (JAC) at each voxel in the mid-sagittal slice of the CC. The mean JAC maps for each group were compared quantitatively across different nonrigid registration algorithms. An ANOVA test performed on the means of the JAC over the Genu and the Splenium ROIs shows the JAC differences between nonrigid registration algorithms are statistically significant over the Genu for both groups and over the Splenium for the NC group. These results suggest that it is important to consider the effect of registration when using DBM to compute morphological differences in populations.
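The DBM quantity compared across the five algorithms is the Jacobian determinant of the deformation field. A minimal sketch for a 2-D slice, assuming a dense coordinate map phi of shape (2, H, W) (not any particular toolkit's output format):

```python
import numpy as np

def jacobian_determinant_2d(phi):
    """phi[0], phi[1]: mapped x- and y-coordinates at every voxel of the slice."""
    dphi0_dy, dphi0_dx = np.gradient(phi[0])   # axis 0 = rows (y), axis 1 = cols (x)
    dphi1_dy, dphi1_dx = np.gradient(phi[1])
    # Determinant of the 2x2 Jacobian: >1 marks local expansion, <1 contraction.
    return dphi0_dx * dphi1_dy - dphi0_dy * dphi1_dx
```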
Schoenberg, Mike R; Lange, Rael T; Saklofske, Donald H
2007-11-01
Establishing a comparison standard in neuropsychological assessment is crucial to determining change in function. There is no available method to estimate premorbid intellectual functioning for the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV). The WISC-IV provided normative data for both American and Canadian children aged 6 to 16 years old. This study developed regression algorithms as a proposed method to estimate full-scale intelligence quotient (FSIQ) for the Canadian WISC-IV. Participants were the Canadian WISC-IV standardization sample (n = 1,100). The sample was randomly divided into two groups (development and validation groups). The development group was used to generate regression algorithms; 1 algorithm only included demographics, and 11 combined demographic variables with WISC-IV subtest raw scores. The algorithms accounted for 18% to 70% of the variance in FSIQ (standard error of estimate, SEE = 8.6 to 14.2). Estimated FSIQ significantly correlated with actual FSIQ (r = .30 to .80), and the majority of individual FSIQ estimates were within +/-10 points of actual FSIQ. The demographic-only algorithm was less accurate than algorithms combining demographic variables with subtest raw scores. The current algorithms yielded accurate estimates of current FSIQ for Canadian individuals aged 6-16 years old. The potential application of the algorithms to estimate premorbid FSIQ is reviewed. While promising, clinical validation of the algorithms in a sample of children and/or adolescents with known neurological dysfunction is needed to establish these algorithms as a premorbid estimation procedure.
Robust Group Sparse Beamforming for Multicast Green Cloud-RAN With Imperfect CSI
NASA Astrophysics Data System (ADS)
Shi, Yuanming; Zhang, Jun; Letaief, Khaled B.
2015-09-01
In this paper, we investigate the network power minimization problem for the multicast cloud radio access network (Cloud-RAN) with imperfect channel state information (CSI). The key observation is that network power minimization can be achieved by adaptively selecting active remote radio heads (RRHs) via controlling the group-sparsity structure of the beamforming vector. However, this yields a non-convex combinatorial optimization problem, for which we propose a three-stage robust group sparse beamforming algorithm. In the first stage, a quadratic variational formulation of the weighted mixed l1/l2-norm is proposed to induce the group-sparsity structure in the aggregated beamforming vector, which indicates those RRHs that can be switched off. A perturbed alternating optimization algorithm is then proposed to solve the resultant non-convex group-sparsity inducing optimization problem by exploiting its convex substructures. In the second stage, we propose an algorithm based on the PhaseLift technique to solve the feasibility problem with a given active RRH set, which helps determine the active RRHs. Finally, the semidefinite relaxation (SDR) technique is adopted to determine the robust multicast beamformers. Simulation results demonstrate the convergence of the perturbed alternating optimization algorithm, as well as the effectiveness of the proposed algorithm to minimize the network power consumption for multicast Cloud-RAN.
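A minimal sketch (illustrative, not the paper's optimization stages) of the group-sparsity diagnostic behind the first stage: the weighted mixed l1/l2 norm sums per-RRH l2 norms of the aggregated beamformer, and RRHs whose group norm is driven toward zero are the candidates to switch off.

```python
import numpy as np

def weighted_mixed_norm(v_groups, weights):
    """v_groups: per-RRH beamforming blocks; weights: per-RRH weights."""
    return sum(w * np.linalg.norm(v) for v, w in zip(v_groups, weights))

def switch_off_order(v_groups, weights):
    """RRH indices ordered by ascending weighted group norm (switch-off first)."""
    scores = [w * np.linalg.norm(v) for v, w in zip(v_groups, weights)]
    return np.argsort(scores)
```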
Toward Developing an Unbiased Scoring Algorithm for "NASA" and Similar Ranking Tasks.
ERIC Educational Resources Information Center
Lane, Irving M.; And Others
1981-01-01
Presents both logical and empirical evidence to illustrate that the conventional scoring algorithm for ranking tasks significantly underestimates the initial level of group ability and that Slevin's alternative scoring algorithm significantly overestimates the initial level of ability. Presents a modification of Slevin's algorithm which authors…
NASA Astrophysics Data System (ADS)
Mononen, Mika E.; Tanska, Petri; Isaksson, Hanna; Korhonen, Rami K.
2016-02-01
We present a novel algorithm combined with computational modeling to simulate the development of knee osteoarthritis. The degeneration algorithm was based on excessive and cumulatively accumulated stresses within knee joint cartilage during physiological gait loading. In the algorithm, the collagen network stiffness of cartilage was reduced iteratively if excessive maximum principal stresses were observed. The developed algorithm was tested and validated against experimental baseline and 4-year follow-up Kellgren-Lawrence grades, indicating different levels of cartilage degeneration at the tibiofemoral contact region. Test groups consisted of normal weight and obese subjects with the same gender and similar age and height without osteoarthritic changes. The algorithm accurately simulated cartilage degeneration as compared to the Kellgren-Lawrence findings in the subject group with excess weight, while the healthy subject group’s joint remained intact. Furthermore, the developed algorithm followed the experimentally found trend of cartilage degeneration in the obese group (R2 = 0.95, p < 0.05; experiments vs. model), in which the rapid degeneration immediately after initiation of osteoarthritis (0-2 years, p < 0.001) was followed by a slow or negligible degeneration (2-4 years, p > 0.05). The proposed algorithm revealed a great potential to objectively simulate the progression of knee osteoarthritis.
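The degeneration loop can be sketched compactly. The threshold, reduction step, and stress_fn below are assumptions standing in for the paper's gait-loading finite-element simulation, not its actual parameters.

```python
import numpy as np

def degenerate(stiffness, stress_fn, threshold=7.0, step=0.05, n_iter=10):
    """stiffness: per-element collagen stiffness; stress_fn(stiffness) -> stress."""
    stiffness = stiffness.astype(float).copy()
    for _ in range(n_iter):
        stress = stress_fn(stiffness)      # max principal stress per element
        over = stress > threshold          # elements with excessive stress
        if not over.any():
            break                          # no further degeneration
        stiffness[over] *= (1.0 - step)    # iterative collagen stiffness reduction
    return stiffness
```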
KODA, MASAHIKO; TOKUNAGA, SHIHO; MATONO, TOMOMITSU; SUGIHARA, TAKAAKI; NAGAHARA, TAKAKAZU; MURAWAKI, YOSHIKAZU
2011-01-01
The purpose of the present study was to compare the size and configuration of the ablation zones created by SuperSlim and CoAccess electrodes, using various ablation algorithms in ex vivo bovine liver and in clinical cases. In the experimental study, we ablated explanted bovine liver using 2 types of electrodes and 4 ablation algorithms (combinations of incremental power supply, stepwise expansion and additional low-power ablation) and evaluated the ablation area and time. In the clinical study, we compared the ablation volume and the shape of the ablation zone between both electrodes in 23 hepatocellular carcinoma (HCC) cases with the best algorithm (incremental power supply, stepwise expansion and additional low-power ablation) as derived from the experimental study. In the experimental study, the ablation area and time by the CoAccess electrode were significantly greater compared to those by the SuperSlim electrode for the single-step (algorithm 1, p=0.0209 and 0.0325, respectively) and stepwise expansion algorithms (algorithm 2, p=0.0002 and <0.0001, respectively; algorithm 3, p= 0.006 and 0.0407, respectively). However, differences were not significant for the additional low-power ablation algorithm. In the clinical study, the ablation volume and time in the CoAccess group were significantly larger and longer, respectively, compared to those in the SuperSlim group (p=0.0242 and 0.009, respectively). Round ablation zones were acquired in 91.7% of the CoAccess group, while irregular ablation zones were obtained in 45.5% of the SuperSlim group (p=0.0428). In conclusion, the CoAccess electrode achieves larger and more uniform ablation zones compared with the SuperSlim electrode, though it requires longer ablation times in experimental and clinical studies. PMID:22977647
Application of the Hughes-LIU algorithm to the 2-dimensional heat equation
NASA Technical Reports Server (NTRS)
Malkus, D. S.; Reichmann, P. I.; Haftka, R. T.
1982-01-01
An implicit-explicit algorithm for the solution of transient problems in structural dynamics is described. The method involves dividing the finite elements into implicit and explicit groups while automatically satisfying the conditions. This algorithm is applied to the solution of the linear, transient, two-dimensional heat equation subject to an initial condition derived from the solution of a steady-state problem over an L-shaped region made up of a good conductor and an insulating material. Using the IIT/PRIME computer with virtual memory, a FORTRAN program was developed to make accuracy, stability, and cost comparisons among the fully explicit Euler, the Hughes-Liu, and the fully implicit Crank-Nicholson algorithms. The Hughes-Liu claim that the explicit group governs the stability of the entire region while maintaining the unconditional stability of the implicit group is illustrated.
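For reference, the two limiting schemes compared in the report can be sketched on a 1-D analogue of the heat equation; the implicit-explicit element partitioning of Hughes-Liu itself is omitted here, and this only contrasts the fully explicit Euler update with the unconditionally stable Crank-Nicholson update.

```python
import numpy as np

def heat_steps(u0, alpha, dx, dt, nsteps, scheme="cn"):
    """March u_t = alpha * u_xx with fixed (Dirichlet) boundary values."""
    u = u0.astype(float).copy()
    n = len(u)
    r = alpha * dt / dx**2
    # Second-difference operator; boundary rows stay zero so u[0], u[-1] are held.
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    I = np.eye(n)
    for _ in range(nsteps):
        if scheme == "euler":          # explicit: stable only for r <= 1/2
            u = u + r * (A @ u)
        else:                          # Crank-Nicholson: unconditionally stable
            u = np.linalg.solve(I - 0.5 * r * A, (I + 0.5 * r * A) @ u)
    return u
```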
A comparison of the fractal and JPEG algorithms
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Shahshahani, M.
1991-01-01
A proprietary fractal image compression algorithm and the Joint Photographic Experts Group (JPEG) industry standard algorithm for image compression are compared. In every case, the JPEG algorithm was superior to the fractal method at a given compression ratio according to a root mean square criterion and a peak signal to noise criterion.
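The two comparison criteria can be computed directly; a minimal sketch (an 8-bit peak value is assumed):

```python
import numpy as np

def rms_and_psnr(original, decoded, peak=255.0):
    """Root-mean-square error and peak signal-to-noise ratio between two images."""
    err = original.astype(float) - decoded.astype(float)
    rms = np.sqrt(np.mean(err ** 2))
    psnr = 20.0 * np.log10(peak / rms) if rms > 0 else float("inf")
    return rms, psnr
```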
Learning algorithms for human-machine interfaces.
Danziger, Zachary; Fishbach, Alon; Mussa-Ivaldi, Ferdinando A
2009-05-01
The goal of this study is to create and examine machine learning algorithms that adapt in a controlled and cadenced way to foster a harmonious learning environment between the user and the controlled device. To evaluate these algorithms, we have developed a simple experimental framework. Subjects wear an instrumented data glove that records finger motions. The high-dimensional glove signals remotely control the joint angles of a simulated planar two-link arm on a computer screen, which is used to acquire targets. A machine learning algorithm was applied to adaptively change the transformation between finger motion and the simulated robot arm. This algorithm was either LMS gradient descent or the Moore-Penrose (MP) pseudoinverse transformation. Both algorithms modified the glove-to-joint angle map so as to reduce the endpoint errors measured in past performance. The MP group performed worse than the control group (subjects not exposed to any machine learning), while the LMS group outperformed the control subjects. However, the LMS subjects failed to achieve better generalization than the control subjects, and after extensive training converged to the same level of performance as the control subjects. These results highlight the limitations of coadaptive learning using only endpoint error reduction.
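A toy sketch of the two update rules compared above, with synthetic data and assumed dimensions (19 glove sensors, 2 joint angles); the learning rate and trial count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(200, 19))          # glove signals (trials x sensors)
Q_target = rng.normal(size=(200, 2))    # desired joint angles per trial

# LMS gradient descent: incrementally descend the squared endpoint error.
W = np.zeros((19, 2))
eta = 1e-3
for g, q in zip(G, Q_target):
    err = g @ W - q                     # endpoint error for this trial
    W -= eta * np.outer(g, err)

# Moore-Penrose: one-shot least-squares refit of the whole map from past data.
W_mp = np.linalg.pinv(G) @ Q_target
```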
Learning Algorithms for Human–Machine Interfaces
Fishbach, Alon; Mussa-Ivaldi, Ferdinando A.
2012-01-01
The goal of this study is to create and examine machine learning algorithms that adapt in a controlled and cadenced way to foster a harmonious learning environment between the user and the controlled device. To evaluate these algorithms, we have developed a simple experimental framework. Subjects wear an instrumented data glove that records finger motions. The high-dimensional glove signals remotely control the joint angles of a simulated planar two-link arm on a computer screen, which is used to acquire targets. A machine learning algorithm was applied to adaptively change the transformation between finger motion and the simulated robot arm. This algorithm was either LMS gradient descent or the Moore–Penrose (MP) pseudoinverse transformation. Both algorithms modified the glove-to-joint angle map so as to reduce the endpoint errors measured in past performance. The MP group performed worse than the control group (subjects not exposed to any machine learning), while the LMS group outperformed the control subjects. However, the LMS subjects failed to achieve better generalization than the control subjects, and after extensive training converged to the same level of performance as the control subjects. These results highlight the limitations of coadaptive learning using only endpoint error reduction. PMID:19203886
The Texas Medication Algorithm Project antipsychotic algorithm for schizophrenia: 2006 update.
Moore, Troy A; Buchanan, Robert W; Buckley, Peter F; Chiles, John A; Conley, Robert R; Crismon, M Lynn; Essock, Susan M; Finnerty, Molly; Marder, Stephen R; Miller, Del D; McEvoy, Joseph P; Robinson, Delbert G; Schooler, Nina R; Shon, Steven P; Stroup, T Scott; Miller, Alexander L
2007-11-01
A panel of academic psychiatrists and pharmacists, clinicians from the Texas public mental health system, advocates, and consumers met in June 2006 in Dallas, Tex., to review recent evidence in the pharmacologic treatment of schizophrenia. The goal of the consensus conference was to update and revise the Texas Medication Algorithm Project (TMAP) algorithm for schizophrenia used in the Texas Implementation of Medication Algorithms, a statewide quality assurance program for treatment of major psychiatric illness. Four questions were identified via premeeting teleconferences. (1) Should antipsychotic treatment of first-episode schizophrenia be different from that of multiepisode schizophrenia? (2) In which algorithm stages should first-generation antipsychotics (FGAs) be an option? (3) How many antipsychotic trials should precede a clozapine trial? (4) What is the status of augmentation strategies for clozapine? Subgroups reviewed the evidence in each area and presented their findings at the conference. The algorithm was updated to incorporate the following recommendations. (1) Persons with first-episode schizophrenia typically require lower antipsychotic doses and are more sensitive to side effects such as weight gain and extrapyramidal symptoms (group consensus). Second-generation antipsychotics (SGAs) are preferred for treatment of first-episode schizophrenia (majority opinion). (2) FGAs should be included in algorithm stages after first episode that include SGAs other than clozapine as options (group consensus). (3) The recommended number of trials of other antipsychotics that should precede a clozapine trial is 2, but earlier use of clozapine should be considered in the presence of persistent problems such as suicidality, comorbid violence, and substance abuse (group consensus). (4) Augmentation is reasonable for persons with inadequate response to clozapine, but published results on augmenting agents have not identified replicable positive results (group consensus). These recommendations are meant to provide a framework for clinical decision making, not to replace clinical judgment. As with any algorithm, treatment practices will evolve beyond the recommendations of this consensus conference as new evidence and additional medications become available.
Kang, J E; Yu, J M; Choi, J H; Chung, I-M; Pyun, W B; Kim, S A; Lee, E K; Han, N Y; Yoon, J-H; Oh, J M; Rhie, S J
2018-06-01
Drug therapies are critical for preventing secondary complications in acute coronary syndrome (ACS). The purpose of this study was to develop and apply a pharmaceutical care service (PCS) algorithm for ACS and confirm that it is applicable through a prospective clinical trial. The ACS-PCS algorithm was developed according to extant evidence-based treatment and pharmaceutical care guidelines. Quality assurance was conducted through two methods: literature comparison and expert panel evaluation. The literature comparison was used to compare the content of the algorithm with the referenced guidelines. Expert evaluations were conducted by nine experts for 75 questionnaire items. A trial was conducted to confirm its effectiveness. Seventy-nine patients were assigned to either the pharmacist-included multidisciplinary team care (MTC) group or the usual care (UC) group. The endpoints of the trial were the prescription rate of two important drugs, readmission, emergency room (ER) visit and mortality. The main frame of the algorithm was structured with three tasks: medication reconciliation, medication optimization and transition of care. The contents and context of the algorithm were compliant with class I recommendations and the main service items from the evidence-based guidelines. Opinions from the expert panel were mostly positive. There were significant differences in beta-blocker prescription rates in the overall period (P = .013) and ER visits (four cases, 9.76%, P = .016) in the MTC group compared to the UC group, respectively. We developed a PCS algorithm for ACS based on the contents of evidence-based drug therapy and the core concept of pharmacist services. © 2018 John Wiley & Sons Ltd.
Multiagent Reinforcement Learning With Sparse Interactions by Negotiation and Knowledge Transfer.
Zhou, Luowei; Yang, Pei; Chen, Chunlin; Gao, Yang
2017-05-01
Reinforcement learning has significant applications for multiagent systems, especially in unknown dynamic environments. However, most multiagent reinforcement learning (MARL) algorithms suffer from such problems as exponential computation complexity in the joint state-action space, which makes it difficult to scale up to realistic multiagent problems. In this paper, a novel algorithm named negotiation-based MARL with sparse interactions (NegoSI) is presented. In contrast to traditional sparse-interaction-based MARL algorithms, NegoSI adopts the equilibrium concept and makes it possible for agents to select the nonstrict equilibrium-dominating strategy profile (nonstrict EDSP) or meta equilibrium for their joint actions. The presented NegoSI algorithm consists of four parts: 1) the equilibrium-based framework for sparse interactions; 2) the negotiation for the equilibrium set; 3) the minimum variance method for selecting one joint action; and 4) the knowledge transfer of local Q-values. In this integrated algorithm, three techniques, i.e., unshared value functions, equilibrium solutions, and sparse interactions, are adopted to achieve privacy protection, better coordination and lower computational complexity, respectively. To evaluate the performance of the presented NegoSI algorithm, two groups of experiments are carried out regarding three criteria: 1) steps of each episode; 2) rewards of each episode; and 3) average runtime. The first group of experiments is conducted using six grid world games and shows fast convergence and high scalability of the presented algorithm. Then in the second group of experiments NegoSI is applied to an intelligent warehouse problem and simulated results demonstrate the effectiveness of the presented NegoSI algorithm compared with other state-of-the-art MARL algorithms.
Wang, Wendy T J; Olson, Sharon L; Campbell, Anne H; Hanten, William P; Gleeson, Peggy B
2003-03-01
The purpose of this study was to determine the effectiveness of an individualized physical therapy intervention in treating neck pain based on a clinical reasoning algorithm. Treatment effectiveness was examined by assessing changes in impairment, physical performance, and disability in response to intervention. One treatment group of 30 patients with neck pain completed physical therapy treatment. The control group of convenience was formed by a cohort group of 27 subjects who also had neck pain but did not receive treatment for various reasons. There were no significant differences between groups in demographic data and the initial test scores of the outcome measures. A quasi-experimental, nonequivalent, pretest-posttest control group design was used. A physical therapist rendered an eclectic intervention to the treatment group based on a clinical decision-making algorithm. Treatment outcome measures included the following five dependent variables: cervical range of motion, numeric pain rating, timed weighted overhead endurance, the supine capital flexion endurance test, and the Patient Specific Functional Scale. Both the treatment and control groups completed the initial and follow-up examinations, with an average duration of 4 wk between tests. Five mixed analyses of variance with follow-up tests showed a significant difference for all outcome measures in the treatment group compared with the control group. After an average 4 wk of physical therapy intervention, patients in the treatment group demonstrated statistically significant increases of cervical range of motion, decrease of pain, increases of physical performance measures, and decreases in the level of disability. The control group showed no differences in all five outcome variables between the initial and follow-up test scores. This study delineated algorithm-based clinical reasoning strategies for evaluating and treating patients with cervical pain. The algorithm can help clinicians classify patients with cervical pain into clinical patterns and provides pattern-specific guidelines for physical therapy interventions. An organized and specific physical therapy program was effective in improving the status of patients with neck pain.
Weigel, Ralf; Schlickum, Linda; Weisser, Gerald; Krauss, Joachim K
2015-01-01
Surgical treatment for chronic subdural haematoma (CSH) has been analysed by applying evidence-based medicine (EBM) criteria earlier. Whether implementation of EBM-derived key factors into an optimised treatment algorithm would improve outcome, however, needs to be clarified. Symptomatic patients with CSH who fulfilled the inclusion criteria were either assigned to an optimised treatment algorithm (OA-EBM group) or to a control group treated by the standard departmental surgical technique (SDST group) in a prospective design. For the OA-EBM algorithm only one burr hole, extensive intraoperative irrigation and a closed system drainage with meticulous avoidance of entry of air was mandatory. A two-catheter technique was used to reduce intracavital air. Final endpoints were neurological outcome (Markwalder Score), recurrence and the amount of intracranial air. A total of 93 out of 117 patients were evaluated accounting for 113 cases because 20 patients had bilateral haematomas. Demographic data of 68 cases in the SDST group did not differ from 45 cases in the OA-EBM group. The Markwalder Score showed greater improvement in the OA-EBM group (0.5 ± 0.6 vs. 1.0 ± 1.0, p = 0.003). The recurrence rate was 18% (12 patients) in the SDST group versus 2% (1 patient) in the OA-EBM group (p < 0.05). The amount of intracranial air was significantly lower in the OA-EBM group (3.3 ± 5.0 cm(3) vs. 5.2 ± 7.7 cm(3)) with p = 0.04. In the standard group computerised tomography scanning was performed slightly earlier (3 ± 1.7 days vs. 3.6 ± 1.4 days). When comparing only non-recurrent cases in both groups no significant difference was apparent. Implementation of EBM key factors into a treatment algorithm for CSH can improve neurological outcome in a typical neurosurgical department, reduce recurrence and minimise the amount of postoperative air within the haematoma cavity.
A multi-group firefly algorithm for numerical optimization
NASA Astrophysics Data System (ADS)
Tong, Nan; Fu, Qiang; Zhong, Caiming; Wang, Pengjun
2017-08-01
To address the premature convergence of the firefly algorithm (FA), this paper analyzes the evolution mechanism of the algorithm and proposes an improved firefly algorithm based on a modified evolution model and a multi-group learning mechanism (IMGFA). The firefly colony is divided into several subgroups with different model parameters. Within each subgroup, the optimal firefly leads the other fireflies in the early global evolution and establishes mutual information exchange among them. Each firefly then performs a local search by following the brighter fireflies in its neighborhood. At the same time, a learning mechanism among the best fireflies of the various subgroups, through which they exchange information, helps the population reach the global optimization goals more effectively. Experimental results verify the effectiveness of the proposed algorithm.
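A minimal sketch of one multi-group step using the standard firefly move; the subgroup split is the only structural addition here, and the parameter values are illustrative rather than those of IMGFA.

```python
import numpy as np

def firefly_step(X, f, groups, beta0=1.0, gamma=1.0, alpha=0.1, rng=None):
    """X: (n, d) positions; f: objective to minimize; groups: index arrays."""
    rng = rng or np.random.default_rng()
    X = X.copy()
    for g in groups:                                   # evolve each subgroup
        order = sorted(g, key=lambda i: f(X[i]))       # brighter = lower f
        for i in g:
            for j in order:
                if f(X[j]) < f(X[i]):                  # move toward brighter firefly
                    r2 = np.sum((X[j] - X[i]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) \
                            + alpha * (rng.random(X.shape[1]) - 0.5)
    return X
```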
Analysis of retinal and cortical components of Retinex algorithms
NASA Astrophysics Data System (ADS)
Yeonan-Kim, Jihyun; Bertalmío, Marcelo
2017-05-01
Following Land and McCann's first proposal of the Retinex theory, numerous Retinex algorithms that differ considerably both algorithmically and functionally have been developed. We clarify the relationships among various Retinex families by associating their spatial processing structures to the neural organizations in the retina and the primary visual cortex in the brain. Some of the Retinex algorithms have a retina-like processing structure (Land's designator idea and NASA Retinex), and some show a close connection with the cortical structures in the primary visual area of the brain (two-dimensional L&M Retinex). A third group of Retinexes (the variational Retinex) manifests an explicit algorithmic relation to Wilson-Cowan's physiological model. We intend to overview these three groups of Retinexes with the frame of reference in the biological visual mechanisms.
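As a concrete anchor for the center/surround, retina-like family mentioned above, here is a minimal single-scale sketch; the surround scale sigma is an illustrative choice, and this is a generic sketch rather than any specific published Retinex.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=30.0):
    """Log ratio of each pixel to its Gaussian-blurred surround (grayscale)."""
    img = image.astype(float) + 1.0           # avoid log(0)
    surround = gaussian_filter(img, sigma)    # local average illumination
    return np.log(img) - np.log(surround)     # reflectance-like component
```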
An Algorithm for Creating Virtual Controls Using Integrated and Harmonized Longitudinal Data.
Hansen, William B; Chen, Shyh-Huei; Saldana, Santiago; Ip, Edward H
2018-06-01
We introduce a strategy for creating virtual control groups: cases generated through computer algorithms that, when aggregated, may serve as experimental comparators where live controls are difficult to recruit, such as when programs are widely disseminated and randomization is not feasible. We integrated and harmonized data from eight archived longitudinal adolescent-focused data sets spanning the decades from 1980 to 2010. Collectively, these studies examined numerous psychosocial variables and assessed past 30-day alcohol, cigarette, and marijuana use. Additional treatment and control group data from two archived randomized control trials were used to test the virtual control algorithm. Both randomized controlled trials (RCTs) assessed intentions, normative beliefs, and values as well as past 30-day alcohol, cigarette, and marijuana use. We developed an algorithm that used percentile scores from the integrated data set to create age- and gender-specific latent psychosocial scores. The algorithm matched treatment case observed psychosocial scores at pretest to create a virtual control case that figuratively "matured" based on age-related changes, holding the virtual case's percentile constant. Virtual controls matched treatment case occurrence, eliminating differential attrition as a threat to validity. Virtual case substance use was estimated from the virtual case's latent psychosocial score using logistic regression coefficients derived from analyzing the treatment group. Averaging across virtual cases created group estimates of prevalence. Two criteria were established to evaluate the adequacy of virtual control cases: (1) virtual control group pretest drug prevalence rates should match those of the treatment group and (2) virtual control group patterns of drug prevalence over time should match live controls. The algorithm successfully matched pretest prevalence for both RCTs. Increases in prevalence were observed, although there were discrepancies between live and virtual control outcomes. This study provides an initial framework for creating virtual controls using a step-by-step procedure that can now be revised and validated using other prevention trial data.
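The percentile-hold step and the logistic read-out can be sketched as follows; the reference samples and coefficients are hypothetical placeholders for the harmonized data set and the treatment-group fit.

```python
import numpy as np

def percentile_hold(score, ref_now, ref_next):
    """Carry a pretest score to the next age at the same percentile."""
    pct = np.mean(np.asarray(ref_now) <= score)   # percentile at current age
    return np.quantile(ref_next, pct)             # matched score, next age

def use_probability(latent, b0=-2.0, b1=1.0):
    """Logistic read-out of P(past-30-day use); b0, b1 are placeholder coefficients."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * latent)))
```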
Mononen, Mika E.; Tanska, Petri; Isaksson, Hanna; Korhonen, Rami K.
2016-01-01
We present a novel algorithm combined with computational modeling to simulate the development of knee osteoarthritis. The degeneration algorithm was based on excessive and cumulatively accumulated stresses within knee joint cartilage during physiological gait loading. In the algorithm, the collagen network stiffness of cartilage was reduced iteratively if excessive maximum principal stresses were observed. The developed algorithm was tested and validated against experimental baseline and 4-year follow-up Kellgren-Lawrence grades, indicating different levels of cartilage degeneration at the tibiofemoral contact region. Test groups consisted of normal weight and obese subjects with the same gender and similar age and height without osteoarthritic changes. The algorithm accurately simulated cartilage degeneration as compared to the Kellgren-Lawrence findings in the subject group with excess weight, while the healthy subject group’s joint remained intact. Furthermore, the developed algorithm followed the experimentally found trend of cartilage degeneration in the obese group (R2 = 0.95, p < 0.05; experiments vs. model), in which the rapid degeneration immediately after initiation of osteoarthritis (0–2 years, p < 0.001) was followed by a slow or negligible degeneration (2–4 years, p > 0.05). The proposed algorithm revealed a great potential to objectively simulate the progression of knee osteoarthritis. PMID:26906749
Dessimoz, Christophe; Boeckmann, Brigitte; Roth, Alexander C J; Gonnet, Gaston H
2006-01-01
Correct orthology assignment is a critical prerequisite of numerous comparative genomics procedures, such as function prediction, construction of phylogenetic species trees and genome rearrangement analysis. We present an algorithm for the detection of non-orthologs that arise by mistake in current orthology classification methods based on genome-specific best hits, such as the COGs database. The algorithm works with pairwise distance estimates, rather than computationally expensive and error-prone tree-building methods. The accuracy of the algorithm is evaluated through verification of the distribution of predicted cases, case-by-case phylogenetic analysis and comparisons with predictions from other projects using independent methods. Our results show that a very significant fraction of the COG groups include non-orthologs: using conservative parameters, the algorithm detects non-orthology in a third of all COG groups. Consequently, sequence analysis sensitive to correct orthology assignments will greatly benefit from these findings.
Cheng, Jun; Zhao, Fei; Xia, Yinyin; Zhang, Hui; Wilkinson, Ewan; Das, Mrinalini; Li, Jie; Chen, Wei; Hu, Dongmei; Jeyashree, Kathiresan; Wang, Lixia
2017-01-01
Objective To calculate the yield and cost per diagnosed tuberculosis (TB) case for three World Health Organization screening algorithms and one using the Chinese National TB program (NTP) TB suspect definitions, using data from a TB prevalence survey of people aged 65 years and over in China, 2013. Methods This was an analytic study using data from the above survey. Risk groups were defined and the prevalence of new TB cases in each group calculated. Costs of each screening component were used to give indicative costs per case detected. Yield, number needed to screen (NNS) and cost per case were used to assess the algorithms. Findings The prevalence survey identified 172 new TB cases in 34,250 participants. Prevalence varied greatly in different groups, from 131/100,000 to 4651/100,000. Two groups were chosen to compare the algorithms. The medium-risk group (living in a rural area: men, or previous TB cases, or close contacts, or a BMI <18.5, or tobacco users) had appreciably higher costs per case (USD 221, 298 and 963) in the three algorithms than the high-risk group (all previous TB cases, all close contacts) (USD 72, 108 and 309), but detected two to four times more TB cases in the population. Using a chest X-ray as the initial screening tool in the medium-risk group cost the most (USD 963) and detected 67% of all the new cases. Using the NTP definition of TB suspects made little difference. Conclusions To “End TB”, many more TB cases have to be identified. Screening only the highest risk groups identified under 14% of the undetected cases. To “End TB”, medium-risk groups will need to be screened. Using a CXR for initial screening results in a much higher yield, at what should be an acceptable cost. PMID:28594824
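The yield and cost metrics follow directly from group prevalence; a minimal sketch with illustrative numbers (not the survey's data):

```python
def screening_metrics(n_screened, n_cases, cost_per_person):
    """Prevalence, number needed to screen, and cost per diagnosed case."""
    prevalence = n_cases / n_screened
    nns = n_screened / n_cases            # people screened per case found
    cost_per_case = cost_per_person * nns
    return prevalence, nns, cost_per_case

# Example with made-up inputs: 46 cases found among 10,000 people screened.
print(screening_metrics(n_screened=10000, n_cases=46, cost_per_person=1.5))
```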
A grouping method based on grid density and relationship for crowd evacuation simulation
NASA Astrophysics Data System (ADS)
Li, Yan; Liu, Hong; Liu, Guang-peng; Li, Liang; Moore, Philip; Hu, Bin
2017-05-01
Psychological factors affect the movement of people in the competitive or panic mode of evacuation, in which pedestrian density is relatively high and the distance between pedestrians is small. In this paper, a crowd is divided into groups according to social relations, and a group attraction force is added to the social force model so that the actual movement of an evacuating crowd is simulated more realistically. The group attraction force is the synthesis of two forces: the attraction among individuals generated by their social relations, which makes them gather, and the attraction of the group leader on the individuals within the group, which ensures that they follow the leader. The synthetic force determines the trajectory of each individual. The evacuation process is demonstrated using the improved social force model, in which individuals with close social relations gradually move in a closer and more coordinated way while following the leader. A grouping algorithm based on grid density and relationship is then proposed, and computer simulations illustrate the features of the improved social force model. The parameters involved in the algorithm are defined, the effect of the relational value on the grouping is tested, and reasonable numbers of grids and weights are selected. The effectiveness of the algorithm is shown through simulation experiments, and a simulation platform combining the proposed grouping algorithm with the improved social force model is established for crowd evacuation simulation.
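As a rough sketch of how the two-part group attraction described above can be synthesised, the following Python fragment computes, for every group member, a social-relation attraction toward related members plus an attraction toward the leader. The function name, the relation matrix, and the gain constants are hypothetical, not taken from the paper:

```python
import numpy as np

def group_attraction_force(pos, leader_idx, relation, k_rel=0.5, k_lead=1.0):
    """Sketch of the two-part group attraction force.

    pos        : (n, 2) float array of member positions
    leader_idx : index of the group leader
    relation   : (n, n) matrix of social-relation strengths (hypothetical)
    Returns an (n, 2) array of synthetic attraction forces.
    """
    n = pos.shape[0]
    forces = np.zeros_like(pos)
    for i in range(n):
        # 1) attraction toward socially related members (gathering term)
        for j in range(n):
            if i == j:
                continue
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d) + 1e-9
            forces[i] += k_rel * relation[i, j] * d / dist
        # 2) attraction toward the group leader (following term)
        if i != leader_idx:
            d = pos[leader_idx] - pos[i]
            forces[i] += k_lead * d / (np.linalg.norm(d) + 1e-9)
    return forces
```

The synthetic force returned here would be added to the usual repulsive and driving terms of the social force model before integrating each individual's trajectory.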
Alici, Ibrahim Onur; Yılmaz Demirci, Nilgün; Yılmaz, Aydın; Karakaya, Jale; Özaydın, Esra
2016-09-01
There are several papers on the sonographic features of mediastinal lymph nodes affected by various diseases, but none establishes the importance and clinical utility of those features. To determine which lymph node should be sampled in a particular nodal station during endobronchial ultrasound, we investigated the diagnostic performance of certain sonographic features and propose an algorithmic approach. We retrospectively analyzed 1051 lymph nodes and randomly assigned them to a preliminary experimental group and a secondary study group. The diagnostic performance of the sonographic features (gray scale, echogenicity, shape, size, margin, presence of necrosis, presence of calcification and absence of a central hilar structure) was calculated, and an algorithm for lymph node sampling was obtained by decision tree analysis in the experimental group. A modified algorithm was then applied to the patients in the study group to estimate its accuracy. The demographic characteristics of the patients did not differ significantly between the two groups. All of the features discriminated between malignant and benign diseases. The sensitivity, specificity, positive and negative predictive values and diagnostic accuracy of the modified algorithm for detecting metastatic lymph nodes were 100%, 51.2%, 50.6%, 100% and 67.5%, respectively. In this retrospective analysis, the standardized sonographic classification system and the proposed algorithm performed well in choosing the node to be sampled in a particular station during endobronchial ultrasound. © 2015 John Wiley & Sons Ltd.
Internet Protocol Security (IPSEC): Testing and Implications on IPv4 and IPv6 Networks
2008-08-27
Message Authentication Code-Message Digest 5-96). Due to the processing power consumption and slowness of public key authentication methods, RSA ... MODP) group with a 768-bit modulus; 2. a MODP group with a 1024-bit modulus; 3. an Elliptic Curve Group over GF[2^n] (EC2N) group with a 155-bit ... nonces, digital signatures using the Digital Signature Algorithm, and the Rivest-Shamir-Adleman (RSA) algorithm. For more information about the
Capacity-optimized mp2 audio watermarking
NASA Astrophysics Data System (ADS)
Steinebach, Martin; Dittmann, Jana
2003-06-01
Today a number of audio watermarking algorithms have been proposed, some of them of a quality suitable for commercial applications. The focus of most of these algorithms is copyright protection, so transparency and robustness are the most discussed and optimised parameters. But other applications of audio watermarking can also be identified that stress other parameters, such as complexity or payload. In this paper, we introduce a new mp2 audio watermarking algorithm optimised for high payload. Our algorithm uses the scale factors of an mp2 file for watermark embedding. They are grouped and masked based on a pseudo-random pattern generated from a secret key. In each group, we embed one bit: depending on the bit to embed, we change scale factors by adding 1 where necessary until the group contains a majority of either even or odd scale factors. A group with an odd majority carries a 1; a group with an even majority carries a 0. The same rule is later applied to detect the watermark. The group size can be increased or decreased for a transparency/payload trade-off. We embed 160 bits or more per second in an mp2 file without reducing perceived quality. As an application example, we introduce a prototypic Karaoke system displaying song lyrics embedded as a watermark.
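A minimal Python sketch of the parity rule just described, assuming integer scale factors and a key-seeded grouping; `embed_bit`, `detect_bit`, and the group size are illustrative names, not the authors' code:

```python
import random

def embed_bit(group, bit):
    """Add 1 to wrong-parity scale factors until the group has an odd-count
    majority (bit 1) or an even-count majority (bit 0). Real mp2 scale
    factors also have range limits, ignored in this sketch."""
    g = list(group)
    want_odd = (bit == 1)

    def majority_ok(g):
        odd = sum(v % 2 for v in g)
        return odd > len(g) - odd if want_odd else odd < len(g) - odd

    i = 0
    while not majority_ok(g):
        if (g[i] % 2 == 1) != want_odd:  # wrong parity: +1 flips it
            g[i] += 1
        i += 1
    return g

def detect_bit(group):
    odd = sum(v % 2 for v in group)
    return 1 if odd > len(group) - odd else 0

# toy usage with a key-seeded pseudo-random grouping
random.seed(42)  # stands in for the secret key
factors = [random.randint(0, 62) for _ in range(8)]
marked = embed_bit(factors, 1)
assert detect_bit(marked) == 1
```

Because embedding only ever adds 1 to a scale factor, the perceptual change per group stays small, which is what allows the payload/transparency trade-off via the group size.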
Adaptive structured dictionary learning for image fusion based on group-sparse-representation
NASA Astrophysics Data System (ADS)
Yang, Jiajie; Sun, Bin; Luo, Chengwei; Wu, Yuzhong; Xu, Limei
2018-04-01
Dictionary learning is the key step of sparse representation, one of the most widely used image representation theories in image fusion. Existing dictionary learning methods do not make good use of group structure information and the sparse coefficients. In this paper, we propose a new adaptive structured dictionary learning algorithm and an l1-norm maximum fusion rule that uses grouped sparse coefficients to merge the images. The dictionary learning algorithm needs no prior knowledge about any group structure of the dictionary: by exploiting how the dictionary expresses the signal, it automatically finds the latent structure information hidden in the dictionary. The fusion rule exploits the physical meaning of the group-structured dictionary and judges activity levels on the structure information when the images are merged, so the fused image retains more significant information. Comparisons have been made with several state-of-the-art dictionary learning methods and fusion rules. The experimental results demonstrate that both the dictionary learning algorithm and the fusion rule outperform the alternatives in terms of several objective evaluation metrics.
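A plausible reading of an l1-norm maximum rule over grouped coefficients is sketched below; the group structure is assumed given here, whereas the paper learns it adaptively, and all names are illustrative:

```python
import numpy as np

def l1_max_fusion(coef_a, coef_b, groups):
    """Sketch of an l1-norm maximum fusion rule over grouped sparse
    coefficients. coef_a/coef_b: (n_atoms,) coefficient vectors of the two
    source images; groups: list of index arrays, one per group. For each
    group, the source whose group of coefficients has the larger l1 norm
    (higher activity level) wins."""
    fused = np.zeros_like(coef_a)
    for idx in groups:
        if np.abs(coef_a[idx]).sum() >= np.abs(coef_b[idx]).sum():
            fused[idx] = coef_a[idx]
        else:
            fused[idx] = coef_b[idx]
    return fused

# toy usage with two groups of two atoms each
a = np.array([0.9, 0.1, 0.0, 0.4])
b = np.array([0.2, 0.0, 0.7, 0.6])
print(l1_max_fusion(a, b, [np.array([0, 1]), np.array([2, 3])]))
```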
Bhattacharya, Anindya; De, Rajat K
2010-08-01
Distance-based clustering algorithms can group genes that show similar expression values under multiple experimental conditions, but they are unable to identify groups of genes that have a similar pattern of variation in their expression values. Previously we developed an algorithm called the divisive correlation clustering algorithm (DCCA), based on the concept of correlation clustering, to tackle this situation; however, this algorithm may also fail in certain cases. To overcome these situations, we propose a new clustering algorithm, called the average correlation clustering algorithm (ACCA), which produces better clustering solutions than several existing methods. ACCA is able to find groups of genes having more common transcription factors and a similar pattern of variation in their expression values. Moreover, ACCA is more efficient than DCCA with respect to execution time. Like DCCA, ACCA uses the concept of correlation clustering introduced by Bansal et al. ACCA uses the correlation matrix in such a way that all genes in a cluster have the highest average correlation values with the genes in that cluster. We have applied ACCA and some well-known conventional methods, including DCCA, to two artificial and nine gene expression datasets, and compared the performance of the algorithms. The clustering results of ACCA are found to be more significantly relevant to the biological annotations than those of the other methods. Analysis of the results shows the superiority of ACCA over the others in determining groups of genes having more common transcription factors and similar patterns of variation in their expression profiles. Availability of the software: The software has been developed using C and Visual Basic languages, and can be executed on Microsoft Windows platforms. The software may be downloaded as a zip file from http://www.isical.ac.in/~rajat. Then it needs to be installed. Two word files (included in the zip file) need to be consulted before installation and execution of the software. Copyright 2010 Elsevier Inc. All rights reserved.
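The core reassignment idea, moving each gene to the cluster with which it has the highest average correlation, can be sketched as follows; initialisation and convergence details of the published ACCA may differ, and the function name is illustrative:

```python
import numpy as np

def acca_like(corr, k, n_iter=50, seed=0):
    """Sketch of the core ACCA idea: repeatedly move each gene to the
    cluster with which it has the highest average correlation. For
    brevity the self-correlation (corr[g, g] = 1) is included in the
    own-cluster average, which the published method may handle differently."""
    rng = np.random.default_rng(seed)
    n = corr.shape[0]
    labels = rng.integers(k, size=n)
    for _ in range(n_iter):
        changed = False
        for g in range(n):
            avg = [corr[g, labels == c].mean() if np.any(labels == c) else -np.inf
                   for c in range(k)]
            best = int(np.argmax(avg))
            if best != labels[g]:
                labels[g] = best
                changed = True
        if not changed:  # stop when no gene moves
            break
    return labels
```

Because the criterion is average correlation rather than distance, two genes with proportional but offset expression profiles can land in the same cluster, which is exactly the "similar pattern of variation" behaviour described above.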
An early-biomarker algorithm predicts lethal graft-versus-host disease and survival
Hartwell, Matthew J.; Özbek, Umut; Holler, Ernst; Major-Monfried, Hannah; Reddy, Pavan; Aziz, Mina; Hogan, William J.; Ayuk, Francis; Efebera, Yvonne A.; Hexner, Elizabeth O.; Bunworasate, Udomsak; Qayed, Muna; Ordemann, Rainer; Wölfl, Matthias; Mielke, Stephan; Chen, Yi-Bin; Devine, Steven; Jagasia, Madan; Kitko, Carrie L.; Litzow, Mark R.; Kröger, Nicolaus; Locatelli, Franco; Morales, George; Nakamura, Ryotaro; Reshef, Ran; Rösler, Wolf; Weber, Daniela; Yanik, Gregory A.; Levine, John E.; Ferrara, James L.M.
2017-01-01
BACKGROUND. No laboratory test can predict the risk of nonrelapse mortality (NRM) or severe graft-versus-host disease (GVHD) after hematopoietic cellular transplantation (HCT) prior to the onset of GVHD symptoms. METHODS. Patient blood samples on day 7 after HCT were obtained from a multicenter set of 1,287 patients, and 620 samples were assigned to a training set. We measured the concentrations of 4 GVHD biomarkers (ST2, REG3α, TNFR1, and IL-2Rα) and used them to model 6-month NRM using rigorous cross-validation strategies to identify the best algorithm that defined 2 distinct risk groups. We then applied the final algorithm in an independent test set (n = 309) and validation set (n = 358). RESULTS. A 2-biomarker model using ST2 and REG3α concentrations identified patients with a cumulative incidence of 6-month NRM of 28% in the high-risk group and 7% in the low-risk group (P < 0.001). The algorithm performed equally well in the test set (33% vs. 7%, P < 0.001) and the multicenter validation set (26% vs. 10%, P < 0.001). Sixteen percent, 17%, and 20% of patients were at high risk in the training, test, and validation sets, respectively. GVHD-related mortality was greater in high-risk patients (18% vs. 4%, P < 0.001), as was severe gastrointestinal GVHD (17% vs. 8%, P < 0.001). The same algorithm can be successfully adapted to define 3 distinct risk groups at GVHD onset. CONCLUSION. A biomarker algorithm based on a blood sample taken 7 days after HCT can consistently identify a group of patients at high risk for lethal GVHD and NRM. FUNDING. The National Cancer Institute, American Cancer Society, and the Doris Duke Charitable Foundation. PMID:28194439
Pairwise gene GO-based measures for biclustering of high-dimensional expression data.
Nepomuceno, Juan A; Troncoso, Alicia; Nepomuceno-Chamorro, Isabel A; Aguilar-Ruiz, Jesús S
2018-01-01
Biclustering algorithms search for groups of genes that share the same behavior under a subset of samples in gene expression data. Nowadays, the biological knowledge available in public repositories can be used to drive these algorithms to find biclusters composed of groups of functionally coherent genes. On the other hand, a distance between genes can be defined according to the information about them stored in the Gene Ontology (GO). Gene pairwise GO semantic similarity measures report, for each pair of genes, a value that establishes their functional similarity. A scatter search-based algorithm that optimizes a merit function integrating GO information is studied in this paper. This merit function uses a term that incorporates the information through a GO measure. The effect of two different gene pairwise GO measures on the performance of the algorithm is analyzed. First, three well-known yeast datasets with approximately one thousand genes are studied. Second, a group of human datasets related to clinical data of cancer is also explored by the algorithm. Most of these are high-dimensional datasets composed of a huge number of genes. The resulting biclusters reveal groups of genes linked by the same functionality when the search procedure is driven by one of the proposed GO measures. Furthermore, a qualitative biological study of a group of biclusters shows their relevance from a cancer disease perspective. It can be concluded that the integration of biological information improves the performance of the biclustering process. The two GO measures studied show an improvement in the results obtained for the yeast datasets; however, if datasets are composed of a huge number of genes, only one of them really improves the algorithm performance. This second case constitutes a clear option for exploring datasets that are interesting from a clinical point of view.
Clusternomics: Integrative context-dependent clustering for heterogeneous datasets
Wernisch, Lorenz
2017-01-01
Integrative clustering is used to identify groups of samples by jointly analysing multiple datasets describing the same set of biological samples, such as gene expression, copy number, methylation etc. Most existing algorithms for integrative clustering assume that there is a shared consistent set of clusters across all datasets, and most of the data samples follow this structure. However in practice, the structure across heterogeneous datasets can be more varied, with clusters being joined in some datasets and separated in others. In this paper, we present a probabilistic clustering method to identify groups across datasets that do not share the same cluster structure. The proposed algorithm, Clusternomics, identifies groups of samples that share their global behaviour across heterogeneous datasets. The algorithm models clusters on the level of individual datasets, while also extracting global structure that arises from the local cluster assignments. Clusters on both the local and the global level are modelled using a hierarchical Dirichlet mixture model to identify structure on both levels. We evaluated the model both on simulated and on real-world datasets. The simulated data exemplifies datasets with varying degrees of common structure. In such a setting Clusternomics outperforms existing algorithms for integrative and consensus clustering. In a real-world application, we used the algorithm for cancer subtyping, identifying subtypes of cancer from heterogeneous datasets. We applied the algorithm to TCGA breast cancer dataset, integrating gene expression, miRNA expression, DNA methylation and proteomics. The algorithm extracted clinically meaningful clusters with significantly different survival probabilities. We also evaluated the algorithm on lung and kidney cancer TCGA datasets with high dimensionality, again showing clinically significant results and scalability of the algorithm. PMID:29036190
Clusternomics: Integrative context-dependent clustering for heterogeneous datasets.
Gabasova, Evelina; Reid, John; Wernisch, Lorenz
2017-10-01
Integrative clustering is used to identify groups of samples by jointly analysing multiple datasets describing the same set of biological samples, such as gene expression, copy number, methylation etc. Most existing algorithms for integrative clustering assume that there is a shared consistent set of clusters across all datasets, and most of the data samples follow this structure. However in practice, the structure across heterogeneous datasets can be more varied, with clusters being joined in some datasets and separated in others. In this paper, we present a probabilistic clustering method to identify groups across datasets that do not share the same cluster structure. The proposed algorithm, Clusternomics, identifies groups of samples that share their global behaviour across heterogeneous datasets. The algorithm models clusters on the level of individual datasets, while also extracting global structure that arises from the local cluster assignments. Clusters on both the local and the global level are modelled using a hierarchical Dirichlet mixture model to identify structure on both levels. We evaluated the model both on simulated and on real-world datasets. The simulated data exemplifies datasets with varying degrees of common structure. In such a setting Clusternomics outperforms existing algorithms for integrative and consensus clustering. In a real-world application, we used the algorithm for cancer subtyping, identifying subtypes of cancer from heterogeneous datasets. We applied the algorithm to TCGA breast cancer dataset, integrating gene expression, miRNA expression, DNA methylation and proteomics. The algorithm extracted clinically meaningful clusters with significantly different survival probabilities. We also evaluated the algorithm on lung and kidney cancer TCGA datasets with high dimensionality, again showing clinically significant results and scalability of the algorithm.
NASA Astrophysics Data System (ADS)
Young, Frederic; Siegel, Edward
Cook-Levin theorem algorithmic computational-complexity (C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points is exploited via Siegel FUZZYICS = CATEGORYICS = ANALOGYICS = PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle "square-of-opposition" tabular list-format truth-table matrix analytics, which predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser [Intro. Thy. Computation ('97)] algorithmic C-C: "NIT-picking"(!!!), to optimize optimization-problems optimally (OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science"/SEANCE algorithmic C-C models (Turing-machine, finite-state models, finite automata, ..., discrete-maths graph-theory equivalence to physics Feynman-diagrams) are identified as early-days once-workable valid but limiting IMPEDING CRUTCHES(!!!) that ONLY IMPEDE latter-days new-insights!!!
Two-level structural sparsity regularization for identifying lattices and defects in noisy images
Li, Xin; Belianinov, Alex; Dyck, Ondrej E.; ...
2018-03-09
Here, this paper presents a regularized regression model with a two-level structural sparsity penalty applied to locate individual atoms in a noisy scanning transmission electron microscopy (STEM) image. In crystals, the locations of atoms are symmetric, condensed into a few lattice groups; therefore, by identifying the underlying lattice in a given image, individual atoms can be accurately located. We propose to formulate the identification of the lattice groups as a sparse group selection problem. Furthermore, real atomic-scale images contain defects and vacancies, so atomic identification based solely on a lattice group may result in false positives and false negatives. To minimize error, the model includes an individual sparsity regularization in addition to the group sparsity for within-group selection, which results in a regression model with a two-level sparsity regularization. We propose a modification of the group orthogonal matching pursuit (gOMP) algorithm with a thresholding step to solve the atom-finding problem. The convergence and statistical analyses of the proposed algorithm are presented. The proposed algorithm is also evaluated through numerical experiments with simulated images. The applicability of the algorithm to the determination of atom structures and the identification of imaging distortions and atomic defects is demonstrated using three real STEM images. In conclusion, we believe this is an important step toward automatic phase identification and assignment with the advent of genomic databases for materials.
Two-level structural sparsity regularization for identifying lattices and defects in noisy images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xin; Belianinov, Alex; Dyck, Ondrej E.
Here, this paper presents a regularized regression model with a two-level structural sparsity penalty applied to locate individual atoms in a noisy scanning transmission electron microscopy (STEM) image. In crystals, the locations of atoms are symmetric, condensed into a few lattice groups; therefore, by identifying the underlying lattice in a given image, individual atoms can be accurately located. We propose to formulate the identification of the lattice groups as a sparse group selection problem. Furthermore, real atomic-scale images contain defects and vacancies, so atomic identification based solely on a lattice group may result in false positives and false negatives. To minimize error, the model includes an individual sparsity regularization in addition to the group sparsity for within-group selection, which results in a regression model with a two-level sparsity regularization. We propose a modification of the group orthogonal matching pursuit (gOMP) algorithm with a thresholding step to solve the atom-finding problem. The convergence and statistical analyses of the proposed algorithm are presented. The proposed algorithm is also evaluated through numerical experiments with simulated images. The applicability of the algorithm to the determination of atom structures and the identification of imaging distortions and atomic defects is demonstrated using three real STEM images. In conclusion, we believe this is an important step toward automatic phase identification and assignment with the advent of genomic databases for materials.
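The flavor of group OMP with a thresholding step can be conveyed by a short sketch; this is a generic gOMP variant under assumed inputs (a dictionary `D` and known column groups), not the authors' exact algorithm:

```python
import numpy as np

def gomp_threshold(D, y, groups, n_groups=3, tau=0.1):
    """Generic group OMP with a thresholding step (sketch).
    D: (m, n) dictionary; y: (m,) signal; groups: list of column-index arrays."""
    resid = y.copy()
    selected = []
    for _ in range(n_groups):
        # group selection: pick the group whose atoms best correlate with
        # the residual (already-selected groups score near zero)
        scores = [np.linalg.norm(D[:, g].T @ resid) for g in groups]
        selected.append(int(np.argmax(scores)))
        idx = np.concatenate([groups[s] for s in set(selected)])
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        resid = y - D[:, idx] @ coef
    # within-group selection: zero out small coefficients, then refit
    keep = idx[np.abs(coef) > tau]
    coef, *_ = np.linalg.lstsq(D[:, keep], y, rcond=None)
    x = np.zeros(D.shape[1])
    x[keep] = coef
    return x

# toy usage: recover a signal supported on one group of three atoms
rng = np.random.default_rng(0)
D = rng.normal(size=(20, 12))
D /= np.linalg.norm(D, axis=0)
groups = [np.arange(i, i + 3) for i in range(0, 12, 3)]
x_true = np.zeros(12)
x_true[3:6] = [1.0, -0.5, 0.8]
print(gomp_threshold(D, D @ x_true, groups, n_groups=2))
```

The thresholding step is what implements the within-group selection described above: group selection captures the lattice, and thresholding prunes atoms that a defect or vacancy has removed.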
A New Aloha Anti-Collision Algorithm Based on CDMA
NASA Astrophysics Data System (ADS)
Bai, Enjian; Feng, Zhu
Tag collision is a common problem in RFID (radio frequency identification) systems, and it compromises the integrity of data transmission during communication in an RFID system. Based on an analysis of existing anti-collision algorithms, a novel anti-collision algorithm is presented. The new algorithm combines the group dynamic frame slotted Aloha algorithm with code division multiple access (CDMA) technology. The algorithm can effectively reduce the collision probability between tags: for the same number of tags, it reduces the reader recognition time and improves the overall system throughput rate.
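A toy simulation can illustrate why adding spreading codes to frame-slotted Aloha helps; the model below assumes idealised orthogonal codes and that co-slot tags happen to pick distinct codes up to `n_codes`, which is a simplification of the paper's scheme:

```python
import random

def identified_per_frame(n_tags, n_slots, n_codes, seed=0):
    """Toy model: tags pick random slots; without CDMA a slot succeeds only
    when exactly one tag occupies it, while with CDMA up to n_codes co-slot
    tags are assumed separable by orthogonal spreading codes."""
    rng = random.Random(seed)
    slots = [0] * n_slots
    for _ in range(n_tags):
        slots[rng.randrange(n_slots)] += 1
    plain = sum(1 for c in slots if c == 1)          # pure slotted Aloha
    cdma = sum(min(c, n_codes) for c in slots)        # Aloha + CDMA (idealised)
    return plain, cdma

print(identified_per_frame(n_tags=64, n_slots=16, n_codes=4))
```

Even under these optimistic assumptions the comparison shows the mechanism: slots that would be pure collisions in plain Aloha still yield several identified tags once codes separate them.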
Mining the National Career Assessment Examination Result Using Clustering Algorithm
NASA Astrophysics Data System (ADS)
Pagudpud, M. V.; Palaoag, T. T.; Padirayon, L. M.
2018-03-01
Education is an essential process today which elicits authorities to discover and establish innovative strategies for educational improvement. This study applied data mining using clustering technique for knowledge extraction from the National Career Assessment Examination (NCAE) result in the Division of Quirino. The NCAE is an examination given to all grade 9 students in the Philippines to assess their aptitudes in the different domains. Clustering the students is helpful in identifying students’ learning considerations. With the use of the RapidMiner tool, clustering algorithms such as Density-Based Spatial Clustering of Applications with Noise (DBSCAN), k-means, k-medoid, expectation maximization clustering, and support vector clustering algorithms were analyzed. The silhouette indexes of the said clustering algorithms were compared, and the result showed that the k-means algorithm with k = 3 and silhouette index equal to 0.196 is the most appropriate clustering algorithm to group the students. Three groups were formed having 477 students in the determined group (cluster 0), 310 proficient students (cluster 1) and 396 developing students (cluster 2). The data mining technique used in this study is essential in extracting useful information from the NCAE result to better understand the abilities of students which in turn is a good basis for adopting teaching strategies.
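The k-selection step described above, choosing the clustering whose mean silhouette is best, can be reproduced in a few lines; the study used RapidMiner, so the scikit-learn sketch below with synthetic stand-in scores is purely illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# toy stand-in for the NCAE aptitude-domain scores (hypothetical data)
rng = np.random.default_rng(0)
scores = np.vstack([rng.normal(m, 0.6, size=(100, 4)) for m in (1.0, 3.0, 5.0)])

# compare candidate k by mean silhouette, as in the study
for k in (2, 3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(scores)
    print(k, round(silhouette_score(scores, labels), 3))
```

The k with the highest mean silhouette is retained; in the study that was k = 3, giving the determined, proficient, and developing groups.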
Effect of Algorithms' Multiple Representations in the Context of Programming Education
ERIC Educational Resources Information Center
Siozou, Stefania; Tselios, Nikolaos; Komis, Vassilis
2008-01-01
Purpose: The purpose of this paper is to compare the effect of different representations while teaching basic algorithmic concepts to novice programmers. Design/methodology/approach: A learning activity was designed and mediated with two conceptually different learning environments, each one used by a different group. The first group used the…
Tang, Wenming; Liu, Guixiong; Li, Yuzhong; Tan, Daji
2017-01-01
High data transmission efficiency is a key requirement for an ultrasonic phased array with multiple groups of ultrasonic sensors. Here, a novel FIFO scheduling algorithm is proposed that improves data transmission efficiency in hardware. The algorithm uses FIFOs as caches for the ultrasonic scanning data obtained from the sensors and outputs the data in a bandwidth-sharing way. On this basis, an optimal length ratio of all the FIFOs is derived, allowing read operations to be switched among the FIFOs without waiting for time slots. The algorithm thus enhances the utilization of the read bandwidth and achieves higher efficiency than traditional scheduling algorithms. The reliability and validity of the algorithm are substantiated by an implementation in field programmable gate array (FPGA) technology, which enhances both the bandwidth utilization and the real-time performance of the ultrasonic phased array. PMID:29035345
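A software toy model of the idea, several FIFOs sized in proportion to their input rates and drained in bursts by one shared read bus, is sketched below; the rates, depths, and burst size are made-up parameters, not values from the paper:

```python
def simulate(rates, depths, n_cycles=1000, burst=4, bus_words=8):
    """Toy model of FIFOs sharing one read bus. Each cycle, FIFO i receives
    rates[i] words; the reader round-robins, draining a burst from any FIFO
    that has one ready, up to bus_words per cycle. Depths proportional to
    the input rates (the optimal length ratio idea) avoid overflow here."""
    fifos = [0] * len(depths)          # track occupancy only
    overflows, reads, turn = 0, 0, 0
    for _ in range(n_cycles):
        # producers: sensors push scanning data into their FIFOs
        for i, r in enumerate(rates):
            space = depths[i] - fifos[i]
            fifos[i] += min(r, space)
            overflows += max(0, r - space)
        # consumer: round-robin reader, burst reads, shared bus budget
        cap = bus_words
        for _ in range(len(fifos)):
            i = turn % len(fifos)
            turn += 1
            if fifos[i] >= burst and cap >= burst:
                fifos[i] -= burst
                reads += burst
                cap -= burst
    return reads, overflows

print(simulate(rates=[1, 2, 3], depths=[8, 16, 24]))
```

With depths matched to rates, every read switch finds a full burst waiting, which is the "no time-slot waiting" property the paper attributes to its optimal length ratio.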
NASA Astrophysics Data System (ADS)
Siegel, J.; Siegel, Edward Carl-Ludwig
2011-03-01
Cook-Levin computational-"complexity" (C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS = CATEGORYICS (SON of "TRIZ"): Category-Semantics (C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser [Intro. Theory Computation (1997)] algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally (OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models (Turing-machine, finite-state models/automata) are identified as early-days once-workable but NOW ONLY LIMITING CRUTCHES IMPEDING latter-days new-insights!!!
On the numeric integration of dynamic attitude equations
NASA Technical Reports Server (NTRS)
Crouch, P. E.; Yan, Y.; Grossman, Robert
1992-01-01
We describe new types of numerical integration algorithms developed by the authors. The main aim of the algorithms is to numerically integrate differential equations which evolve on geometric objects, such as the rotation group. The algorithms provide iterates which lie on the prescribed geometric object, either exactly, or to some prescribed accuracy, independent of the order of the algorithm. This paper describes applications of these algorithms to the evolution of the attitude of a rigid body.
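For instance, one standard way to keep iterates exactly on the rotation group is to update the attitude matrix by the exponential of a skew-symmetric matrix. The sketch below is a minimal such integrator, not the specific schemes developed by the authors:

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    """Map an angular-velocity vector to its 3x3 skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def step(R, omega, dt):
    """One attitude step R_{k+1} = R_k expm(skew(omega) dt). Because the
    update is the exponential of a skew-symmetric matrix, the iterate stays
    on the rotation group exactly, independent of the step size."""
    return R @ expm(skew(omega) * dt)

# toy usage: integrate a constant body rate and check orthogonality
R = np.eye(3)
for _ in range(100):
    R = step(R, omega=np.array([0.1, 0.2, -0.05]), dt=0.01)
print(np.allclose(R.T @ R, np.eye(3)))  # True: R stays in SO(3)
```

Higher-order versions of this idea replace the single exponential by compositions of exponentials, but the group-preservation property works the same way.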
NASA Technical Reports Server (NTRS)
Mach, Douglas M.; Christian, Hugh J.; Blakeslee, Richard; Boccippio, Dennis J.; Goodman, Steve J.; Boeck, William
2006-01-01
We describe the clustering algorithm used by the Lightning Imaging Sensor (LIS) and the Optical Transient Detector (OTD) for combining lightning pulse data into events, groups, flashes, and areas. Events are single pixels that exceed the LIS/OTD background level during a single frame (2 ms). Groups are clusters of events that occur within the same frame and in adjacent pixels. Flashes are clusters of groups that occur within 330 ms and either 5.5 km (for LIS) or 16.5 km (for OTD) of each other. Areas are clusters of flashes that occur within 16.5 km of each other. Many investigators are utilizing the LIS/OTD flash data; therefore, we test how variations in the event-group and group-flash clustering algorithms affect the flash count for a subset of the LIS data. We divided the subset into areas with low (1-3), medium (4-15), high (16-63), and very high (64+) flash counts to see how changes in the clustering parameters affect the flash rates in these different sizes of areas. We found that as long as the clustering parameters are within about a factor of two of the current values, the flash counts do not change by more than about 20%. Therefore, the flash clustering algorithm used by the LIS and OTD sensors produces flash rates that are relatively insensitive to reasonable variations in the clustering algorithms.
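The group-to-flash step lends itself to a compact sketch: a group joins a flash when it falls within the time and distance thresholds of any group already in that flash. The fragment below uses the published LIS thresholds but simplifies everything else (planar coordinates, no weighted centroids), so it is illustrative only:

```python
def cluster_flashes(groups, dt_max=0.330, dx_max=5.5):
    """Cluster groups into flashes: a group joins an existing flash if it
    occurs within dt_max seconds and dx_max km of any group already in that
    flash. Each group is a (t_seconds, x_km, y_km) tuple."""
    flashes = []
    for g in sorted(groups):                 # process in time order
        t, x, y = g
        for fl in flashes:
            if any(t - t2 <= dt_max and ((x - x2)**2 + (y - y2)**2) ** 0.5 <= dx_max
                   for t2, x2, y2 in fl):
                fl.append(g)
                break
        else:
            flashes.append([g])              # start a new flash
    return flashes

# toy usage: three nearby groups chain into one flash; one distant group
groups = [(0.0, 0, 0), (0.1, 1, 0), (0.2, 2, 1), (5.0, 50, 50)]
print(len(cluster_flashes(groups)))          # -> 2 flashes
```

Scaling `dt_max` and `dx_max` by a factor of two in this kind of chaining scheme changes which borderline groups merge, which is exactly the sensitivity the study quantifies.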
A review on the multivariate statistical methods for dimensional reduction studies
NASA Astrophysics Data System (ADS)
Aik, Lim Eng; Kiang, Lam Chee; Mohamed, Zulkifley Bin; Hong, Tan Wei
2017-05-01
In this study we review multivariate statistical methods for dimensionality reduction that have been developed by various researchers. Dimensionality reduction is valuable for accelerating algorithm training and may also help the final grouping/clustering accuracy. Noisy or even flawed input data often leads to less than desirable algorithm performance, and removing uninformative or misleading data components may indeed help an algorithm discover more general grouping regions and rules, and overall achieve better performance on new data sets.
A general probabilistic model for group independent component analysis and its estimation methods
Guo, Ying
2012-01-01
SUMMARY Independent component analysis (ICA) has become an important tool for analyzing data from functional magnetic resonance imaging (fMRI) studies. ICA has been successfully applied to single-subject fMRI data. The extension of ICA to group inferences in neuroimaging studies, however, is challenging due to the unavailability of a pre-specified group design matrix and the uncertainty in between-subjects variability in fMRI data. We present a general probabilistic ICA (PICA) model that can accommodate varying group structures of multi-subject spatio-temporal processes. An advantage of the proposed model is that it can flexibly model various types of group structures in different underlying neural source signals and under different experimental conditions in fMRI studies. A maximum likelihood method is used for estimating this general group ICA model. We propose two EM algorithms to obtain the ML estimates. The first method is an exact EM algorithm, which provides an exact E-step and an explicit noniterative M-step. The second method is a variational approximation EM algorithm, which is computationally more efficient than the exact EM. In simulation studies, we first compare the performance of the proposed general group PICA model and the existing probabilistic group ICA approach. We then compare the two proposed EM algorithms and show that the variational approximation EM achieves comparable accuracy to the exact EM with significantly less computation time. An fMRI data example is used to illustrate application of the proposed methods. PMID:21517789
Accounting for False Positive HIV Tests: Is Visceral Leishmaniasis Responsible?
Shanks, Leslie; Ritmeijer, Koert; Piriou, Erwan; Siddiqui, M. Ruby; Kliescikova, Jarmila; Pearce, Neil; Ariti, Cono; Muluneh, Libsework; Masiga, Johnson; Abebe, Almaz
2015-01-01
Background Co-infection with HIV and visceral leishmaniasis is an important consideration in treatment of either disease in endemic areas. Diagnosis of HIV in resource-limited settings relies on rapid diagnostic tests used together in an algorithm. A limitation of the HIV diagnostic algorithm is that it is vulnerable to falsely positive reactions due to cross reactivity. It has been postulated that visceral leishmaniasis (VL) infection can increase this risk of false positive HIV results. This cross sectional study compared the risk of false positive HIV results in VL patients with non-VL individuals. Methodology/Principal Findings Participants were recruited from 2 sites in Ethiopia. The Ethiopian algorithm of a tiebreaker using 3 rapid diagnostic tests (RDTs) was used to test for HIV. The gold standard test was the Western Blot, with indeterminate results resolved by PCR testing. Every RDT screen positive individual was included for testing with the gold standard along with 10% of all negatives. The final analysis included 89 VL and 405 non-VL patients. HIV prevalence was found to be 12.8% (47/367) in the VL group compared to 7.9% (200/2526) in the non-VL group. The RDT algorithm in the VL group yielded 47 positives, 4 false positives, and 38 negatives. The same algorithm for those without VL had 200 positives, 14 false positives, and 191 negatives. Specificity and positive predictive value for the group with VL was less than the non-VL group; however, the difference was not found to be significant (p = 0.52 and p = 0.76, respectively). Conclusion The test algorithm yielded a high number of HIV false positive results. However, we were unable to demonstrate a significant difference between groups with and without VL disease. This suggests that the presence of endemic visceral leishmaniasis alone cannot account for the high number of false positive HIV results in our study. PMID:26161864
Accounting for False Positive HIV Tests: Is Visceral Leishmaniasis Responsible?
Shanks, Leslie; Ritmeijer, Koert; Piriou, Erwan; Siddiqui, M Ruby; Kliescikova, Jarmila; Pearce, Neil; Ariti, Cono; Muluneh, Libsework; Masiga, Johnson; Abebe, Almaz
2015-01-01
Co-infection with HIV and visceral leishmaniasis is an important consideration in treatment of either disease in endemic areas. Diagnosis of HIV in resource-limited settings relies on rapid diagnostic tests used together in an algorithm. A limitation of the HIV diagnostic algorithm is that it is vulnerable to falsely positive reactions due to cross reactivity. It has been postulated that visceral leishmaniasis (VL) infection can increase this risk of false positive HIV results. This cross sectional study compared the risk of false positive HIV results in VL patients with non-VL individuals. Participants were recruited from 2 sites in Ethiopia. The Ethiopian algorithm of a tiebreaker using 3 rapid diagnostic tests (RDTs) was used to test for HIV. The gold standard test was the Western Blot, with indeterminate results resolved by PCR testing. Every RDT screen positive individual was included for testing with the gold standard along with 10% of all negatives. The final analysis included 89 VL and 405 non-VL patients. HIV prevalence was found to be 12.8% (47/367) in the VL group compared to 7.9% (200/2526) in the non-VL group. The RDT algorithm in the VL group yielded 47 positives, 4 false positives, and 38 negatives. The same algorithm for those without VL had 200 positives, 14 false positives, and 191 negatives. Specificity and positive predictive value for the group with VL was less than the non-VL group; however, the difference was not found to be significant (p = 0.52 and p = 0.76, respectively). The test algorithm yielded a high number of HIV false positive results. However, we were unable to demonstrate a significant difference between groups with and without VL disease. This suggests that the presence of endemic visceral leishmaniasis alone cannot account for the high number of false positive HIV results in our study.
The Mucciardi-Gose Clustering Algorithm and Its Applications in Automatic Pattern Recognition.
A procedure known as the Mucciardi-Gose clustering algorithm, CLUSTR, for determining the geometrical or statistical relationships among groups of N ... discussion of clustering algorithms is given; the particular advantages of the Mucciardi-Gose procedure are described. The mathematical basis for, and the
Deployment Optimization for Embedded Flight Avionics Systems
2011-11-01
the iterations, the best solution(s) that evolved out of the group is output as the result. Although metaheuristic algorithms are powerful, they ... that other design constraints are met—ScatterD uses metaheuristic algorithms to seed the bin-packing algorithm. In particular, metaheuristic ... metaheuristic algorithms to search the design space—and then using bin-packing to allocate software tasks to processors—ScatterD can generate
Multispectral iris recognition based on group selection and game theory
NASA Astrophysics Data System (ADS)
Ahmad, Foysal; Roy, Kaushik
2017-05-01
A commercially available iris recognition system uses only a narrow band of the near infrared spectrum (700-900 nm) while iris images captured in the wide range of 405 nm to 1550 nm offer potential benefits to enhance recognition performance of an iris biometric system. The novelty of this research is that a group selection algorithm based on coalition game theory is explored to select the best patch subsets. In this algorithm, patches are divided into several groups based on their maximum contribution in different groups. Shapley values are used to evaluate the contribution of patches in different groups. Results show that this group selection based iris recognition
2006-06-01
maneuvers ... Laboratory (ARL) to develop methodologies to evaluate robotic behavior algorithms that control the actions of individual robots or groups of robots acting as a team to perform a
Rényi entropies after releasing the Néel state in the XXZ spin-chain
NASA Astrophysics Data System (ADS)
Alba, Vincenzo; Calabrese, Pasquale
2017-11-01
We study the Rényi entropies in the spin-1/2 anisotropic Heisenberg chain after a quantum quench starting from the Néel state. The quench action method allows us to obtain the stationary Rényi entropies for arbitrary values of the index α as generalised free energies evaluated over a calculable thermodynamic macrostate depending on α. We work out this macrostate for several values of α and of the anisotropy Δ by solving the thermodynamic Bethe ansatz equations. By varying α, different regions of the Hamiltonian spectrum are accessed. The two extremes are α → ∞, for which the thermodynamic macrostate is either the ground state or a low-lying excited state (depending on Δ), and α = 0, when the macrostate is the infinite-temperature state. The Rényi entropies are easily obtained from the macrostate as a function of α, and a few interesting limits are analytically characterised. We provide robust numerical evidence to confirm our results using exact diagonalisation and a stochastic numerical implementation of the Bethe ansatz. Finally, using tDMRG we calculate the time evolution of the Rényi entanglement entropies. For large subsystems and for any α, their density turns out to be compatible with that of the thermodynamic Rényi entropies.
Robust MST-Based Clustering Algorithm.
Liu, Qidong; Zhang, Ruisheng; Zhao, Zhili; Wang, Zhenghai; Jiao, Mengyao; Wang, Guangjing
2018-06-01
Minimax similarity stresses the connectedness of points via mediating elements rather than favoring high mutual similarity. This grouping principle yields superior clustering results when mining arbitrarily shaped clusters in data, but it is not robust against noise and outliers. There are two main problems with the grouping principle: first, a single object that is far away from all other objects defines a separate cluster, and second, two connected clusters can be regarded as two parts of one cluster. To solve these problems, we propose a robust minimum spanning tree (MST)-based clustering algorithm in this letter. First, we separate the connected objects by applying a density-based coarsening phase, resulting in a low-rank matrix whose elements denote supernodes, each combining a set of nodes. Then a greedy method is presented to partition those supernodes by working on the low-rank matrix. Instead of removing the longest edges from the MST, our algorithm groups the data set based on minimax similarity. Finally, the assignment of all data points is achieved through their corresponding supernodes. Experimental results on many synthetic and real-world data sets show that our algorithm consistently outperforms the compared clustering algorithms.
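Minimax similarity has a convenient computational characterisation: the minimax distance between two points equals the largest edge on the path joining them in a minimum spanning tree. The sketch below computes these path-bottleneck distances; the paper's full pipeline (density-based coarsening, supernode partitioning) is not reproduced:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def minimax_distances(D):
    """Minimax (path-bottleneck) distances: the largest edge on the unique
    MST path between each pair of points. D is a dense distance matrix."""
    n = D.shape[0]
    T = minimum_spanning_tree(D).toarray()
    T = np.maximum(T, T.T)                        # symmetrise the tree
    M = np.full((n, n), np.inf)
    np.fill_diagonal(M, 0.0)
    for s in range(n):                            # traverse tree from each source
        stack = [s]
        while stack:
            u = stack.pop()
            for v in np.nonzero(T[u])[0]:
                cand = max(M[s, u], T[u, v])      # bottleneck along the path
                if cand < M[s, v]:
                    M[s, v] = cand
                    stack.append(v)
    return M

# toy usage: three nearby points and one outlier on a line
pts = np.array([[0.0], [1.0], [1.2], [5.0]])
D = np.abs(pts - pts.T)
print(minimax_distances(D))
```

Under this measure the three nearby points are all close to each other (bottleneck at most 1.0) while any path to the outlier carries the long 3.8 edge, which is the "connectedness via mediating elements" behaviour the letter builds on.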
Vision based obstacle detection and grouping for helicopter guidance
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Chatterji, Gano
1993-01-01
Electro-optical sensors can be used to compute range to objects in the flight path of a helicopter. The computation is based on the optical flow/motion at different points in the image. The motion algorithms provide a sparse set of ranges to discrete features in the image sequence as a function of azimuth and elevation. For obstacle avoidance guidance and display purposes, this discrete set of ranges, numbering from a few hundred to several thousand, needs to be grouped into sets which correspond to objects in the real world. This paper presents a new method for object segmentation based on clustering the sparse range information provided by motion algorithms together with the spatial relations provided by the static image. The range values are initially grouped into clusters based on depth. Subsequently, the clusters are modified by using the K-means algorithm in the inertial horizontal plane and the minimum spanning tree algorithm in the image plane. The object grouping allows interpolation within a group and enables the creation of dense range maps. Researchers in robotics have used densely scanned sequences of laser range images to build three-dimensional representations of the outside world; thus, modeling techniques developed for dense range images can be extended to sparse range images. The paper presents object segmentation results for a sequence of flight images.
[Cluster analysis in biomedical researches].
Akopov, A S; Moskovtsev, A A; Dolenko, S A; Savina, G D
2013-01-01
Cluster analysis is one of the most popular methods for the analysis of multi-parameter data. It reveals the internal structure of the data, grouping separate observations by their degree of similarity. This review defines the basic concepts of cluster analysis and discusses the most popular clustering algorithms: k-means, hierarchical algorithms, and Kohonen network algorithms. Examples of the use of these algorithms in biomedical research are given.
Flocking algorithm for autonomous flying robots.
Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás
2014-06-01
Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks.
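The viscous friction-like alignment term mentioned above has a simple form: an acceleration proportional to the difference between an agent's velocity and the mean velocity of its neighbours. A minimal sketch follows; the interaction radius and gain are illustrative, and the paper's full model also includes delay, sensor noise, and inertia:

```python
import numpy as np

def alignment_term(vel, pos, r_align=10.0, c_frict=0.5):
    """Viscous friction-like alignment: each agent feels an acceleration
    pulling its velocity toward the mean velocity of neighbours within
    r_align metres. vel, pos: (n, 2) arrays."""
    n = len(pos)
    acc = np.zeros_like(vel)
    for i in range(n):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (d < r_align) & (d > 0)            # neighbours, excluding self
        if nbr.any():
            acc[i] = c_frict * (vel[nbr].mean(axis=0) - vel[i])
    return acc

# toy usage: two nearby agents align; the distant third feels nothing
pos = np.array([[0.0, 0.0], [3.0, 0.0], [30.0, 0.0]])
vel = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(alignment_term(vel, pos))
```

Because the term damps velocity differences rather than positions, it acts like friction in velocity space, which is why it helps suppress the oscillations that delays and noise would otherwise amplify.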
Rodrigues, Anabela; Gomes, Manuela; Carrilho, Alexandre; Nunes, António Robalo; Orfão, Rosário; Alves, Ângela; Aguiar, José; Campos, Manuel
2014-01-01
Several clinical settings are associated with specific coagulopathies that predispose to uncontrolled bleeding. With the growing concern about the need for optimizing transfusion practices and improving treatment of the bleeding patient, a group of 9 Portuguese specialists (Share Network Group) was created to discuss and develop algorithms for the clinical evaluation and control of coagulopathic bleeding in the following perioperative clinical settings: surgery, trauma, and postpartum hemorrhage. The 3 algorithms developed by the group were presented at the VIII National Congress of the Associação Portuguesa de Imuno-hemoterapia in October 2013. They aim to provide a structured approach for clinicians to rapidly diagnose the status of coagulopathy in order to achieve an earlier and more effective bleeding control, reduce transfusion requirements, and improve patient outcomes. The group highlights the importance of communication between different specialties involved in the care of bleeding patients in order to achieve better results. PMID:25424528
Graulty, Christian; Papaioannou, Orestis; Bauer, Phoebe; Pitts, Michael A; Canseco-Gonzalez, Enriqueta
2018-04-01
In auditory-visual sensory substitution, visual information (e.g., shape) can be extracted through strictly auditory input (e.g., soundscapes). Previous studies have shown that image-to-sound conversions that follow simple rules [such as the Meijer algorithm; Meijer, P. B. L. An experimental system for auditory image representation. Transactions on Biomedical Engineering, 39, 111-121, 1992] are highly intuitive and rapidly learned by both blind and sighted individuals. A number of recent fMRI studies have begun to explore the neuroplastic changes that result from sensory substitution training. However, the time course of cross-sensory information transfer in sensory substitution is largely unexplored and may offer insights into the underlying neural mechanisms. In this study, we recorded ERPs to soundscapes before and after sighted participants were trained with the Meijer algorithm. We compared these posttraining versus pretraining ERP differences with those of a control group who received the same set of 80 auditory/visual stimuli but with arbitrary pairings during training. Our behavioral results confirmed the rapid acquisition of cross-sensory mappings, and the group trained with the Meijer algorithm was able to generalize their learning to novel soundscapes at impressive levels of accuracy. The ERP results revealed an early cross-sensory learning effect (150-210 msec) that was significantly enhanced in the algorithm-trained group compared with the control group as well as a later difference (420-480 msec) that was unique to the algorithm-trained group. These ERP modulations are consistent with previous fMRI results and provide additional insight into the time course of cross-sensory information transfer in sensory substitution.
The pearls of using real-world evidence to discover social groups
NASA Astrophysics Data System (ADS)
Cardillo, Raymond A.; Salerno, John J.
2005-03-01
In previous work, we introduced a new paradigm called Uni-Party Data Community Generation (UDCG) and a new methodology to discover social groups (a.k.a. community models) called Link Discovery based on Correlation Analysis (LDCA). We further advanced this work by experimenting with a corpus of evidence obtained from a Ponzi scheme investigation. That work identified several UDCG algorithms, developed what we called "Importance Measures" to compare the accuracy of the algorithms against ground truth, and presented a Concept of Operations (CONOPS) that criminal investigators could use to discover social groups. However, that work used a rather small random sample of manually edited documents, because the evidence contained far too many OCR and other extraction errors. Deferring the evidence extraction errors allowed us to continue experimenting with UDCG algorithms, but meant that only a small fraction of the available evidence was used. In an attempt to discover techniques that are more practical in the near term, our most recent work focuses on being able to use an entire corpus of real-world evidence to discover social groups. This paper discusses the complications of extracting evidence, suggests a method of performing name resolution, presents a new UDCG algorithm, and discusses our future direction in this area.
NASA Technical Reports Server (NTRS)
Entekhabi, Dara; Njoku, Eni E.; O'Neill, Peggy E.; Kellogg, Kent H.; Entin, Jared K.
2010-01-01
Talk outline:
1. Derivation of SMAP basic and applied science requirements from the NRC Earth Science Decadal Survey applications
2. Data products and latencies
3. Algorithm highlights
4. SMAP Algorithm Testbed
5. SMAP Working Groups and community engagement
Li, Qi; Melton, Kristin; Lingren, Todd; Kirkendall, Eric S; Hall, Eric; Zhai, Haijun; Ni, Yizhao; Kaiser, Megan; Stoutenborough, Laura; Solti, Imre
2014-01-01
Although electronic health records (EHRs) have the potential to provide a foundation for quality and safety algorithms, few studies have measured their impact on automated adverse event (AE) and medical error (ME) detection within the neonatal intensive care unit (NICU) environment. This paper presents two phenotyping AE and ME detection algorithms (ie, IV infiltrations, narcotic medication oversedation and dosing errors) and describes manual annotation of airway management and medication/fluid AEs from NICU EHRs. From 753 NICU patient EHRs from 2011, we developed two automatic AE/ME detection algorithms, and manually annotated 11 classes of AEs in 3263 clinical notes. Performance of the automatic AE/ME detection algorithms was compared to trigger tool and voluntary incident reporting results. AEs in clinical notes were double annotated and consensus achieved under neonatologist supervision. Sensitivity, positive predictive value (PPV), and specificity are reported. Twelve severe IV infiltrates were detected. The algorithm identified one more infiltrate than the trigger tool and eight more than incident reporting. One narcotic oversedation was detected demonstrating 100% agreement with the trigger tool. Additionally, 17 narcotic medication MEs were detected, an increase of 16 cases over voluntary incident reporting. Automated AE/ME detection algorithms provide higher sensitivity and PPV than currently used trigger tools or voluntary incident-reporting systems, including identification of potential dosing and frequency errors that current methods are unequipped to detect. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Nishida, Takahiro; Sonoda, Hiromichi; Oishi, Yasuhisa; Tanoue, Yoshihisa; Nakashima, Atsuhiro; Shiokawa, Yuichi; Tominaga, Ryuji
2014-04-01
The European System for Cardiac Operative Risk Evaluation (EuroSCORE) II was developed to improve the overestimation of surgical risk associated with the original (additive and logistic) EuroSCOREs. The purpose of this study was to evaluate the significance of the EuroSCORE II by comparing its performance with that of the original EuroSCOREs in Japanese patients undergoing surgery on the thoracic aorta. We have calculated the predicted mortalities according to the additive EuroSCORE, logistic EuroSCORE and EuroSCORE II algorithms in 461 patients who underwent surgery on the thoracic aorta during a period of 20 years (1993-2013). The actual in-hospital mortality rates in the low- (additive EuroSCORE of 3-6), moderate- (7-11) and high-risk (≥11) groups (followed by overall mortality) were 1.3, 6.2 and 14.4% (7.2% overall), respectively. Among the three different risk groups, the expected mortality rates were 5.5 ± 0.6, 9.1 ± 0.7 and 13.5 ± 0.2% (9.5 ± 0.1% overall) by the additive EuroSCORE algorithm, 5.3 ± 0.1, 16 ± 0.4 and 42.4 ± 1.3% (19.9 ± 0.7% overall) by the logistic EuroSCORE algorithm and 1.6 ± 0.1, 5.2 ± 0.2 and 18.5 ± 1.3% (7.4 ± 0.4% overall) by the EuroSCORE II algorithm, indicating poor prediction (P < 0.0001) of the mortality in the high-risk group, especially by the logistic EuroSCORE. The areas under the receiver operating characteristic curves of the additive EuroSCORE, logistic EuroSCORE and EuroSCORE II algorithms were 0.6937, 0.7169 and 0.7697, respectively. Thus, the mortality expected by the EuroSCORE II more closely matched the actual mortality in all three risk groups. In contrast, the mortality expected by the logistic EuroSCORE overestimated the risks in the moderate- (P = 0.0002) and high-risk (P < 0.0001) patient groups. Although all of the original EuroSCOREs and EuroSCORE II appreciably predicted the surgical mortality for thoracic aortic surgery in Japanese patients, the EuroSCORE II best predicted the mortalities in all risk groups.
Study of phase clustering method for analyzing large volumes of meteorological observation data
NASA Astrophysics Data System (ADS)
Volkov, Yu. V.; Krutikov, V. A.; Botygin, I. A.; Sherstnev, V. S.; Sherstneva, A. I.
2017-11-01
The article describes an iterative parallel phase grouping algorithm for temperature field classification. The algorithm is based on a modified structure-forming method that uses the analytic signal. The method can address climate classification as well as climatic zoning tasks at any temporal or spatial scale. When applied to surface temperature measurement series, the algorithm finds climatic structures with correlated changes of the temperature field, supports conclusions about climate uniformity in a given area, and tracks climate changes over time through shifts in the type groups. The information on climate type groups specific to selected geographical areas is supplemented by a genetic scheme of class distribution that depends on changes in the mutual correlation between monthly average ground temperatures.
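One way to realise phase-based grouping is to extract each station's instantaneous phase from the analytic (Hilbert) signal and then cluster stations whose phase trajectories co-vary. The sketch below does this with k-means standing in for the paper's iterative parallel grouping; the synthetic two-regime data and all names are illustrative:

```python
import numpy as np
from scipy.signal import hilbert
from sklearn.cluster import KMeans

def phase_groups(series, n_groups=2, seed=0):
    """Group stations by the instantaneous phase of their analytic signal.
    series: (n_stations, n_samples) array. k-means on the unwrapped phase
    trajectories stands in for the paper's iterative grouping."""
    phases = np.unwrap(np.angle(hilbert(series, axis=1)), axis=1)
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=seed)
    return km.fit_predict(phases)

# toy usage: six synthetic monthly temperature series in two phase regimes
t = np.arange(240)
a = np.sin(2 * np.pi * t / 12)                 # regime 1
b = np.sin(2 * np.pi * t / 12 + 1.5)           # same cycle, shifted phase
noise = 0.1 * np.random.default_rng(0).normal(size=(6, 240))
series = np.vstack([a, a, a, b, b, b]) + noise
print(phase_groups(series))                     # e.g. [0 0 0 1 1 1]
```

Stations whose temperature fields change in a correlated way share a phase trajectory and therefore land in the same type group, mirroring the climatic structures described above.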
An Island Grouping Genetic Algorithm for Fuzzy Partitioning Problems
Salcedo-Sanz, S.; Del Ser, J.; Geem, Z. W.
2014-01-01
This paper presents a novel fuzzy clustering technique based on grouping genetic algorithms (GGAs), a class of evolutionary algorithms especially modified to tackle grouping problems. Our approach hinges on a GGA devised for fuzzy clustering by means of a novel encoding of individuals (containing elements and clusters sections), a new fitness function (a superior modification of the Davies-Bouldin index), specially tailored crossover and mutation operators, and a scheme based on local search and parallelization, inspired by an island-based model of evolution. The overall performance of our approach has been assessed on a number of synthetic and real fuzzy clustering problems with different objective functions and distance measures, from which it is concluded that the proposed approach shows excellent performance in all cases. PMID:24977235
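For reference, the unmodified Davies-Bouldin index that this fitness builds on can be computed as below (lower is better); the paper's actual fitness is a modified version not reproduced here:

```python
import numpy as np

def davies_bouldin(X, labels):
    """Standard Davies-Bouldin index: average, over clusters, of the worst
    ratio of summed within-cluster scatter to between-centroid distance."""
    ks = np.unique(labels)
    cents = np.array([X[labels == k].mean(axis=0) for k in ks])
    scatt = np.array([np.linalg.norm(X[labels == k] - c, axis=1).mean()
                      for k, c in zip(ks, cents)])
    db = 0.0
    for i in range(len(ks)):
        ratios = [(scatt[i] + scatt[j]) / np.linalg.norm(cents[i] - cents[j])
                  for j in range(len(ks)) if j != i]
        db += max(ratios)
    return db / len(ks)

# toy usage: two well-separated Gaussian blobs give a small index
X = np.vstack([np.random.default_rng(0).normal(m, 0.3, size=(30, 2))
               for m in (0.0, 3.0)])
labels = np.repeat([0, 1], 30)
print(round(davies_bouldin(X, labels), 3))
```

In a GGA, an index like this scores each candidate grouping encoded in an individual, so compact, well-separated partitions survive selection.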
Hundalani, Shilpa G; Richards-Kortum, Rebecca; Oden, Maria; Kawaza, Kondwani; Gest, Alfred; Molyneux, Elizabeth
2015-07-01
Low-cost bubble continuous positive airway pressure (bCPAP) systems have been shown to improve survival in neonates with respiratory distress in developing countries, including Malawi. District hospitals in Malawi implementing CPAP requested simple and reliable guidelines to enable healthcare workers with basic skills and minimal training to determine when treatment with CPAP is necessary. We developed and validated TRY (T: Tone is good, R: Respiratory distress, Y: Yes) CPAP, a simple algorithm to identify neonates with respiratory distress who would benefit from CPAP. The aim was to validate the TRY CPAP algorithm for neonates with respiratory distress in a low-resource setting. We constructed the algorithm using a combination of vital signs, tone and birth weight to determine the need for CPAP in neonates with respiratory distress. Neonates admitted to the neonatal ward of Queen Elizabeth Central Hospital, in Blantyre, Malawi, were assessed in a prospective, cross-sectional study. Nurses and paediatricians-in-training assessed neonates to determine whether they required CPAP using the TRY CPAP algorithm. To establish the accuracy of the TRY CPAP algorithm in evaluating the need for CPAP, their assessment was compared with the decision of a neonatologist blinded to the TRY CPAP algorithm findings. 325 neonates were evaluated over a 2-month period; 13% were deemed to require CPAP by the neonatologist. The inter-rater reliability with the algorithm was 0.90 for nurses and 0.97 for paediatricians-in-training, using the neonatologist's assessment as the reference standard. The TRY CPAP algorithm has the potential to be a simple and reliable tool to assist nurses and clinicians in identifying neonates who require treatment with CPAP in low-resource settings.
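A minimal sketch of a TRY-style triage rule follows. The abstract states only that the algorithm combines vital signs, tone and birth weight; the specific cutoffs and combination below are hypothetical placeholders for illustration, not the published criteria.

    # Hypothetical TRY-style decision rule; thresholds are placeholders.
    def needs_cpap(tone_is_good: bool, respiratory_distress: bool,
                   birth_weight_g: float) -> bool:
        # Only neonates above a (hypothetical) minimum weight with good
        # tone and signs of respiratory distress are allocated to CPAP.
        return tone_is_good and respiratory_distress and birth_weight_g >= 1000

    print(needs_cpap(True, True, 1800))   # True
    print(needs_cpap(False, True, 1800))  # False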
Hirano, Jinichi; Watanabe, Koichiro; Suzuki, Takefumi; Uchida, Hiroyuki; Den, Ryosuke; Kishimoto, Taishiro; Nagasawa, Takashi; Tomita, Yusuke; Hara, Koichiro; Ochi, Hiromi; Kobayashi, Yoshimi; Ishii, Mutsuko; Fujita, Akane; Kanai, Yoshihiko; Goto, Megumi; Hayashi, Hiromi; Inamura, Kanako; Ooshima, Fumiko; Sumida, Mariko; Ozawa, Tomoko; Sekigawa, Kayoko; Nagaoka, Maki; Yoshimura, Kae; Konishi, Mika; Inagaki, Ataru; Saito, Takuya; Motohashi, Nobutaka; Mimura, Masaru; Okubo, Yoshiro; Kato, Motoichiro
2013-01-01
Objective: The use of an algorithm may facilitate measurement-based treatment and result in more rational therapy. We conducted a 1-year, open-label study to compare various outcomes of algorithm-based treatment (ALGO) for schizophrenia versus treatment-as-usual (TAU), for which evidence has been very scarce. Methods: In ALGO, patients with schizophrenia (Diagnostic and Statistical Manual of Mental Disorders, fourth edition) were treated with an algorithm consisting of a series of antipsychotic monotherapies guided by the total score on the Positive and Negative Syndrome Scale (PANSS). When posttreatment PANSS total scores were above 70% of those at baseline in the first and second stages, or above 80% in the third stage, patients proceeded to the next treatment stage with a different antipsychotic. In contrast, TAU represented the best clinical judgment of the treating psychiatrists. Results: Forty-two patients (21 females, 39.0 ± 10.9 years old) participated in this study. The baseline PANSS total score indicated the presence of severe psychopathology and was significantly higher in the ALGO group (n = 25; 106.9 ± 20.0) than in the TAU group (n = 17; 92.2 ± 18.3) (P = 0.021). As a result of treatment, there were no significant differences between the groups in PANSS reduction rates, premature attrition rates, or a variety of other clinical measures. Despite an effort to keep the pharmacologic treatment of each group distinct, pharmacotherapy in the TAU group eventually became similar in quality to that of the ALGO group. Conclusion: While the results need to be interpreted carefully in light of the difficulty of distinguishing the two treatment styles, and more studies are necessary, algorithm-based antipsychotic treatment for schizophrenia compared well with treatment-as-usual in this study. PMID:24143104
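The stage-advance rule stated in the abstract translates directly into code. This sketch simply transcribes that rule; it is an illustration, not the study's protocol software.

    # ALGO stage-advance rule as described above: a patient moves to the
    # next antipsychotic monotherapy when the posttreatment PANSS total
    # remains above a stage-specific fraction of the baseline score.
    def advance_to_next_stage(stage: int, baseline_panss: float,
                              post_panss: float) -> bool:
        threshold = 0.70 if stage in (1, 2) else 0.80  # stage 3
        return post_panss > threshold * baseline_panss

    # Example: baseline 107, posttreatment 80 in stage 1 -> advance
    print(advance_to_next_stage(1, 107, 80))  # True (80 > 74.9)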
Bahouth, Mona N; Power, Melinda C; Zink, Elizabeth K; Kozeniewski, Kate; Kumble, Sowmya; Deluzio, Sandra; Urrutia, Victor C; Stevens, Robert D
2018-06-01
To measure the impact of a progressive mobility program on patients admitted to a neurocritical care unit (NCCU) with intracerebral hemorrhage (ICH). The early mobilization of critically ill patients with spontaneous ICH is a challenge owing to the potential for neurologic deterioration and hemodynamic lability in the acute phase of injury. Patients admitted to the intensive care unit have been excluded from randomized trials of early mobilization after stroke. An interdisciplinary working group developed a formalized NCCU Mobility Algorithm that allocates patients to incremental passive or active mobilization pathways on the basis of level of consciousness and motor function. In a quasi-experimental consecutive group comparison, patients with ICH admitted to the NCCU were analyzed in two 6-month epochs, before and after rollout of the algorithm. Mobilization and safety endpoints were compared between epochs. NCCU in an urban, academic hospital. Adult patients admitted to the NCCU with primary intracerebral hemorrhage. Progressive mobilization after stroke using a formalized mobility algorithm. Time to first mobilization. The two groups of patients with ICH (pre-algorithm rollout, n=28; post-algorithm rollout, n=29) were similar on baseline characteristics. Patients in the postintervention group were significantly more likely to undergo mobilization within the first 7 days after admission (odds ratio 8.7, 95% confidence interval 2.1, 36.6; P=.003). No neurologic deterioration, hypotension, falls, or line dislodgments were reported in association with mobilization. A nonsignificant difference in mortality was noted before versus after rollout of the algorithm (4% vs 24%, respectively, P=.12). The implementation of a progressive mobility algorithm was safe and associated with a higher likelihood of mobilization in the first week after spontaneous ICH. Research is needed to investigate methods and timing for the first mobilization in critically ill stroke patients.
Accurate and scalable social recommendation using mixed-membership stochastic block models.
Godoy-Lorite, Antonia; Guimerà, Roger; Moore, Cristopher; Sales-Pardo, Marta
2016-12-13
With increasing amounts of information available, modeling and predicting user preferences, for books or articles, for example, are becoming more important. We present a collaborative filtering model, with an associated scalable algorithm, that makes accurate predictions of users' ratings. Like previous approaches, we assume that there are groups of users and of items and that the rating a user gives an item is determined by their respective group memberships. However, we allow each user and each item to belong simultaneously to mixtures of different groups and, unlike many popular approaches such as matrix factorization, we do not assume that users in each group prefer a single group of items. In particular, we do not assume that ratings depend linearly on a measure of similarity, but allow probability distributions of ratings to depend freely on the user's and item's groups. The resulting overlapping groups and predicted ratings can be inferred with an expectation-maximization algorithm whose running time scales linearly with the number of observed ratings. Our approach enables us to predict user preferences in large datasets and is considerably more accurate than the current algorithms for such large datasets.
Accurate and scalable social recommendation using mixed-membership stochastic block models
Godoy-Lorite, Antonia; Moore, Cristopher
2016-01-01
With increasing amounts of information available, modeling and predicting user preferences—for books or articles, for example—are becoming more important. We present a collaborative filtering model, with an associated scalable algorithm, that makes accurate predictions of users’ ratings. Like previous approaches, we assume that there are groups of users and of items and that the rating a user gives an item is determined by their respective group memberships. However, we allow each user and each item to belong simultaneously to mixtures of different groups and, unlike many popular approaches such as matrix factorization, we do not assume that users in each group prefer a single group of items. In particular, we do not assume that ratings depend linearly on a measure of similarity, but allow probability distributions of ratings to depend freely on the user’s and item’s groups. The resulting overlapping groups and predicted ratings can be inferred with an expectation-maximization algorithm whose running time scales linearly with the number of observed ratings. Our approach enables us to predict user preferences in large datasets and is considerably more accurate than the current algorithms for such large datasets. PMID:27911773
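The prediction rule described in this model has a compact numerical form: the probability of each rating is a bilinear mixture over the user's and item's group memberships. The sketch below illustrates that computation with made-up membership vectors and rating tables; it is not the published inference code.

    import numpy as np

    # P(r | user, item) = sum_{k,l} theta_k * eta_l * p[k, l, r]
    K, L, R = 3, 2, 5            # user groups, item groups, rating levels
    rng = np.random.default_rng(0)
    theta = rng.dirichlet(np.ones(K))           # one user's memberships
    eta = rng.dirichlet(np.ones(L))             # one item's memberships
    p = rng.dirichlet(np.ones(R), size=(K, L))  # p[k, l] = dist over ratings

    rating_dist = np.einsum("k,l,klr->r", theta, eta, p)
    print(rating_dist, rating_dist.sum())       # a distribution over ratings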
The Effectiveness of Neurofeedback Training in Algorithmic Thinking Skills Enhancement.
Plerou, Antonia; Vlamos, Panayiotis; Triantafillidis, Chris
2017-01-01
Although research on learning difficulties is overall at an advanced stage, studies related to algorithmic thinking difficulties are limited, since interest in this field has arisen only recently. In this paper, an interactive evaluation screener enhanced with neurofeedback elements, for the evaluation of algorithmic task solving, is proposed. The effect of HCI, color, narration and neurofeedback elements was evaluated in the case of algorithmic task assessment. Results suggest enhanced performance for the neurofeedback-trained group in terms of total correct and optimal algorithmic task solutions. Furthermore, the findings suggest that skills concerning the way an algorithm is conceived, designed, applied and evaluated are essentially improved.
PGCA: An algorithm to link protein groups created from MS/MS data
Sasaki, Mayu; Hollander, Zsuzsanna; Smith, Derek; McManus, Bruce; McMaster, W. Robert; Ng, Raymond T.; Cohen Freue, Gabriela V.
2017-01-01
The quantitation of proteins using shotgun proteomics has gained popularity in the last decades, simplifying sample handling procedures, removing extensive protein separation steps and achieving a relatively high-throughput readout. The process starts with the digestion of the protein mixture into peptides, which are then separated by liquid chromatography and sequenced by tandem mass spectrometry (MS/MS). At the end of the workflow, recovering the identity of the proteins originally present in the sample is often a difficult and ambiguous process, because more than one protein identifier may match a set of peptides identified from the MS/MS spectra. To address this identification problem, many MS/MS data processing software tools combine all plausible protein identifiers matching a common set of peptides into a protein group. However, this solution introduces new challenges in studies with multiple experimental runs, characterized by three main factors: i) protein group identifiers are local, i.e., they vary from run to run; ii) the composition of each group may change across runs; and iii) the supporting evidence of proteins within each group may also change across runs. Since in general there is no conclusive evidence about the absence of proteins in the groups, protein groups need to be linked across different runs in subsequent statistical analyses. We propose an algorithm, called the Protein Group Code Algorithm (PGCA), to link groups from multiple experimental runs by forming global protein groups from connected local groups. The algorithm is computationally inexpensive and enables the connection and analysis of lists of protein groups across runs as needed in biomarker studies. We illustrate the identification problem and the stability of the PGCA mapping using 65 iTRAQ experimental runs. Further, we use two biomarker studies to show how PGCA enables the discovery of relevant candidate protein group markers with similar but non-identical compositions in different runs. PMID:28562641
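The core linking idea, forming global groups from connected local groups, can be sketched as a connected-components computation over shared protein identifiers. This is an illustrative reimplementation of the idea, not the published PGCA code.

    from collections import defaultdict

    def link_groups(runs):
        # runs: list of runs, each a list of local groups (sets of protein IDs)
        parent = {}
        def find(x):
            while parent.setdefault(x, x) != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        def union(a, b):
            parent[find(a)] = find(b)
        for run in runs:
            for group in run:
                ids = list(group)
                for a, b in zip(ids, ids[1:]):  # chain members of one group
                    union(a, b)
                if len(ids) == 1:
                    find(ids[0])                # register singletons too
        clusters = defaultdict(set)
        for x in parent:
            clusters[find(x)].add(x)
        return list(clusters.values())

    runs = [[{"P1", "P2"}, {"P3"}], [{"P2", "P4"}, {"P3", "P5"}]]
    # two global groups: {P1, P2, P4} and {P3, P5} (order may vary)
    print(link_groups(runs))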
Confidence Sharing: An Economic Strategy for Efficient Information Flows in Animal Groups
Korman, Amos; Greenwald, Efrat; Feinerman, Ofer
2014-01-01
Social animals may share information to obtain a more complete and accurate picture of their surroundings. However, physical constraints on communication limit the flow of information between interacting individuals in a way that can cause an accumulation of errors and deteriorated collective behaviors. Here, we theoretically study a general model of information sharing within animal groups. We take an algorithmic perspective to identify efficient communication schemes that are, nevertheless, economic in terms of communication, memory and individual internal computation. We present a simple and natural algorithm in which each agent compresses all information it has gathered into a single parameter that represents its confidence in its behavior. Confidence is communicated between agents by means of active signaling. We motivate this model with novel and existing empirical evidence for confidence sharing in animal groups. We rigorously show that this algorithm competes extremely well with the best possible algorithm that operates without any computational constraints. We also show that this algorithm is minimal, in the sense that further reduction in communication may significantly reduce performance. Our proofs rely on the Cramér-Rao bound and on our definition of a Fisher Channel Capacity. We use these concepts to quantify information flows within the group, which are then used to obtain lower bounds on collective performance. The abstract nature of our model makes it rigorously solvable and its conclusions highly general. Indeed, our results suggest confidence sharing as a central notion in the context of animal communication. PMID:25275649
Confidence sharing: an economic strategy for efficient information flows in animal groups.
Korman, Amos; Greenwald, Efrat; Feinerman, Ofer
2014-10-01
Social animals may share information to obtain a more complete and accurate picture of their surroundings. However, physical constraints on communication limit the flow of information between interacting individuals in a way that can cause an accumulation of errors and deteriorated collective behaviors. Here, we theoretically study a general model of information sharing within animal groups. We take an algorithmic perspective to identify efficient communication schemes that are, nevertheless, economic in terms of communication, memory and individual internal computation. We present a simple and natural algorithm in which each agent compresses all information it has gathered into a single parameter that represents its confidence in its behavior. Confidence is communicated between agents by means of active signaling. We motivate this model with novel and existing empirical evidence for confidence sharing in animal groups. We rigorously show that this algorithm competes extremely well with the best possible algorithm that operates without any computational constraints. We also show that this algorithm is minimal, in the sense that further reduction in communication may significantly reduce performance. Our proofs rely on the Cramér-Rao bound and on our definition of a Fisher Channel Capacity. We use these concepts to quantify information flows within the group, which are then used to obtain lower bounds on collective performance. The abstract nature of our model makes it rigorously solvable and its conclusions highly general. Indeed, our results suggest confidence sharing as a central notion in the context of animal communication.
Improving the Performance of AI Algorithms.
1987-09-01
favorably influenced by such programming practices as the intelligent selection of data formats to minimize the need for ... Subject terms: Artificial Intelligence (AI) Algorithms, Improving Software Performance, Program Behavior, Predicting Performance ... applications in communications, threat assessment, resource availability, and so forth. This need for intelligent and adaptable behavior indicates that the
Computations involving differential operators and their actions on functions
NASA Technical Reports Server (NTRS)
Crouch, Peter E.; Grossman, Robert; Larson, Richard
1991-01-01
The algorithms derived by Grossman and Larson (1989) are further developed for rewriting expressions involving differential operators. The differential operators involved arise in the local analysis of nonlinear dynamical systems. These algorithms are extended in two different directions: the algorithms are generalized so that they apply to differential operators on groups, and data structures and algorithms are developed to compute symbolically the action of differential operators on functions. Both of these generalizations are needed for applications.
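A small symbolic computation illustrates the kind of rewriting involved when differential operators act on functions. The sketch below uses sympy (an assumption, not the cited implementation) to show that the operators xD and Dx differ by the identity, a typical operator-rewriting identity.

    import sympy as sp

    x = sp.symbols("x")
    f = sp.Function("f")(x)

    xD = x * sp.diff(f, x)        # (x D) f = x f'
    Dx = sp.diff(x * f, x)        # (D x) f = f + x f'
    print(sp.simplify(Dx - xD))   # prints f(x), i.e. [D, x] = identity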
Fosse-Edorh, S; Rigou, A; Morin, S; Fezeu, L; Mandereau-Bruno, L; Fagot-Campagna, A
2017-10-01
Medico-administrative databases represent a very interesting source of information in the field of endocrine, nutritional and metabolic diseases. The objective of this article is to describe the early work of the Redsiam working group in this field. Algorithms developed in France in the fields of diabetes, the treatment of dyslipidemia, precocious puberty, and bariatric surgery, based on data from the National Inter-scheme Health Insurance Information System (SNIIRAM), were identified and described. Three algorithms for identifying people with diabetes are available in France. These algorithms are based either on full insurance coverage for diabetes, on claims for diabetes treatments, or on the combination of these two methods associated with hospitalizations related to diabetes. Each of these algorithms has a different purpose, and the choice should depend on the goal of the study. Algorithms for identifying people treated for dyslipidemia or precocious puberty or who underwent bariatric surgery are also available. Early work from the Redsiam working group in the field of endocrine, nutritional and metabolic diseases produced an inventory of existing algorithms in France, linked with their goals, together with a presentation of their limitations and advantages, providing useful information for the scientific community. This work will continue with discussions about algorithms on the incidence of diabetes in children, thyroidectomy for thyroid nodules, hypothyroidism, hypoparathyroidism, and amyloidosis.
Deptuch, Grzegorz W.; Fahim, Farah; Grybos, Pawel; ...
2017-06-28
An on-chip implementable algorithm for allocation of an X-ray photon imprint, called a hit, to a single pixel in the presence of charge sharing in a highly segmented pixel detector is described. Its proof-of-principle implementation is also given, supported by the results of tests using a highly collimated X-ray photon beam from a synchrotron source. The algorithm handles asynchronous arrivals of X-ray photons. Activation of groups of pixels, comparisons of peak amplitudes of pulses within an active neighborhood, and finally latching of the results of these comparisons constitute the three procedural steps of the algorithm. A grouping of pixels into one virtual pixel, which recovers composite signals, and event-driven strobes, which control comparisons of fractional signals between neighboring pixels, are the actuators of the algorithm. The circuitry necessary to implement the algorithm requires an extensive inter-pixel connection grid of analog and digital signals that are exchanged between pixels. A test-circuit implementation of the algorithm was achieved with a small array of 32 × 32 pixels, and the device was exposed to an 8 keV X-ray beam highly collimated to a diameter of 3 μm. The results of these tests are given in this paper, assessing the physical implementation of the algorithm.
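In software terms, the allocation step amounts to a winner-take-all comparison of pulse peaks within each pixel's active neighborhood. The following numpy sketch illustrates that idea on a toy peak map; it is an offline illustration, not the on-chip circuitry.

    import numpy as np

    # Assign each hit to the pixel whose pulse peak is the largest within
    # its 3x3 neighborhood (and above threshold), so charge shared among
    # neighbors is attributed to a single pixel.
    def allocate_hits(peaks, threshold):
        hits = np.zeros_like(peaks, dtype=bool)
        p = np.pad(peaks, 1, constant_values=-np.inf)
        for i in range(peaks.shape[0]):
            for j in range(peaks.shape[1]):
                window = p[i:i + 3, j:j + 3]
                hits[i, j] = (peaks[i, j] > threshold
                              and peaks[i, j] == window.max())
        return hits

    peaks = np.array([[0.1, 0.2, 0.0],
                      [0.3, 2.5, 1.1],   # charge shared around one photon
                      [0.0, 0.9, 0.2]])
    print(allocate_hits(peaks, threshold=0.5).astype(int))  # 1 only at center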
Arts, E E A; Popa, C D; Den Broeder, A A; Donders, R; Sandoo, A; Toms, T; Rollefstad, S; Ikdahl, E; Semb, A G; Kitas, G D; Van Riel, P L C M; Fransen, J
2016-04-01
Predictive performance of cardiovascular disease (CVD) risk calculators appears suboptimal in rheumatoid arthritis (RA). A disease-specific CVD risk algorithm may improve CVD risk prediction in RA. The objectives of this study are to adapt the Systematic COronary Risk Evaluation (SCORE) algorithm with determinants of CVD risk in RA and to assess the accuracy of CVD risk prediction calculated with the adapted SCORE algorithm. Data from the Nijmegen early RA inception cohort were used. The primary outcome was first CVD events. The SCORE algorithm was recalibrated by reweighting the included traditional CVD risk factors and adapted by adding other potential predictors of CVD. Predictive performance of the recalibrated and adapted SCORE algorithms was assessed, and the adapted SCORE was externally validated. Of the 1016 included patients with RA, 103 patients experienced a CVD event. Discriminatory ability was comparable across the original, recalibrated and adapted SCORE algorithms. The Hosmer-Lemeshow test results indicated that all three algorithms provided poor model fit (p<0.05) for the Nijmegen and external validation cohorts. The adapted SCORE algorithm mainly improves CVD risk estimation in non-event cases and does not show a clear advantage in reclassifying patients with RA who develop CVD (event cases) into more appropriate risk groups. This study demonstrates for the first time that adaptations of the SCORE algorithm do not provide sufficient improvement in risk prediction of future CVD in RA to serve as an appropriate alternative to the original SCORE. Risk assessment using the original SCORE algorithm may underestimate CVD risk in patients with RA.
Performance of Blind Source Separation Algorithms for FMRI Analysis using a Group ICA Method
Correa, Nicolle; Adali, Tülay; Calhoun, Vince D.
2007-01-01
Independent component analysis (ICA) is a popular blind source separation (BSS) technique that has proven to be promising for the analysis of functional magnetic resonance imaging (fMRI) data. A number of ICA approaches have been used for fMRI data analysis, and even more ICA algorithms exist; however, the impact of using different algorithms on the results is largely unexplored. In this paper, we study the performance of four major classes of algorithms for spatial ICA, namely information maximization, maximization of non-Gaussianity, joint diagonalization of cross-cumulant matrices, and second-order correlation based methods, when they are applied to fMRI data from subjects performing a visuo-motor task. We use a group ICA method to study the variability among different ICA algorithms and propose several analysis techniques to evaluate their performance. We compare how different ICA algorithms estimate activations in expected neuronal areas. The results demonstrate that the ICA algorithms using higher-order statistical information prove to be quite consistent for fMRI data analysis. Infomax, FastICA, and JADE all yield reliable results, each having its strengths in specific areas. EVD, an algorithm using second-order statistics, does not perform reliably for fMRI data. Additionally, for the iterative ICA algorithms, it is important to investigate the variability of the estimates from different runs. We test the consistency of the iterative algorithms, Infomax and FastICA, by running each algorithm a number of times with different initializations, and we note that they yield consistent results over these multiple runs. Our results greatly improve our confidence in the consistency of ICA for fMRI data analysis. PMID:17540281
Performance of blind source separation algorithms for fMRI analysis using a group ICA method.
Correa, Nicolle; Adali, Tülay; Calhoun, Vince D
2007-06-01
Independent component analysis (ICA) is a popular blind source separation technique that has proven to be promising for the analysis of functional magnetic resonance imaging (fMRI) data. A number of ICA approaches have been used for fMRI data analysis, and even more ICA algorithms exist; however, the impact of using different algorithms on the results is largely unexplored. In this paper, we study the performance of four major classes of algorithms for spatial ICA, namely, information maximization, maximization of non-Gaussianity, joint diagonalization of cross-cumulant matrices and second-order correlation-based methods, when they are applied to fMRI data from subjects performing a visuo-motor task. We use a group ICA method to study variability among different ICA algorithms, and we propose several analysis techniques to evaluate their performance. We compare how different ICA algorithms estimate activations in expected neuronal areas. The results demonstrate that the ICA algorithms using higher-order statistical information prove to be quite consistent for fMRI data analysis. Infomax, FastICA and joint approximate diagonalization of eigenmatrices (JADE) all yield reliable results, with each having its strengths in specific areas. Eigenvalue decomposition (EVD), an algorithm using second-order statistics, does not perform reliably for fMRI data. Additionally, for iterative ICA algorithms, it is important to investigate the variability of estimates from different runs. We test the consistency of the iterative algorithms Infomax and FastICA by running the algorithm a number of times with different initializations, and we note that they yield consistent results over these multiple runs. Our results greatly improve our confidence in the consistency of ICA for fMRI data analysis.
Comparison Of Eigenvector-Based Statistical Pattern Recognition Algorithms For Hybrid Processing
NASA Astrophysics Data System (ADS)
Tian, Q.; Fainman, Y.; Lee, Sing H.
1989-02-01
The pattern recognition algorithms based on eigenvector analysis (group 2) are theoretically and experimentally compared in this part of the paper. Group 2 consists of the Foley-Sammon (F-S) transform, the Hotelling trace criterion (HTC), the Fukunaga-Koontz (F-K) transform, the linear discriminant function (LDF) and the generalized matched filter (GMF). It is shown that all eigenvector-based algorithms can be represented in a generalized eigenvector form. However, the calculations of the discriminant vectors differ between algorithms. Summaries of how to calculate the discriminant functions for the F-S, HTC and F-K transforms are provided. Especially in the more practical, underdetermined case, where the number of training images is less than the number of pixels in each image, the calculations usually require the inversion of a large, singular pixel correlation (or covariance) matrix. We suggest solving this problem by finding its pseudo-inverse, which requires inverting only the smaller, non-singular image correlation (or covariance) matrix plus multiplying several non-singular matrices. We also theoretically compare the classification effectiveness of the discriminant functions from F-S, HTC and F-K with those of LDF and GMF, and the linear-mapping-based algorithms with the eigenvector-based algorithms. Experimentally, we compare the eigenvector-based algorithms using a set of image databases, each image consisting of 64 × 64 pixels.
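The pseudo-inverse shortcut for the underdetermined case can be verified numerically. With a data matrix A of shape (pixels, images) and far fewer images than pixels, the pseudo-inverse of the large, singular pixel correlation matrix C = A Aᵀ can be built from the small, non-singular image correlation matrix G = Aᵀ A via C⁺ = A G⁻² Aᵀ, avoiding any large-matrix inversion. The numbers below are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    pixels, images = 1024, 20          # e.g. 32 x 32 images, 20 training images
    A = rng.normal(size=(pixels, images))

    C = A @ A.T                        # large, rank-deficient (singular)
    G = A.T @ A                        # small and non-singular
    G_inv = np.linalg.inv(G)
    C_pinv = A @ G_inv @ G_inv @ A.T   # C^+ = A G^{-2} A^T

    # agreement with numpy's SVD-based pseudo-inverse
    print(np.allclose(C_pinv, np.linalg.pinv(C)))  # True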
Group implicit concurrent algorithms in nonlinear structural dynamics
NASA Technical Reports Server (NTRS)
Ortiz, M.; Sotelino, E. D.
1989-01-01
During the 1970s and 1980s, considerable effort was devoted to developing efficient and reliable time-stepping procedures for transient structural analysis. Mathematically, the equations governing this type of problem are generally stiff, i.e., they exhibit a wide spectrum in the linear range. The algorithms best suited to this type of application are those which accurately integrate the low-frequency content of the response without necessitating the resolution of the high-frequency modes. This means that the algorithms must be unconditionally stable, which in turn rules out explicit integration. The most exciting possibility in the algorithm development area in recent years has been the advent of parallel computers with multiprocessing capabilities. This work is therefore mainly concerned with the development of parallel algorithms in the area of structural dynamics. A primary objective is to devise unconditionally stable and accurate time-stepping procedures which lend themselves to an efficient implementation in concurrent machines. Some features of the new computer architecture are summarized, and a brief survey of current efforts in the area is presented. A new class of concurrent procedures, or Group Implicit algorithms, is introduced and analyzed. The numerical simulation shows that GI algorithms hold considerable promise for application in coarse-grain as well as medium-grain parallel computers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deptuch, G. W.; Fahim, F.; Grybos, P.
An on-chip implementable algorithm for allocation of an X-ray photon imprint, called a hit, to a single pixel in the presence of charge sharing in a highly segmented pixel detector is described. Its proof-of-principle implementation is also given, supported by the results of tests using a highly collimated X-ray photon beam from a synchrotron source. The algorithm handles asynchronous arrivals of X-ray photons. Activation of groups of pixels, comparisons of peak amplitudes of pulses within an active neighborhood, and finally latching of the results of these comparisons constitute the three procedural steps of the algorithm. A grouping of pixels into one virtual pixel that recovers composite signals, and event-driven strobes that control comparisons of fractional signals between neighboring pixels, are the actuators of the algorithm. The circuitry necessary to implement the algorithm requires an extensive inter-pixel connection grid of analog and digital signals that are exchanged between pixels. A test-circuit implementation of the algorithm was achieved with a small array of 32×32 pixels, and the device was exposed to an 8 keV X-ray beam highly collimated to a diameter of 3 μm. The results of these tests are given in the paper, assessing the physical implementation of the algorithm.
GOClonto: an ontological clustering approach for conceptualizing PubMed abstracts.
Zheng, Hai-Tao; Borchert, Charles; Kim, Hong-Gee
2010-02-01
Concurrent with progress in the biomedical sciences, an overwhelming amount of textual knowledge is accumulating in the biomedical literature. PubMed is the most comprehensive database collecting and managing biomedical literature. To help researchers easily understand collections of PubMed abstracts, numerous clustering methods have been proposed to group similar abstracts based on their shared features. However, most of these methods do not explore the semantic relationships among groupings of documents, which could help better illuminate the groupings of PubMed abstracts. To address this issue, we propose an ontological clustering method called GOClonto for conceptualizing PubMed abstracts. GOClonto uses latent semantic analysis (LSA) and the gene ontology (GO) to identify key gene-related concepts and their relationships, as well as to allocate PubMed abstracts based on these key gene-related concepts. Based on two PubMed abstract collections, the experimental results show that GOClonto is able to identify key gene-related concepts and outperforms the STC (suffix tree clustering) algorithm, the Lingo algorithm, the Fuzzy Ants algorithm, and the tolerance rough set (TRS) based clustering algorithm. Moreover, the two ontologies generated by GOClonto show significant informative conceptual structures.
Novel search algorithms for a mid-infrared spectral library of cotton contaminants.
Loudermilk, J Brian; Himmelsbach, David S; Barton, Franklin E; de Haseth, James A
2008-06-01
During harvest, a variety of plant-based contaminants are collected along with cotton lint. The USDA previously created a mid-infrared, attenuated total reflection (ATR), Fourier transform infrared (FT-IR) spectral library of cotton contaminants for contaminant identification, as the contaminants have negative impacts on yarn quality. This library has shown impressive identification rates for extremely similar cellulose-based contaminants in cases where the library was representative of the samples searched. When spectra of contaminant samples from crops grown in different geographic locations, seasons, and conditions and measured with a different spectrometer and accessories were searched, identification rates for standard search algorithms decreased significantly. Six standard algorithms were examined: dot product, correlation, sum of absolute values of differences, sum of the square root of the absolute values of differences, sum of absolute values of differences of derivatives, and sum of squared differences of derivatives. Four categories of contaminants derived from cotton plants were considered: leaf, stem, seed coat, and hull. Experiments revealed that the performance of the standard search algorithms depended upon the category of sample being searched and that different algorithms provided complementary information about sample identity. These results indicated that choosing a single standard algorithm to search the library was not possible. Three voting-scheme algorithms, based on result frequency, result rank, category frequency, or a combination of these factors for the results returned by the standard algorithms, were developed and tested for their capability to overcome the unpredictable performance of the standard algorithms. The group voting-scheme search was based on the number of spectra from each category of samples represented in the library returned in the top ten results of the standard algorithms. This group algorithm was able to identify correctly as many test spectra as the best standard algorithm, without relying on human choice to select a standard algorithm to perform the searches.
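The group voting idea reduces to counting category appearances across the top-10 result lists of all standard algorithms. The sketch below illustrates that mechanic; the spectrum identifiers, categories and result lists are hypothetical.

    from collections import Counter

    # Each standard algorithm contributes its top-10 library matches;
    # the category appearing most often across all lists wins the vote.
    def vote_category(top10_lists, category_of):
        votes = Counter()
        for results in top10_lists:          # one list per algorithm
            for spectrum_id in results:
                votes[category_of[spectrum_id]] += 1
        return votes.most_common(1)[0][0]

    category_of = {"s1": "leaf", "s2": "leaf", "s3": "stem", "s4": "hull"}
    top10_lists = [["s1", "s2", "s3"], ["s2", "s1", "s4"], ["s3", "s2", "s1"]]
    print(vote_category(top10_lists, category_of))  # 'leaf'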
NASA Astrophysics Data System (ADS)
Pagnuco, Inti A.; Pastore, Juan I.; Abras, Guillermo; Brun, Marcel; Ballarin, Virginia L.
2016-04-01
It is usually assumed that co-expressed genes suggest co-regulation in the underlying regulatory network. Determining sets of co-expressed genes is an important task, where significant groups of genes are defined based on some criteria. This task is usually performed by clustering algorithms, where the whole family of genes, or a subset of them, is clustered into meaningful groups based on their expression values in a set of experiments. In this work we used a methodology based on the Silhouette index as a measure of cluster quality for individual gene groups, with a combination of several variants of hierarchical clustering to generate the candidate groups, to obtain sets of co-expressed genes for two real data examples. We analyzed the quality of the best-ranked groups obtained by the algorithm using an online bioinformatics tool that provides network information for the selected genes. Moreover, to verify the performance of the algorithm, given that it does not examine all possible subsets, we compared its results against a full search to determine the number of good co-regulated sets not detected.
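The scoring step, ranking individual clusters by the mean silhouette of their members over candidates produced by several hierarchical-clustering variants, can be sketched as follows. The expression data here are synthetic and scikit-learn is assumed; this illustrates the idea rather than reproducing the published method.

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering
    from sklearn.metrics import silhouette_samples

    rng = np.random.default_rng(0)
    expr = np.vstack([rng.normal(0, 1, (20, 8)),
                      rng.normal(4, 1, (20, 8))])  # 40 "genes", 8 experiments

    for linkage in ("ward", "average", "complete"):
        labels = AgglomerativeClustering(n_clusters=2,
                                         linkage=linkage).fit_predict(expr)
        sil = silhouette_samples(expr, labels)
        for g in np.unique(labels):
            # per-group quality: mean silhouette of the group's members
            print(linkage, g, round(float(sil[labels == g].mean()), 3))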
Evolvable Hardware for Space Applications
NASA Technical Reports Server (NTRS)
Lohn, Jason; Globus, Al; Hornby, Gregory; Larchev, Gregory; Kraus, William
2004-01-01
This article surveys the research of the Evolvable Systems Group at NASA Ames Research Center. Over the past few years, our group has developed the ability to use evolutionary algorithms in a variety of NASA applications, including spacecraft antenna design, fault tolerance for programmable logic chips, atomic force field parameter fitting, analog circuit design, and Earth-observing satellite scheduling. In some of these applications, evolutionary algorithms match or improve on human performance.
Immune allied genetic algorithm for Bayesian network structure learning
NASA Astrophysics Data System (ADS)
Song, Qin; Lin, Feng; Sun, Wei; Chang, KC
2012-06-01
Bayesian network (BN) structure learning is an NP-hard problem. In this paper, we present an improved approach to enhance the efficiency of BN structure learning. To avoid the premature convergence of the traditional single-group genetic algorithm (GA), we propose an immune allied genetic algorithm (IAGA) that introduces multiple populations and an allied strategy. Moreover, in the algorithm, we apply prior knowledge by injecting an immune operator into individuals, which effectively prevents degeneration. To illustrate the effectiveness of the proposed technique, we present some experimental results.
NASA Astrophysics Data System (ADS)
Arai, Tatsuya; Lee, Kichang; Stenger, Michael B.; Platts, Steven H.; Meck, Janice V.; Cohen, Richard J.
2011-04-01
Orthostatic intolerance (OI) is a significant challenge for astronauts after long-duration spaceflight. Depending on flight duration, 20-80% of astronauts suffer from post-flight OI, which is associated with reduced vascular resistance. This paper introduces a novel algorithm for continuously monitoring changes in total peripheral resistance (TPR) by processing the peripheral arterial blood pressure (ABP). To validate it, we applied the algorithm to pre-flight ABP data previously recorded from twelve astronauts ten days before launch. The TPR changes calculated by our algorithm were compared with the TPR values estimated using cardiac output/heart rate before and after phenylephrine administration. The astronauts in the post-flight presyncopal group had smaller pre-flight TPR changes (1.66-fold) than those in the non-presyncopal group (2.15-fold). The trend in TPR changes calculated with our algorithm agreed with the TPR trend calculated using measured cardiac output in the previous study. Further data collection and algorithm refinement are needed for pre-flight detection of OI and monitoring of continuous TPR by analysis of peripheral arterial blood pressure.
Feng, Cui; Zhu, Di; Zou, Xianlun; Li, Anqin; Hu, Xuemei; Li, Zhen; Hu, Daoyu
2018-03-01
To investigate the subjective and quantitative image quality and radiation exposure of CT enterography (CTE) examinations performed at low tube voltage and low concentration of contrast agent with an adaptive statistical iterative reconstruction (ASIR) algorithm, compared with conventional CTE. One hundred thirty-seven patients with suspected or proven gastrointestinal diseases underwent contrast-enhanced CTE in a multidetector computed tomography (MDCT) scanner. All cases were assigned to 2 groups. Group A (n = 79) underwent CT with low tube voltage based on patient body mass index (BMI) (BMI < 23 kg/m², 80 kVp; BMI ≥ 23 kg/m², 100 kVp) and low concentration of contrast agent (270 mg I/mL); the images were reconstructed with the standard filtered back projection (FBP) algorithm and the 50% ASIR algorithm. Group B (n = 58) underwent conventional CTE with 120 kVp and 350 mg I/mL contrast agent; the images were reconstructed with the FBP algorithm. The computed tomography dose index volume (CTDIvol), dose-length product (DLP), effective dose (ED), and total iodine dosage were calculated and compared. The CT values, contrast-to-noise ratio (CNR), and signal-to-noise ratio (SNR) of the normal bowel wall, gastrointestinal lesions, and mesenteric vessels were assessed and compared. The subjective image quality was assessed independently and blindly by 2 radiologists using a 5-point Likert scale. The differences in CTDIvol (8.64 ± 2.72 vs 11.55 ± 3.95, P < .001), ED (6.34 ± 2.24 vs 8.52 ± 3.02, P < .001), and DLP (422.6 ± 149.40 vs 568.30 ± 213.90, P < .001) were significant between group A and group B, with reductions of 25.2%, 25.7%, and 25.7% in group A, respectively. The total iodine dosage in group A was reduced by 26.1%. The subjective image quality did not differ between the 2 groups (P > .05), and all image quality scores were greater than or equal to 3 (moderate). The 50% ASIR images in group A showed lower image noise but similar or higher quantitative image quality compared with the FBP images in group B. Compared with the conventional protocol, CTE performed at low tube voltage and low concentration of contrast agent with the 50% ASIR algorithm produces diagnostically acceptable image quality with a mean ED of 6.34 mSv and a total iodine dose reduction of 26.1%.
Sussman, Marshall S; Yang, Issac Y; Fok, Kai-Ho; Wintersperger, Bernd J
2016-06-01
The Modified Look-Locker Inversion Recovery (MOLLI) technique is used for T1 mapping in the heart. However, a drawback of this technique is that it requires lengthy rest periods between inversion groupings to allow for complete magnetization recovery. In this work, a new MOLLI fitting algorithm (inversion group [IG] fitting) is presented that allows for arbitrary combinations of inversion groupings and rest periods (including no rest period). Conventional MOLLI algorithms use a three-parameter fitting model. In IG fitting, the number of parameters is two plus the number of inversion groupings. This increased number of parameters permits any inversion grouping/rest period combination. Validation was performed through simulation, phantom, and in vivo experiments. IG fitting provided T1 values with less than 1% discrepancy across a range of inversion grouping/rest period combinations. By comparison, conventional three-parameter fits exhibited up to 30% discrepancy for some combinations. The one drawback of IG fitting was a loss of precision, approximately 30% worse than the three-parameter fits. IG fitting permits arbitrary inversion grouping/rest period combinations (including no rest period). The cost of the algorithm is a loss of precision relative to conventional three-parameter fits. Magn Reson Med 75:2332-2340, 2016.
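A minimal numerical sketch of the "two shared parameters plus one per inversion grouping" idea follows, assuming the standard Look-Locker signal model S(TI) = A - B·exp(-TI/T1*) with a separate amplitude B per grouping. The model, inversion times and noise level are illustrative, not the published implementation; scipy is assumed.

    import numpy as np
    from scipy.optimize import least_squares

    def model(params, tis, group_ids, n_groups):
        # shared A and T1*, plus one amplitude B_g per inversion grouping
        A, T1s = params[0], params[1]
        B = params[2:2 + n_groups]
        return A - B[group_ids] * np.exp(-tis / T1s)

    rng = np.random.default_rng(0)
    tis = np.array([0.1, 0.2, 0.4, 0.8, 1.6, 0.15, 0.3, 0.6, 1.2, 2.4])
    group_ids = np.array([0] * 5 + [1] * 5)        # two inversion groupings
    true = np.array([1.0, 1.1, 1.9, 1.6])          # A, T1*, B_0, B_1
    data = model(true, tis, group_ids, 2) + rng.normal(0, 0.01, tis.size)

    fit = least_squares(lambda p: model(p, tis, group_ids, 2) - data,
                        x0=[1.0, 1.0, 2.0, 2.0])
    print(fit.x.round(3))   # estimates of A, T1*, B_0, B_1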
Zhang, Yatao; Wei, Shoushui; Liu, Hai; Zhao, Lina; Liu, Chengyu
2016-09-01
The Lempel-Ziv (LZ) complexity and its variants have been extensively used to analyze the irregularity of physiological time series. To date, these measures cannot explicitly discern between the irregularity and the chaotic characteristics of physiological time series. Our study compared the performance of an encoding LZ (ELZ) complexity algorithm, a novel variant of the LZ complexity algorithm, with those of the classic LZ (CLZ) and multistate LZ (MLZ) complexity algorithms. Simulation experiments on Gaussian noise, logistic chaotic, and periodic time series showed that only the ELZ algorithm declined monotonically with the reduction in irregularity in the time series, whereas the CLZ and MLZ approaches yielded overlapping values for chaotic time series and time series mixed with Gaussian noise, demonstrating the accuracy of the proposed ELZ algorithm in capturing the irregularity, rather than the complexity, of physiological time series. In addition, the effect of sequence length on the ELZ algorithm was more stable than its effect on CLZ and MLZ, especially when the sequence length was longer than 300. A sensitivity analysis of all three LZ algorithms revealed that both the MLZ and ELZ algorithms could respond to changes in time sequences, whereas the CLZ approach could not. Cardiac interbeat (RR) interval time series from the MIT-BIH database were also evaluated, and the results showed that the ELZ algorithm could accurately measure the inherent irregularity of the RR interval time series, as indicated by lower LZ values in the congestive heart failure group versus the normal sinus rhythm group (p < 0.01).
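The quantity underlying all of these measures is the Lempel-Ziv (LZ76) phrase count of a symbolized series. The sketch below is a common implementation of that count applied to a binarized sequence, in the spirit of the classic (CLZ) variant; it is a generic illustration, not the paper's ELZ encoding.

    # Count LZ76 phrases: grow each candidate phrase while it already
    # occurs as a substring of the previously seen text, then start a
    # new phrase. Fewer phrases means a more regular sequence.
    def lz76_phrase_count(seq: str) -> int:
        phrases, i, n = 0, 0, len(seq)
        while i < n:
            j = i + 1
            while j <= n and seq[i:j] in seq[:j - 1]:
                j += 1
            phrases += 1
            i = j
        return phrases

    print(lz76_phrase_count("1111111111"))   # 2: highly regular
    print(lz76_phrase_count("1101001101"))   # 4: more irregular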
NASA Astrophysics Data System (ADS)
Bu, Yanlong; Zhang, Qiang; Ding, Chibiao; Tang, Geshi; Wang, Hang; Qiu, Rujin; Liang, Libo; Yin, Hejun
2017-02-01
This paper presents an interplanetary optical navigation algorithm based on two spherical celestial bodies. The remarkable characteristic of the method is that key navigation parameters can be estimated relying entirely on the known sizes and ephemerides of the two celestial bodies; in particular, positioning is realized from a single image and no longer depends on traditional terrestrial radio tracking. Actual images of the Earth and Moon together, captured by China's Chang'e-5T1 probe, were used to verify the effectiveness of the algorithm. From 430,000 km away from the Earth, the camera pointing accuracy reaches 0.01° (one sigma) and the inertial positioning error is less than 200 km; meanwhile, the costs of ground control and human resources are greatly reduced. The algorithm is flexible, easy to implement, and can serve as a reference for interplanetary autonomous navigation in the solar system.
Predicting Recovery Potential for Individual Stroke Patients Increases Rehabilitation Efficiency.
Stinear, Cathy M; Byblow, Winston D; Ackerley, Suzanne J; Barber, P Alan; Smith, Marie-Claire
2017-04-01
Several clinical measures and biomarkers are associated with motor recovery after stroke, but none are used to guide rehabilitation for individual patients. The objective of this study was to evaluate the implementation of upper limb predictions in stroke rehabilitation by combining clinical measures and biomarkers using the Predict Recovery Potential (PREP) algorithm. Predictions were provided for patients in the implementation group (n=110) and withheld from the comparison group (n=82). Predictions guided rehabilitation therapy focus for patients in the implementation group. The effects of predictive information on clinical practice (length of stay, therapist confidence, therapy content, and dose) were evaluated. Clinical outcomes (upper limb function, impairment and use, independence, and quality of life) were measured 3 and 6 months poststroke. The primary clinical practice outcome was inpatient length of stay. The primary clinical outcome was the Action Research Arm Test score 3 months poststroke. Length of stay was 1 week shorter for the implementation group (11 days; 95% confidence interval, 9-13 days) than the comparison group (17 days; 95% confidence interval, 14-21 days; P=0.001), controlling for upper limb impairment, age, sex, and comorbidities. Therapists were more confident (P=0.004) and modified therapy content according to predictions for the implementation group (P<0.05). The algorithm correctly predicted the primary clinical outcome for 80% of patients in both groups. There were no adverse effects of algorithm implementation on patient outcomes at 3 or 6 months poststroke. PREP algorithm predictions modify therapy content and increase rehabilitation efficiency after stroke without compromising clinical outcome. URL: http://anzctr.org.au. Unique identifier: ACTRN12611000755932.
Adaptive Control Allocation in the Presence of Actuator Failures
NASA Technical Reports Server (NTRS)
Liu, Yu; Crespo, Luis G.
2010-01-01
In this paper, a novel adaptive control allocation framework is proposed. In the adaptive control allocation structure, cooperative actuators are grouped and treated as an equivalent control effector. A state feedback adaptive control signal is designed for the equivalent effector and allocated to the member actuators adaptively. Two adaptive control allocation algorithms are proposed, which guarantee closed-loop stability and asymptotic state tracking in the presence of uncertain loss of effectiveness and constant-magnitude actuator failures. The proposed algorithms can be shown to reduce the controller complexity with proper grouping of the actuators. The proposed adaptive control allocation schemes are applied to two linearized aircraft models, and the simulation results demonstrate the performance of the proposed algorithms.
Classification of fMRI resting-state maps using machine learning techniques: A comparative study
NASA Astrophysics Data System (ADS)
Gallos, Ioannis; Siettos, Constantinos
2017-11-01
We compare the efficiency of Principal Component Analysis (PCA) and nonlinear manifold learning algorithms (ISOMAP and Diffusion Maps) for classifying brain maps between groups of schizophrenia patients and healthy controls, using fMRI scans from a resting-state experiment. After a standard pre-processing pipeline, we applied spatial independent component analysis (ICA) to reduce (a) noise and (b) the spatial-temporal dimensionality of the fMRI maps. On the cross-correlation matrix of the ICA components, we applied PCA, ISOMAP and Diffusion Maps to find an embedded low-dimensional space. Finally, support vector machine (SVM) and k-NN algorithms were used to evaluate the performance of the algorithms in classifying between the two groups.
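The comparison pipeline, dimensionality reduction followed by classification, can be sketched with scikit-learn. The data below are random stand-ins for connectivity features, and Isomap substitutes for the ISOMAP step; Diffusion Maps has no stock scikit-learn implementation and is omitted here.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.manifold import Isomap
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (30, 100)),
                   rng.normal(0.5, 1, (30, 100))])  # 60 "subjects"
    y = np.repeat([0, 1], 30)                       # patients vs controls

    for reducer in (PCA(n_components=5), Isomap(n_components=5)):
        clf = make_pipeline(reducer, SVC(kernel="rbf"))
        scores = cross_val_score(clf, X, y, cv=5)
        print(type(reducer).__name__, scores.mean().round(3))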
SHAMROCK: A Synthesizable High Assurance Cryptography and Key Management Coprocessor
2016-11-01
and excluding devices from a communicating group as they become trusted or untrusted. An example of using rekeying to dynamically adjust group ... algorithms, such as the Elliptic Curve Digital Signature Algorithm (ECDSA), work by computing a cryptographic hash of a message using, for example, the ... material is based upon work supported by the Assistant Secretary of Defense for Research and Engineering under Air Force Contract No. FA8721-05-C
A new algorithm to create balanced teams promoting more diversity
NASA Astrophysics Data System (ADS)
Dias, Teresa Galvão; Borges, José
2017-11-01
The problem of assigning students to teams can be described as maximising the diversity of their profiles within teams while minimising the differences among teams. This problem is commonly known as the maximally diverse grouping problem and is usually formulated as maximising the sum of the pairwise distances among students within teams. We propose an alternative algorithm in which within-group heterogeneity is measured by the variance of the attributes instead of by the sum of distances between group members. The proposed algorithm is evaluated by means of two real data sets, and the results suggest that it induces better solutions according to two independent evaluation criteria, the Davies-Bouldin index and the number of dominated teams. In conclusion, the results show that it is more adequate to use the attributes' variance to measure the heterogeneity of profiles within teams and the homogeneity among teams.
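The variance-based criterion can be scored directly for any candidate assignment. The sketch below, with illustrative profiles and teams, computes the summed within-team attribute variance (to be maximised) and the spread of team means (to be minimised); it illustrates the objective, not the authors' search procedure.

    import numpy as np

    def score(profiles, teams):
        # within-team heterogeneity: summed attribute variance per team
        within = sum(profiles[t].var(axis=0).sum() for t in teams)
        # between-team imbalance: variance of the team mean profiles
        means = np.array([profiles[t].mean(axis=0) for t in teams])
        between = means.var(axis=0).sum()
        return within, between

    rng = np.random.default_rng(0)
    profiles = rng.normal(size=(12, 3))     # 12 students, 3 attributes
    teams = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
    print(score(profiles, teams))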
Image-driven Population Analysis through Mixture Modeling
Sabuncu, Mert R.; Balci, Serdar K.; Shenton, Martha E.; Golland, Polina
2009-01-01
We present iCluster, a fast and efficient algorithm that clusters a set of images while co-registering them using a parameterized, nonlinear transformation model. The output of the algorithm is a small number of template images that represent different modes in a population. This is in contrast with traditional, hypothesis-driven computational anatomy approaches that assume a single template to construct an atlas. We derive the algorithm based on a generative model of an image population as a mixture of deformable template images. We validate and explore our method in four experiments. In the first experiment, we use synthetic data to explore the behavior of the algorithm and inform a design choice on parameter settings. In the second experiment, we demonstrate the utility of having multiple atlases for the application of localizing temporal lobe brain structures in a pool of subjects that contains healthy controls and schizophrenia patients. Next, we employ iCluster to partition a data set of 415 whole brain MR volumes of subjects aged 18 through 96 years into three anatomical subgroups. Our analysis suggests that these subgroups mainly correspond to age groups. The templates reveal significant structural differences across these age groups that confirm previous findings in aging research. In the final experiment, we run iCluster on a group of 15 patients with dementia and 15 age-matched healthy controls. The algorithm produces two modes, one of which contains dementia patients only. These results suggest that the algorithm can be used to discover sub-populations that correspond to interesting structural or functional “modes.” PMID:19336293
Community detection using preference networks
NASA Astrophysics Data System (ADS)
Tasgin, Mursel; Bingol, Haluk O.
2018-04-01
Community detection is the task of identifying clusters or groups of nodes in a network where nodes within the same group are more connected with each other than with nodes in different groups. It has practical uses in identifying similar functions or roles of nodes in many biological, social and computer networks. With the availability of very large networks in recent years, the performance and scalability of community detection algorithms become crucial, i.e., if the time complexity of an algorithm is high, it cannot run on large networks. In this paper, we propose a new community detection algorithm which takes a local approach and is able to run on large networks. Its method is simple and effective: given a network, the algorithm constructs a preference network of nodes, in which each node has a single outgoing edge pointing to the node it prefers to be in the same community with. In such a preference network, each connected component is a community. Selection of the preferred node is performed using similarity-based metrics of nodes. We use two alternatives for this purpose, both of which can be calculated within the 1-neighborhood of a node: the number of common neighbors between the selecting node and each of its neighbors, and the spread capability of the neighbors around the selecting node, calculated by the gossip algorithm of Lind et al. Our algorithm is tested on both computer-generated LFR networks and real-life networks with ground-truth community structure. It identifies communities accurately and quickly, and it is local, scalable and suitable for distributed execution on large networks.
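The preference-network construction is easy to sketch: each node points to the neighbor with which it shares the most common neighbors, and the connected components of those pointers are the communities. Tie-breaking and the toy graph below are illustrative, and networkx is assumed; the published algorithm's second similarity metric (gossip spread) is omitted.

    import networkx as nx

    def preference_communities(G):
        P = nx.DiGraph()
        P.add_nodes_from(G)
        for u in G:
            # preferred node: the neighbor sharing the most common neighbors
            best = max(G.neighbors(u),
                       key=lambda v: len(list(nx.common_neighbors(G, u, v))),
                       default=None)
            if best is not None:
                P.add_edge(u, best)
        return list(nx.weakly_connected_components(P))

    G = nx.Graph([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)])
    print(preference_communities(G))  # the two triangles as communities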
MacRae, J; Darlow, B; McBain, L; Jones, O; Stubbe, M; Turner, N; Dowell, A
2015-08-21
To develop a natural language processing software inference algorithm to classify the content of primary care consultations using electronic health record Big Data, and subsequently test the algorithm's ability to estimate the prevalence and burden of childhood respiratory illness in primary care. Algorithm development and validation study. To classify consultations, the algorithm is designed to interrogate the clinical narrative entered as free text, the diagnostic (Read) codes created, and the medications prescribed on the day of the consultation. Thirty-six consenting primary care practices from a mixed urban and semirural region of New Zealand participated. Three independent sets of 1200 child consultation records were randomly extracted from a data set of all general practitioner consultations in participating practices between 1 January 2008 and 31 December 2013 for children under 18 years of age (n=754,242). Each consultation record within these sets was independently classified by two expert clinicians as respiratory or non-respiratory, and subclassified according to respiratory diagnostic categories, to create three 'gold standard' sets of classified records. These three gold standard record sets were used to train, test and validate the algorithm. Sensitivity, specificity, positive predictive value and F-measure were calculated to illustrate the algorithm's ability to replicate the judgements of expert clinicians within the 1200-record gold standard validation set. The algorithm was able to identify respiratory consultations in the 1200-record validation set with a sensitivity of 0.72 (95% CI 0.67 to 0.78) and a specificity of 0.95 (95% CI 0.93 to 0.98). The positive predictive value of the algorithm's respiratory classification was 0.93 (95% CI 0.89 to 0.97). The positive predictive value of the algorithm classifying consultations as being related to specific respiratory diagnostic categories ranged from 0.68 (95% CI 0.40 to 1.00; other respiratory conditions) to 0.91 (95% CI 0.79 to 1.00; throat infections). A software inference algorithm that uses primary care Big Data can accurately classify the content of clinical consultations. This algorithm will enable accurate estimation of the prevalence of childhood respiratory illness in primary care and the resultant service utilisation. The methodology can also be applied to other areas of clinical care.
Hoffmann, S
1992-12-01
A prospective evaluation was made of an algorithm for a selective use of throat swabs in patients with sore throat in general practice. The algorithm states that a throat swab should be obtained (a) in all children younger than 15 years; (b) in patients aged 15 years or more who have pain on swallowing and at least three of four signs (enlarged or hyperaemic tonsils; exudate; enlarged or tender angular lymph nodes; and a temperature ≥ 38 °C); and (c) in adults aged 15-44 years with pain on swallowing and one or two of the four signs, but not both cough and coryza. Group A streptococci were found by laboratory culture in 30% of throat swabs from 1783 patients. Using these results as the reference, the algorithm was 95% sensitive and 26% specific, and assigned 80% of the patients to be swabbed. Its positive and negative predictive values in this setting were 36% and 92%, respectively. It is concluded that this algorithm may be useful in general practice.
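Since the rules (a)-(c) are fully specified, the screening algorithm can be transcribed almost verbatim into code; the sketch below is our illustrative rendering, not the study's software:

def should_swab(age, pain_on_swallowing, n_signs, cough, coryza):
    """n_signs counts the four signs: enlarged/hyperaemic tonsils, exudate,
    enlarged/tender angular lymph nodes, and temperature >= 38 C."""
    if age < 15:                                    # rule (a)
        return True
    if pain_on_swallowing and n_signs >= 3:         # rule (b): age >= 15 here
        return True
    if (15 <= age <= 44 and pain_on_swallowing      # rule (c)
            and 1 <= n_signs <= 2 and not (cough and coryza)):
        return True
    return False

print(should_swab(age=30, pain_on_swallowing=True, n_signs=2,
                  cough=True, coryza=False))  # True under rule (c)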
Reducing Earth Topography Resolution for SMAP Mission Ground Tracks Using K-Means Clustering
NASA Technical Reports Server (NTRS)
Rizvi, Farheen
2013-01-01
The K-means clustering algorithm is used to reduce Earth topography resolution for the SMAP mission ground tracks. As SMAP propagates in orbit, knowledge of the radar antenna footprints on Earth is required for the antenna misalignment calibration. Each antenna footprint contains a latitude and longitude location pair on the Earth surface. There are 400 pairs in one data set for the calibration model. It is computationally expensive to calculate the corresponding Earth elevation for these data pairs. Thus, the antenna footprint resolution is reduced. Similar topographical data pairs are grouped together with the K-means clustering algorithm. The resolution is reduced to the mean of each topographical cluster, called the cluster centroid. The corresponding Earth elevation for each cluster centroid is assigned to the entire group. Results show that 400 data points are reduced to 60 while still maintaining algorithm performance and computational efficiency. In this work, sensitivity analysis is also performed to show the trade-off between algorithm performance and computational efficiency as the number of cluster centroids and algorithm iterations are increased.
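A minimal sketch of the reduction step, assuming scikit-learn's KMeans and synthetic stand-in coordinates (the real footprints and the elevation lookup are mission data):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
footprints = np.column_stack([rng.uniform(-60, 60, 400),     # latitudes
                              rng.uniform(-180, 180, 400)])  # longitudes

km = KMeans(n_clusters=60, n_init=10, random_state=0).fit(footprints)
centroids = km.cluster_centers_   # 60 representative locations
labels = km.labels_               # maps each footprint to its centroid
# elevation is then looked up once per centroid and assigned to its whole group
print(centroids.shape)            # (60, 2)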
NASA Astrophysics Data System (ADS)
Pekker, David; Clark, Bryan K.; Oganesyan, Vadim; Refael, Gil; Tian, Binbin
Many-body localization is a dynamical phase of matter that is characterized by the absence of thermalization. One of the key characteristics of many-body localized systems is the emergence of a large (possibly maximal) number of local integrals of motion (local quantum numbers) and corresponding conserved quantities. We formulate a robust algorithm for identifying these conserved quantities, based on Wegner's flow equations, a form of the renormalization group that works by disentangling the degrees of freedom of the system as opposed to integrating them out. We test our algorithm by explicit numerical comparison with more engineering-based algorithms: Jacobi rotations and bipartite matching. We find that the Wegner flow algorithm indeed produces more local conserved quantities and is therefore preferable. A preliminary analysis of the conserved quantities produced by the Wegner flow algorithm reveals the existence of at least two different localization lengthscales. Work was supported by AFOSR FA9550-10-1-0524 and FA9550-12-1-0057, the Kaufmann foundation, and SciDAC FG02-12ER46875.
Implementation of spectral clustering on microarray data of carcinoma using k-means algorithm
NASA Astrophysics Data System (ADS)
Frisca; Bustamam, Alhadi; Siswantining, Titin
2017-03-01
Clustering is a data analysis method that aims to place data with similar characteristics in the same group. Spectral clustering is one of the most popular modern clustering algorithms; as an effective clustering technique, it emerged from the concepts of spectral graph theory. Spectral clustering requires a partitioning algorithm, and there are several partitioning methods, including PAM, SOM, fuzzy c-means, and k-means. Based on research by Capital and Choudhury in 2013, the k-means algorithm provides better accuracy than the PAM algorithm when using Euclidean distance, so in this paper we use k-means as our partitioning algorithm. The major advantage of spectral clustering is in reducing data dimension, which in this case serves to reduce the dimension of a large microarray dataset. Microarray data come from a small chip, made of a glass plate, containing thousands or even tens of thousands of genes as DNA fragments derived from amplified cDNA. Microarray data are widely used to detect cancers, for example carcinoma, in which cancer cells express abnormalities in their genes. The purpose of this research is to place data with high similarity in the same group and data with low similarity in different groups. The carcinoma microarray data used here comprise 7457 genes. Partitioning with the k-means algorithm yields two clusters.
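A compact sketch of the pipeline described here, spectral embedding via the graph Laplacian followed by k-means, on synthetic stand-in data; the RBF kernel choice and the toy sample/gene counts are our illustrative assumptions:

import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

X = np.random.default_rng(0).normal(size=(50, 200))  # 50 samples, 200 genes
W = rbf_kernel(X, gamma=1.0 / X.shape[1])             # similarity graph
D = np.diag(W.sum(axis=1))
L = D - W                                             # unnormalized Laplacian
# embed each sample using eigenvectors of the 2 smallest eigenvalues
vals, vecs = eigh(L)
embedding = vecs[:, :2]
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
print(labels)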
NASA Astrophysics Data System (ADS)
Rahman Syahputra, Edy; Agustina Dalimunthe, Yulia; Irvan
2017-12-01
Many students are confused when choosing their own field of specialization and ultimately choose an unsuitable one for a variety of reasons, such as simply following a friend, or picking from many areas of interest without knowing whether they have the competencies for the chosen field. This research applies clustering with the Fuzzy C-Means algorithm to classify students into specialization fields. The Fuzzy C-Means algorithm is one of the easiest and most frequently used algorithms in data grouping because it makes efficient estimates and does not require many parameters. Several studies have concluded that the Fuzzy C-Means algorithm can be used to group data based on certain attributes. Here the Fuzzy C-Means algorithm is used to classify student data based on grades in core subjects relevant to the selection of a specialization field. This study also tests the accuracy of the Fuzzy C-Means algorithm in determining the area of interest. The study was conducted in the STT-Harapan Medan Information System Study Program, using the grades of all students in the program's 2012 cohort. The research is expected to yield specialization fields that match students' abilities, based on their grades in prerequisite subjects.
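A bare-bones Fuzzy C-Means implementation on a toy grade matrix may clarify the method; the fuzzifier m = 2, the cluster count, and the synthetic grades are our assumptions, not the study's settings:

import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # random initial memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)      # standard FCM membership update
    return U, centers

grades = np.random.default_rng(1).uniform(50, 100, size=(30, 5))  # 30 students
U, centers = fuzzy_c_means(grades, c=3)
print(U.argmax(axis=1))   # hard assignment to a specialization cluster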
Latifi, Kujtim; Oliver, Jasmine; Baker, Ryan; Dilling, Thomas J; Stevens, Craig W; Kim, Jongphil; Yue, Binglin; Demarco, Marylou; Zhang, Geoffrey G; Moros, Eduardo G; Feygelman, Vladimir
2014-04-01
Pencil beam (PB) and collapsed cone convolution (CCC) dose calculation algorithms differ significantly when used in the thorax. However, such differences have seldom been previously directly correlated with outcomes of lung stereotactic ablative body radiation (SABR). Data for 201 non-small cell lung cancer patients treated with SABR were analyzed retrospectively. All patients were treated with 50 Gy in 5 fractions of 10 Gy each. The radiation prescription mandated that 95% of the planning target volume (PTV) receive the prescribed dose. One hundred sixteen patients were planned with BrainLab treatment planning software (TPS) with the PB algorithm and treated on a Novalis unit. The other 85 were planned on the Pinnacle TPS with the CCC algorithm and treated on a Varian linac. Treatment planning objectives were numerically identical for both groups. The median follow-up times were 24 and 17 months for the PB and CCC groups, respectively. The primary endpoint was local/marginal control of the irradiated lesion. Gray's competing risk method was used to determine the statistical differences in local/marginal control rates between the PB and CCC groups. Twenty-five patients planned with PB and 4 patients planned with the CCC algorithms to the same nominal doses experienced local recurrence. There was a statistically significant difference in recurrence rates between the PB and CCC groups (hazard ratio 3.4 [95% confidence interval: 1.18-9.83], Gray's test P=.019). The differences (Δ) between the 2 algorithms for target coverage were as follows: ΔD99GITV = 7.4 Gy, ΔD99PTV = 10.4 Gy, ΔV90GITV = 13.7%, ΔV90PTV = 37.6%, ΔD95PTV = 9.8 Gy, and ΔDISO = 3.4 Gy. GITV = gross internal tumor volume. Local control in patients who were planned to the same nominal dose with the PB and CCC algorithms was statistically significantly different. Possible alternative explanations are described in the report, although they are not thought likely to explain the difference. We conclude that the difference is due to relative dosimetric underdosing of tumors with the PB algorithm. Copyright © 2014 Elsevier Inc. All rights reserved.
A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)]
NASA Technical Reports Server (NTRS)
Straeter, T. A.; Markos, A. T.
1975-01-01
A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.
Park, S W; Bebakar, W M W; Hernandez, P G; Macura, S; Hersløv, M L; de la Rosa, R
2017-02-01
To compare the efficacy and safety of two titration algorithms for insulin degludec/insulin aspart (IDegAsp) administered once daily with metformin in participants with insulin-naïve Type 2 diabetes mellitus. This open-label, parallel-group, 26-week, multicentre, treat-to-target trial randomly allocated participants (1:1) to two titration arms. The Simple algorithm titrated IDegAsp twice weekly based on a single pre-breakfast self-monitored plasma glucose (SMPG) measurement. The Stepwise algorithm titrated IDegAsp once weekly based on the lowest of three consecutive pre-breakfast SMPG measurements. In both groups, IDegAsp once daily was titrated to pre-breakfast plasma glucose values of 4.0-5.0 mmol/l. The primary endpoint was change from baseline in HbA1c (%) after 26 weeks. Change in HbA1c at Week 26 was IDegAsp Simple -14.6 mmol/mol (-1.3%) (to 52.4 mmol/mol; 6.9%) and IDegAsp Stepwise -11.9 mmol/mol (-1.1%) (to 54.7 mmol/mol; 7.2%). The estimated between-group treatment difference was -1.97 mmol/mol [95% confidence interval (CI) -4.1, 0.2] (-0.2%, 95% CI -0.4, 0.02), confirming the non-inferiority of IDegAsp Simple to IDegAsp Stepwise (non-inferiority limit of ≤ 0.4%). Mean reduction in fasting plasma glucose and 8-point SMPG profiles were similar between groups. Rates of confirmed hypoglycaemia were lower for IDegAsp Stepwise [2.1 per patient-year of exposure (PYE)] vs. IDegAsp Simple (3.3 PYE) (estimated rate ratio IDegAsp Simple/IDegAsp Stepwise 1.8; 95% CI 1.1, 2.9). Nocturnal hypoglycaemia rates were similar between groups. No severe hypoglycaemic events were reported. In participants with insulin-naïve Type 2 diabetes mellitus, the IDegAsp Simple titration algorithm improved HbA1c levels as effectively as the Stepwise titration algorithm. Hypoglycaemia rates were lower in the Stepwise arm. © 2016 The Authors. Diabetic Medicine published by John Wiley & Sons Ltd on behalf of Diabetes UK.
Sharpe, John P; Magnotti, Louis J; Weinberg, Jordan A; Parks, Nancy A; Maish, George O; Shahan, Charles P; Fabian, Timothy C; Croce, Martin A
2012-04-01
Our previous experience with colon injuries suggested that operative decisions based on a defined algorithm improve outcomes. The purpose of this study was to evaluate the validity of this algorithm in the face of an increased incidence of destructive injuries observed in recent years. Consecutive patients with full-thickness penetrating colon injuries over an 8-year period were evaluated. Per algorithm, patients with nondestructive injuries underwent primary repair. Those with destructive wounds underwent resection plus anastomosis in the absence of comorbidities or large pre- or intraoperative transfusion requirements (more than 6 units packed RBCs); otherwise they were diverted. Outcomes from the current study (CS group) were compared with those from the previous study (PS group). There were 252 patients who had full-thickness penetrating colon injuries: 150 (60%) patients had nondestructive colon wounds treated with primary repair and 102 patients (40%) had destructive wounds (CS). Demographics and intraoperative transfusions were similar between CS and PS groups. Of the 102 patients with destructive injuries, 75% underwent resection plus anastomosis and 25% underwent diversion. Despite more destructive injuries managed in the CS group (41% vs 27%), abscess rate (18% vs 27%) and colon-related mortality (1% vs 5%) were lower in the CS. Suture line failure was similar in CS compared with PS (5% vs 7%). Adherence to the algorithm was >90% in the CS (similar to PS). Despite an increase in the incidence of destructive colon injuries, our management algorithm remains valid. Destructive injuries associated with pre- or intraoperative transfusion requirements of more than 6 units packed RBCs and/or significant comorbidities are best managed with diversion. By managing the majority of other destructive injuries with resection plus anastomosis, acceptably low morbidity and mortality can be achieved. Copyright © 2012 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Color transfer algorithm in medical images
NASA Astrophysics Data System (ADS)
Wang, Weihong; Xu, Yangfa
2007-12-01
In a digital virtual human project, image data are acquired from frozen slices of a human body specimen. The color and brightness across a group of images of a given organ can differ considerably, and this variation creates great difficulty in edge extraction, segmentation, and the 3D reconstruction process. It is therefore necessary to unify the color of the images. The color transfer algorithm is well suited to this kind of problem. This paper introduces the principle of the algorithm and applies it to medical image processing.
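The core of a simple statistics-matching color transfer can be written in a few lines; this sketch matches per-channel mean and standard deviation in RGB, whereas the classical Reinhard et al. formulation does the same in a decorrelated color space (the synthetic slice arrays are stand-ins for real data):

import numpy as np

def transfer_color(source, target):
    """Make `source` adopt the per-channel mean/std of `target`.
    Both are float arrays of shape (H, W, 3) with values in [0, 1]."""
    out = np.empty_like(source)
    for ch in range(3):
        s, t = source[..., ch], target[..., ch]
        out[..., ch] = (s - s.mean()) / (s.std() + 1e-8) * t.std() + t.mean()
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
slice_a = rng.random((64, 64, 3))        # stand-ins for two specimen slices
slice_b = rng.random((64, 64, 3)) * 0.5  # darker slice to be corrected
print(transfer_color(slice_b, slice_a).mean(axis=(0, 1)))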
Parallel Algorithms for Computational Models of Geophysical Systems
NASA Astrophysics Data System (ADS)
Carrillo Ledesma, A.; Herrera, I.; de la Cruz, L. M.; Hernández, G.; Grupo de Modelacion Matematica y Computacional
2013-05-01
Mathematical models of many systems of interest, including very important continuous systems of Earth Sciences and Engineering, lead to a great variety of partial differential equations (PDEs) whose solution methods are based on the computational processing of large-scale algebraic systems. Furthermore, the enormous expansion of the available computational hardware and software has made problems of ever-increasing diversity and complexity, posed by scientific and engineering applications, amenable to effective treatment. Parallel computing is outstanding among these new computational tools and, in order to use the most advanced computers available today effectively, massively parallel software is required. Domain decomposition methods (DDMs) have been developed precisely for treating PDEs in parallel. Ideally, the main objective of domain decomposition research is to produce algorithms capable of 'obtaining the global solution by exclusively solving local problems', but up to now this has only been an aspiration, that is, a strong desire for achieving such a property, and so we call it 'the DDM-paradigm'. In recent times, numerically competitive DDM-algorithms are non-overlapping, preconditioned, and necessarily incorporate constraints, which poses an additional challenge for achieving the DDM-paradigm. Recently a group of four algorithms, referred to as the 'DVS-algorithms', which fulfill the DDM-paradigm, was developed. To derive them a new discretization method, which uses a non-overlapping system of nodes (the derived-nodes), was introduced. This discretization procedure can be applied to any boundary-value problem, or system of such equations. In turn, the resulting system of discrete equations can be treated using any available DDM-algorithm. In particular, two of the four DVS-algorithms mentioned above were obtained by application of the well-known and very effective algorithms BDDC and FETI-DP; these will be referred to as the DVS-BDDC and DVS-FETI-DP algorithms. The other two, which will be referred to as the DVS-PRIMAL and DVS-DUAL algorithms, were obtained by application of two new algorithms that had not been previously reported in the literature. As said before, the four DVS-algorithms constitute a group of preconditioned and constrained algorithms that, for the first time, fulfill the DDM-paradigm. Both BDDC and FETI-DP are very well known and highly efficient. Recently, it was established that these two methods are closely related and that their numerical performance is quite similar. On the other hand, through numerical experiments, we have established that the numerical performances of the members of the DVS-algorithm group (DVS-BDDC, DVS-FETI-DP, DVS-PRIMAL and DVS-DUAL) are very similar too. Furthermore, we have carried out comparisons of the performances of the standard versions of BDDC and FETI-DP with DVS-BDDC and DVS-FETI-DP, and in all such numerical experiments the DVS algorithms have performed significantly better.
Classifying Imbalanced Data Streams via Dynamic Feature Group Weighting with Importance Sampling.
Wu, Ke; Edwards, Andrea; Fan, Wei; Gao, Jing; Zhang, Kun
2014-04-01
Data stream classification and imbalanced data learning are two important areas of data mining research. Each has been well studied to date, with many interesting algorithms developed. However, only a few approaches reported in the literature address the intersection of these two fields, due to their complex interplay. In this work, we propose an importance sampling driven, dynamic feature group weighting framework (DFGW-IS) for classifying data streams of imbalanced distribution. Two components are tightly incorporated into the proposed approach to address the intrinsic characteristics of concept-drifting, imbalanced streaming data. Specifically, the ever-evolving concepts are tackled by a weighted ensemble trained on a set of feature groups, with each sub-classifier (i.e., a single classifier or an ensemble) weighted by its discriminative power and stability. The uneven class distribution, on the other hand, is tackled by the sub-classifier built on a specific feature group, with the underlying distribution rebalanced by the importance sampling technique. We derive the theoretical upper bound for the generalization error of the proposed algorithm. We also study the empirical performance of our method on a set of benchmark synthetic and real-world data, and significant improvement has been achieved over the competing algorithms in terms of standard evaluation metrics and parallel running time. Algorithm implementations and datasets are available upon request.
A Novel Hybrid Firefly Algorithm for Global Optimization.
Zhang, Lina; Liu, Liqiang; Yang, Xin-She; Dai, Yuntao
Global optimization problems are challenging to solve due to their nonlinearity and multimodality. Traditional algorithms such as gradient-based methods often struggle to deal with such problems, and one of the current trends is to use metaheuristic algorithms. In this paper, a novel hybrid population-based global optimization algorithm, called the hybrid firefly algorithm (HFA), is proposed by combining the advantages of both the firefly algorithm (FA) and differential evolution (DE). FA and DE are executed in parallel to promote information sharing among the population and thus enhance searching efficiency. In order to evaluate the performance and efficiency of the proposed algorithm, a diverse set of selected benchmark functions is employed; these functions fall into two groups: unimodal and multimodal. The experimental results show better performance of the proposed algorithm compared to the original version of the firefly algorithm (FA), differential evolution (DE), and particle swarm optimization (PSO) in the sense of avoiding local minima and increasing the convergence rate.
A Novel Hybrid Firefly Algorithm for Global Optimization
Zhang, Lina; Liu, Liqiang; Yang, Xin-She; Dai, Yuntao
2016-01-01
Global optimization problems are challenging to solve due to their nonlinearity and multimodality. Traditional algorithms such as gradient-based methods often struggle to deal with such problems, and one of the current trends is to use metaheuristic algorithms. In this paper, a novel hybrid population-based global optimization algorithm, called the hybrid firefly algorithm (HFA), is proposed by combining the advantages of both the firefly algorithm (FA) and differential evolution (DE). FA and DE are executed in parallel to promote information sharing among the population and thus enhance searching efficiency. In order to evaluate the performance and efficiency of the proposed algorithm, a diverse set of selected benchmark functions is employed; these functions fall into two groups: unimodal and multimodal. The experimental results show better performance of the proposed algorithm compared to the original version of the firefly algorithm (FA), differential evolution (DE), and particle swarm optimization (PSO) in the sense of avoiding local minima and increasing the convergence rate. PMID:27685869
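A toy rendering of the hybrid scheme, two subpopulations evolved in parallel with the overall best solution injected into both, is sketched below; all control parameters (step sizes, DE constants, population sizes) are illustrative, not the paper's tuned values:

import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sum(x ** 2, axis=-1)      # sphere benchmark (unimodal)
dim, n = 5, 20
fa = rng.uniform(-5, 5, (n, dim))          # firefly subpopulation
de = rng.uniform(-5, 5, (n, dim))          # differential-evolution subpopulation

for gen in range(200):
    # firefly move: step toward every brighter (lower-f) individual
    for i in range(n):
        for j in range(n):
            if f(fa[j]) < f(fa[i]):
                r2 = np.sum((fa[i] - fa[j]) ** 2)
                fa[i] += np.exp(-1.0 * r2) * (fa[j] - fa[i]) \
                         + 0.1 * (rng.random(dim) - 0.5)
    # DE/rand/1/bin step
    for i in range(n):
        a, b, c = de[rng.choice(n, 3, replace=False)]
        trial = np.where(rng.random(dim) < 0.9, a + 0.5 * (b - c), de[i])
        if f(trial) < f(de[i]):
            de[i] = trial
    # information sharing: inject the overall best into both groups
    both = np.vstack([fa, de])
    best = both[np.argmin(f(both))]
    fa[np.argmax(f(fa))] = best
    de[np.argmax(f(de))] = best

print(f(best))  # approaches 0 on the sphere function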
Improved Ant Colony Clustering Algorithm and Its Performance Study
Gao, Wei
2016-01-01
Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in the existing literature. For problems of similar computational difficulty and complexity, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than those of the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533
Novel E-Field Sensor for Projectile Detection
2012-10-22
aircraft. They used an array of three plate induction sensors and a simple algorithm to determine the direction of the planes [9]. In more recent... publications [10, 11, 12] researchers present increasingly advanced algorithms and sensors. The techniques developed thus far have not received... the electric field pulse is being detected by a group of sensors in an array with known distances between the sensors, so triangulation algorithms could
2001-06-01
Algorithms for Vertical Fin Buffeting Using Strain Actuation. DISTRIBUTION: Approved for public release, distribution unlimited. This paper is part of the... Finite Element Approach for the Design of Control Algorithms for Vertical Fin Buffeting Using Strain Actuation. Fred Nitzsche... groups), the disturbance (buffet load), and the two output variables (a choice among four accelerometers and five strain-gauge positions
Epstein, Richard H; Dexter, Franklin
2017-07-01
Comorbidity adjustment is often performed during outcomes and health care resource utilization research. Our goal was to develop an efficient algorithm in structured query language (SQL) to determine the Elixhauser comorbidity index. We wrote an SQL algorithm to calculate the Elixhauser comorbidities from Diagnosis Related Group and International Classification of Diseases (ICD) codes. Validation was by comparison to expected comorbidities from combinations of these codes and to the 2013 Nationwide Readmissions Database (NRD). The SQL algorithm matched perfectly with expected comorbidities for all combinations of ICD-9 or ICD-10, and Diagnosis Related Groups. Of 13 585 859 evaluable NRD records, the algorithm matched 100% of the listed comorbidities. Processing time was ∼0.05 ms/record. The SQL Elixhauser code was efficient and computationally identical to the SAS algorithm used for the NRD. This algorithm may be useful where preprocessing of large datasets in a relational database environment and comorbidity determination is desired before statistical analysis. A validated SQL procedure to calculate Elixhauser comorbidities and the van Walraven index from ICD-9 or ICD-10 discharge diagnosis codes has been published. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
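The flagging logic is easy to mirror outside SQL; the Python sketch below uses an illustrative subset of ICD-10 prefixes (the real Elixhauser code lists are far larger, and the published algorithm also handles ICD-9 and DRG exclusions) to show the prefix-matching idea:

# Toy version of the comorbidity-flagging step; the prefixes are a small
# hypothetical subset, not the full Elixhauser code lists.
ELIXHAUSER_PREFIXES = {
    "congestive_heart_failure": ("I50",),
    "diabetes_uncomplicated": ("E119",),
    "hypertension_uncomplicated": ("I10",),
}

def flag_comorbidities(diagnosis_codes):
    codes = [c.replace(".", "").upper() for c in diagnosis_codes]
    return {name: any(c.startswith(p) for c in codes for p in prefixes)
            for name, prefixes in ELIXHAUSER_PREFIXES.items()}

print(flag_comorbidities(["I50.9", "E11.9"]))
# {'congestive_heart_failure': True, 'diabetes_uncomplicated': True,
#  'hypertension_uncomplicated': False}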
Computer-aided US diagnosis of breast lesions by using cell-based contour grouping.
Cheng, Jie-Zhi; Chou, Yi-Hong; Huang, Chiun-Sheng; Chang, Yeun-Chung; Tiu, Chui-Mei; Chen, Kuei-Wu; Chen, Chung-Ming
2010-06-01
To develop a computer-aided diagnostic algorithm with automatic boundary delineation for differential diagnosis of benign and malignant breast lesions at ultrasonography (US) and investigate the effect of boundary quality on the performance of a computer-aided diagnostic algorithm. This was an institutional review board-approved retrospective study with waiver of informed consent. A cell-based contour grouping (CBCG) segmentation algorithm was used to delineate the lesion boundaries automatically. Seven morphologic features were extracted. The classifier was a logistic regression function. Five hundred twenty breast US scans were obtained from 520 subjects (age range, 15-89 years), including 275 benign (mean size, 15 mm; range, 5-35 mm) and 245 malignant (mean size, 18 mm; range, 8-29 mm) lesions. The newly developed computer-aided diagnostic algorithm was evaluated on the basis of boundary quality and differentiation performance. The segmentation algorithms and features in two conventional computer-aided diagnostic algorithms were used for comparative study. The CBCG-generated boundaries were shown to be comparable with the manually delineated boundaries. The area under the receiver operating characteristic curve (AUC) and differentiation accuracy were 0.968 ± 0.010 and 93.1% ± 0.7, respectively, for all 520 breast lesions. At the 5% significance level, the newly developed algorithm was shown to be superior to the use of the boundaries and features of the two conventional computer-aided diagnostic algorithms in terms of AUC (0.974 ± 0.007 versus 0.890 ± 0.008 and 0.788 ± 0.024, respectively). The newly developed computer-aided diagnostic algorithm that used a CBCG segmentation method to measure boundaries achieved a high differentiation performance. Copyright RSNA, 2010
Lykiardopoulos, Byron; Hagström, Hannes; Fredrikson, Mats; Ignatova, Simone; Stål, Per; Hultcrantz, Rolf; Ekstedt, Mattias; Kechagias, Stergios
2016-01-01
Detection of advanced fibrosis (F3-F4) in nonalcoholic fatty liver disease (NAFLD) is important for ascertaining prognosis. Serum markers have been proposed as alternatives to biopsy. We attempted to develop a novel algorithm for detection of advanced fibrosis based on a more efficient combination of serological markers and to compare this with established algorithms. We included 158 patients with biopsy-proven NAFLD. Of these, 38 had advanced fibrosis. The following fibrosis algorithms were calculated: NAFLD fibrosis score, BARD, NIKEI, NASH-CRN regression score, APRI, FIB-4, King's score, GUCI, Lok index, Forns score, and ELF. The study population was randomly divided into a training and a validation group. A multiple logistic regression analysis using bootstrapping methods was applied to the training group. Among the many variables analyzed, age, fasting glucose, hyaluronic acid and AST were included, and a model (LINKI-1) for predicting advanced fibrosis was created. Moreover, these variables were combined with platelet count in a mathematical way that exaggerates their opposing effects, and alternative models (LINKI-2) were also created. Models were compared using the area under the receiver operating characteristic curve (AUROC). Of the established algorithms, FIB-4 and King's score had the best diagnostic accuracy, with AUROCs of 0.84 and 0.83, respectively. Higher accuracy was achieved with the novel LINKI algorithms: the AUROC in the total cohort was 0.91 for LINKI-1 and 0.89 for the LINKI-2 models. The LINKI algorithms for detection of advanced fibrosis in NAFLD showed better accuracy than established algorithms and should be validated in further studies including larger cohorts.
Guo, Weian; Si, Chengyong; Xue, Yu; Mao, Yanfen; Wang, Lei; Wu, Qidi
2017-05-04
Particle Swarm Optimization (PSO) is a popular algorithm that is widely investigated and well implemented in many areas. However, the canonical PSO does not perform well in maintaining population diversity and thus often leads to premature convergence to local optima. To address this issue, we propose a variant of PSO named Grouping PSO with Personal-Best-Position (Pbest) Guidance (GPSO-PG), which maintains population diversity by preserving the diversity of exemplars. On one hand, we adopt a uniform random allocation strategy to assign particles to different groups, and in each group the losers learn from the winner. On the other hand, we employ the personal historical best position of each particle in social learning, rather than the current global best particle. In this way, exemplar diversity increases and the influence of the global best particle is eliminated. We test the proposed algorithm on the CEC 2008 and CEC 2010 benchmarks, which concern large-scale optimization problems (LSOPs). Compared with several current peer algorithms, GPSO-PG exhibits competitive performance in maintaining population diversity and obtains satisfactory performance on these problems.
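A schematic of the grouping-and-pbest update follows; the group size, inertia weight, and toy objective are our assumptions for illustration, not the paper's settings:

import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sum(x ** 2)               # toy objective
n, dim, group_size = 24, 10, 4
X = rng.uniform(-10, 10, (n, dim))
V = np.zeros((n, dim))
pbest = X.copy()
pbest_f = np.array([f(x) for x in X])

for it in range(300):
    order = rng.permutation(n)             # uniform random group allocation
    for g in order.reshape(-1, group_size):
        winner = min(g, key=lambda i: pbest_f[i])
        for i in g:
            if i == winner:
                continue                   # the winner keeps exploring as-is
            r1, r2 = rng.random(dim), rng.random(dim)
            # losers are guided by personal bests, not a global best
            V[i] = 0.7 * V[i] + r1 * (pbest[i] - X[i]) \
                   + r2 * (pbest[winner] - X[i])
            X[i] += V[i]
            fx = f(X[i])
            if fx < pbest_f[i]:            # update personal best
                pbest_f[i], pbest[i] = fx, X[i].copy()

print(pbest_f.min())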
Muir, Susan W; Berg, Katherine; Chesworth, Bert; Klar, Neil; Speechley, Mark
2010-01-01
Evaluate the ability of the American and British Geriatrics Society fall prevention guideline's screening algorithm to identify and stratify future fall risk in community-dwelling older adults. Prospective cohort of community-dwelling older adults (n = 117) aged 65 to 90 years. Fall history, balance, and gait were measured during a comprehensive geriatric assessment at baseline. Falls data were collected monthly for 1 year. The outcomes of any fall and any injurious fall were evaluated. The algorithm stratified participants into 4 hierarchal risk categories. Fall risk was 33% and 68% for the "no intervention" and "comprehensive fall evaluation required" groups, respectively. The relative risk estimate for falling comparing participants in the 2 intervention groups was 2.08 (95% CI 1.42-3.05) for any fall and 2.60 (95% CI 1.53-4.42) for any injurious fall. Prognostic accuracy values were: sensitivity of 0.50 (95% CI 0.36-0.64) and specificity of 0.82 (95% CI 0.70-0.90) for any fall; and sensitivity of 0.56 (95% CI 0.38-0.72) and specificity of 0.78 (95% CI 0.67-0.86) for any injurious fall. The algorithm was able to identify and stratify fall risk for each fall outcome, though the values of prognostic accuracy demonstrate moderate clinical utility. The recommendation of fall evaluation for individuals in the highest risk groups appears supported, though the recommendation of no intervention in the lowest risk groups may not address their needs for fall prevention interventions. Further evaluation of the algorithm is recommended to refine the identification of fall risk in community-dwelling older adults.
Bjoerke-Bertheussen, Jeanette; Schoeyen, Helle; Andreassen, Ole A; Malt, Ulrik F; Oedegaard, Ketil J; Morken, Gunnar; Sundet, Kjetil; Vaaler, Arne E; Auestad, Bjoern; Kessler, Ute
2017-12-21
Electroconvulsive therapy is an effective treatment for bipolar depression, but there are concerns about whether it causes long-term neurocognitive impairment. In this multicenter randomized controlled trial, in-patients with treatment-resistant bipolar depression were randomized to either algorithm-based pharmacologic treatment or right unilateral electroconvulsive therapy. After the 6-week treatment period, all of the patients received maintenance pharmacotherapy as recommended by their clinician guided by a relevant treatment algorithm. Patients were assessed at baseline and at 6 months. Neurocognitive functions were assessed using the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) Consensus Cognitive Battery, and autobiographical memory consistency was assessed using the Autobiographical Memory Interview-Short Form. Seventy-three patients entered the trial, of whom 51 and 26 completed neurocognitive assessments at baseline and 6 months, respectively. The MATRICS Consensus Cognitive Battery composite score improved by 4.1 points in both groups (P = .042) from baseline to 6 months (from 40.8 to 44.9 and from 41.9 to 46.0 in the algorithm-based pharmacologic treatment and electroconvulsive therapy groups, respectively). The Autobiographical Memory Interview-Short Form consistency scores were reduced in both groups (72.3% vs 64.3% in the algorithm-based pharmacologic treatment and electroconvulsive therapy groups, respectively; P = .085). This study did not find that right unilateral electroconvulsive therapy caused long-term impairment in neurocognitive functions compared to algorithm-based pharmacologic treatment in bipolar depression as measured using standard neuropsychological tests, but due to the low number of patients in the study the results should be interpreted with caution. ClinicalTrials.gov: NCT00664976. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.
Mei, Gang; Xu, Nengxiong; Xu, Liangliang
2016-01-01
This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on modern Graphics Processing Unit (GPU). The presented algorithm is an improvement of our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest neighbors (kNN) search. In AIDW, it needs to find several nearest neighboring data points for each interpolated point to adaptively determine the power parameter; and then the desired prediction value of the interpolated point is obtained by weighted interpolating using the power parameter. In this work, we develop a fast kNN search approach based on the space-partitioning data structure, even grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of the stages of kNN search and weighted interpolating. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.
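A serial sketch of the two stages, kNN search via a k-d tree followed by adaptive distance weighting, is shown below; the rule mapping neighbor density to the power parameter is a placeholder of ours, not the paper's formula, and the GPU version replaces the tree with the even-grid structure:

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
pts = rng.random((1000, 2))                 # known data points
vals = np.sin(pts[:, 0] * 6) + pts[:, 1]    # known values
tree = cKDTree(pts)                         # fast kNN structure

def aidw(query, k=8):
    d, idx = tree.query(query, k=k)         # stage 1: kNN search
    # adapt the power parameter to local point density (placeholder rule)
    alpha = 1.0 + 2.0 * d.mean(axis=1) / d.mean()
    w = 1.0 / (d + 1e-12) ** alpha[:, None]  # stage 2: weighted interpolation
    return (w * vals[idx]).sum(axis=1) / w.sum(axis=1)

queries = rng.random((5, 2))
print(aidw(queries))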
NASA Astrophysics Data System (ADS)
Jaenisch, Holger M.; Handley, James W.; Hicklen, Michael L.
2007-04-01
This paper describes a novel capability for modeling known idea propagation transformations and predicting responses to new ideas from geopolitical groups. Ideas are captured using semantic words that are text based and bear cognitive definitions. We demonstrate a unique algorithm for converting these into analytical predictive equations. Using the illustrative idea of "proposing a gasoline price increase of $1 per gallon from $2" and its changing perceived impact throughout 5 demographic groups, we identify 13 cost of living Diplomatic, Information, Military, and Economic (DIME) features common across all 5 demographic groups. This enables the modeling and monitoring of the Political, Military, Economic, Social, Information, and Infrastructure (PMESII) effects of each group's response to this idea and how their "perception" of this proposal changes. Our algorithm and results are summarized in this paper.
NASA Technical Reports Server (NTRS)
Koshak, William J.
2010-01-01
This viewgraph presentation describes the significant progress made in the flash-type discrimination algorithm development. The contents include: 1) Highlights of Progress for GLM-R3 Flash-Type discrimination Algorithm Development; 2) Maximum Group Area (MGA) Data; 3) Retrieval Errors from Simulations; and 4) Preliminary Global-scale Retrieval.
Koller, Tomas; Kollerova, Jana; Huorka, Martin; Meciarova, Iveta; Payer, Juraj
2014-10-01
Staging of liver fibrosis is recommended in the management of hepatitis C as an argument for treatment priority. Our aim was to construct a noninvasive algorithm to predict significant liver fibrosis (SLF) using common biochemical markers and compare it with some existing models. The study group included 104 consecutive cases; SLF was defined as Ishak fibrosis stage greater than 2. The patient population was assigned randomly to a training and a validation group of 52 cases each. The training group was used to construct the algorithm from the parameters with the best predictive value. Each parameter was assigned a score that was added to the noninvasive fibrosis score (NFS). The accuracy of the NFS in predicting SLF was tested in the validation group and compared with the APRI, FIB4, and Forns models. Our algorithm used age, alkaline phosphatase, ferritin, APRI, α2-macroglobulin, and insulin, and the NFS ranged from -4 to 5. The probability of SLF was 2.6% versus 77.1% for NFS<0 and NFS>0, respectively, leaving NFS=0 in a gray zone (29.8% of cases). The area under the receiver operating characteristic curve was 0.895 and 0.886, with a specificity, sensitivity, and diagnostic accuracy of 85.1, 92.3, and 87.5% versus 77.8, 100, and 87.9% for the training and the validation group, respectively. In comparison, the area under the receiver operating characteristic curve for APRI=0.810, FIB4=0.781, and Forns=0.703, with diagnostic accuracies of 83.9, 72.3, and 62% and gray-zone cases in 46.15, 37.5, and 44.2%. We devised an algorithm to calculate the NFS to predict SLF with good accuracy, fewer cases in the gray zone, and a straightforward clinical interpretation. The NFS could be used for the initial evaluation of treatment priority.
Ducie, Jennifer A; Eriksson, Ane Gerda Zahl; Ali, Narisha; McGree, Michaela E; Weaver, Amy L; Bogani, Giorgio; Cliby, William A; Dowdy, Sean C; Bakkum-Gamez, Jamie N; Soslow, Robert A; Keeney, Gary L; Abu-Rustum, Nadeem R; Mariani, Andrea; Leitao, Mario M
2017-12-01
To determine if a sentinel lymph node (SLN) mapping algorithm will detect metastatic nodal disease in patients with intermediate-/high-risk endometrial carcinoma. Patients were identified and surgically staged at two collaborating institutions. The historical cohort (2004-2008) at one institution included patients undergoing complete pelvic and paraaortic lymphadenectomy to the renal veins (LND cohort). At the second institution an SLN mapping algorithm, including pathologic ultra-staging, was performed (2006-2013) (SLN cohort). Intermediate-risk was defined as endometrioid histology (any grade) with ≥50% myometrial invasion; high-risk as serous or clear cell histology (any myometrial invasion). Patients with gross peritoneal disease were excluded. Isolated tumor cells, micro-metastases, and macro-metastases were considered node-positive. We identified 210 patients in the LND cohort and 202 in the SLN cohort. Nodal assessment was performed for most patients. In the intermediate-risk group, stage IIIC disease was diagnosed in 30/107 (28.0%) (LND) and 29/82 (35.4%) (SLN) (P=0.28). In the high-risk group, stage IIIC disease was diagnosed in 20/103 (19.4%) (LND) and 26 (21.7%) (SLN) (P=0.68). Paraaortic lymph node (LN) assessment was performed significantly more often in the intermediate-/high-risk groups in the LND cohort (P<0.001). In the intermediate-risk group, paraaortic LN metastases were detected in 20/96 (20.8%) (LND) vs. 3/28 (10.7%) (SLN) (P=0.23). In the high-risk group, paraaortic LN metastases were detected in 13/82 (15.9%) (LND) and 10/56 (17.9%) (SLN) (P=0.76). The SLN mapping algorithm provides similar detection rates of stage IIIC endometrial cancer. The SLN algorithm does not compromise overall detection compared to standard LND. Copyright © 2017 Elsevier Inc. All rights reserved.
Modified kernel-based nonlinear feature extraction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, J.; Perkins, S. J.; Theiler, J. P.
2002-01-01
Feature Extraction (FE) techniques are widely used in many applications to pre-process data in order to reduce the complexity of subsequent processes. A group of kernel-based nonlinear FE (KFE) algorithms has attracted much attention due to their high performance. However, a serious limitation that is inherent in these algorithms -- the maximal number of features extracted by them is limited by the number of classes involved -- dramatically degrades their flexibility. Here we propose a modified version of those KFE algorithms (MKFE). This algorithm is developed from a special form of scatter-matrix, whose rank is not determined by the number of classes involved, and thus breaks the inherent limitation in those KFE algorithms. Experimental results suggest that the MKFE algorithm is especially useful when the training set is small.
Modelling the spread of innovation in wild birds.
Shultz, Thomas R; Montrey, Marcel; Aplin, Lucy M
2017-06-01
We apply three plausible algorithms in agent-based computer simulations to recent experiments on social learning in wild birds. Although some of the phenomena are simulated by all three learning algorithms, several manifestations of social conformity bias are simulated by only the approximate majority (AM) algorithm, which has roots in chemistry, molecular biology and theoretical computer science. The simulations generate testable predictions and provide several explanatory insights into the diffusion of innovation through a population. The AM algorithm's success raises the possibility of its usefulness in studying group dynamics more generally, in several different scientific domains. Our differential-equation model matches simulation results and provides mathematical insights into the dynamics of these algorithms. © 2017 The Author(s).
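The AM protocol itself is simple enough to simulate in a few lines; this generic sketch of the classical three-state protocol (not the paper's agent-based bird model) shows how a majority opinion spreads through random pairwise interactions:

import random

def am_protocol(n_x, n_y, seed=0):
    rng = random.Random(seed)
    pop = ["x"] * n_x + ["y"] * n_y             # no blank agents initially
    while "x" in pop and "y" in pop:
        i, j = rng.sample(range(len(pop)), 2)   # initiator i, responder j
        a, b = pop[i], pop[j]
        if {a, b} == {"x", "y"}:
            pop[j] = "b"                        # conflicting opinions -> blank
        elif b == "b" and a != "b":
            pop[j] = a                          # blanks adopt a committed state
    return "x" if "x" in pop else "y"

print(am_protocol(60, 40))  # the 60% majority wins with high probability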
The Research and Test of Fast Radio Burst Real-time Search Algorithm Based on GPU Acceleration
NASA Astrophysics Data System (ADS)
Wang, J.; Chen, M. Z.; Pei, X.; Wang, Z. Q.
2017-03-01
In order to satisfy the research needs of the Nanshan 25 m radio telescope of Xinjiang Astronomical Observatory (XAO) and study the key technology of the planned QiTai radio Telescope (QTT), the receiver group of XAO studied a GPU (Graphics Processing Unit) based real-time FRB search algorithm, developed from the original CPU (Central Processing Unit) based FRB search algorithm, and built an FRB real-time search system. The comparison of the GPU system and the CPU system shows that, while maintaining search accuracy, the GPU-accelerated algorithm is 35-45 times faster than the CPU algorithm.
A Computerized Decision Support System for Depression in Primary Care
Kurian, Benji T.; Trivedi, Madhukar H.; Grannemann, Bruce D.; Claassen, Cynthia A.; Daly, Ella J.; Sunderajan, Prabha
2009-01-01
Objective: In 2004, results from The Texas Medication Algorithm Project (TMAP) showed better clinical outcomes for patients whose physicians adhered to a paper-and-pencil algorithm compared to patients who received standard clinical treatment for major depressive disorder (MDD). However, implementation of and fidelity to the treatment algorithm among various providers was observed to be inadequate. A computerized decision support system (CDSS) for the implementation of the TMAP algorithm for depression has since been developed to improve fidelity and adherence to the algorithm. Method: This was a 2-group, parallel design, clinical trial (one patient group receiving MDD treatment from physicians using the CDSS and the other patient group receiving usual care) conducted at 2 separate primary care clinics in Texas from March 2005 through June 2006. Fifty-five patients with MDD (DSM-IV criteria) with no significant difference in disease characteristics were enrolled, 32 of whom were treated by physicians using CDSS and 23 were treated by physicians using usual care. The study's objective was to evaluate the feasibility and efficacy of implementing a CDSS to assist physicians acutely treating patients with MDD compared to usual care in primary care. Primary efficacy outcomes for depression symptom severity were based on the 17-item Hamilton Depression Rating Scale (HDRS17) evaluated by an independent rater. Results: Patients treated by physicians employing CDSS had significantly greater symptom reduction, based on the HDRS17, than patients treated with usual care (P < .001). Conclusions: The CDSS algorithm, utilizing measurement-based care, was superior to usual care for patients with MDD in primary care settings. Larger randomized controlled trials are needed to confirm these findings. Trial Registration: clinicaltrials.gov Identifier: NCT00551083 PMID:19750065
A computerized decision support system for depression in primary care.
Kurian, Benji T; Trivedi, Madhukar H; Grannemann, Bruce D; Claassen, Cynthia A; Daly, Ella J; Sunderajan, Prabha
2009-01-01
In 2004, results from The Texas Medication Algorithm Project (TMAP) showed better clinical outcomes for patients whose physicians adhered to a paper-and-pencil algorithm compared to patients who received standard clinical treatment for major depressive disorder (MDD). However, implementation of and fidelity to the treatment algorithm among various providers was observed to be inadequate. A computerized decision support system (CDSS) for the implementation of the TMAP algorithm for depression has since been developed to improve fidelity and adherence to the algorithm. This was a 2-group, parallel design, clinical trial (one patient group receiving MDD treatment from physicians using the CDSS and the other patient group receiving usual care) conducted at 2 separate primary care clinics in Texas from March 2005 through June 2006. Fifty-five patients with MDD (DSM-IV criteria) with no significant difference in disease characteristics were enrolled, 32 of whom were treated by physicians using CDSS and 23 were treated by physicians using usual care. The study's objective was to evaluate the feasibility and efficacy of implementing a CDSS to assist physicians acutely treating patients with MDD compared to usual care in primary care. Primary efficacy outcomes for depression symptom severity were based on the 17-item Hamilton Depression Rating Scale (HDRS(17)) evaluated by an independent rater. Patients treated by physicians employing CDSS had significantly greater symptom reduction, based on the HDRS(17), than patients treated with usual care (P < .001). The CDSS algorithm, utilizing measurement-based care, was superior to usual care for patients with MDD in primary care settings. Larger randomized controlled trials are needed to confirm these findings. clinicaltrials.gov Identifier: NCT00551083.
Behavioral Modeling for Mental Health using Machine Learning Algorithms.
Srividya, M; Mohanavalli, S; Bhalaji, N
2018-04-03
Mental health is an indicator of the emotional, psychological and social well-being of an individual. It determines how an individual thinks, feels and handles situations. Positive mental health helps one to work productively and realize one's full potential. Mental health is important at every stage of life, from childhood and adolescence through adulthood. Many factors contribute to mental health problems that lead to mental illnesses such as stress, social anxiety, depression, obsessive-compulsive disorder, drug addiction, and personality disorders. It is becoming increasingly important to detect the onset of mental illness in order to maintain a proper life balance. The nature of machine learning algorithms and Artificial Intelligence (AI) can be fully harnessed for predicting the onset of mental illness. Such applications, when implemented in real time, will benefit society by serving as a monitoring tool for individuals with deviant behavior. This research work proposes to apply various machine learning algorithms such as support vector machines, decision trees, naïve Bayes classifier, K-nearest neighbor classifier and logistic regression to identify the state of mental health in a target group. The responses obtained from the target group for the designed questionnaire were first subjected to unsupervised learning techniques. The labels obtained as a result of clustering were validated by computing the Mean Opinion Score. These cluster labels were then used to build classifiers to predict the mental health of an individual. Populations from various groups, such as high school students, college students and working professionals, were considered as target groups. The research presents an analysis of applying the aforementioned machine learning algorithms on the target groups and also suggests directions for future work.
Tan, Siok Swan; Chiarello, Pietro; Quentin, Wilm
2013-11-01
Researchers from 11 countries (Austria, England, Estonia, Finland, France, Germany, Ireland, Netherlands, Poland, Spain, and Sweden) compared how their Diagnosis-Related Group (DRG) systems deal with knee replacement cases. The study aims to assist knee surgeons and national authorities to optimize the grouping algorithms of their DRG systems. National or regional databases were used to identify hospital cases treated with a procedure of knee replacement. DRG classification algorithms and indicators of resource consumption were compared for those DRGs that together comprised at least 97% of cases. Five standardized case scenarios were defined and quasi-prices according to national DRG-based hospital payment systems ascertained. Grouping algorithms for knee replacement vary widely across countries: they classify cases according to different variables (between one and five classification variables) into diverging numbers of DRGs (between one and five DRGs). Even the most expensive DRGs generally have a cost index below 2.00, implying that grouping algorithms do not adequately account for cases that are more than twice as costly as the index DRG. Quasi-prices for the most complex case vary between €4,920 in Estonia and €14,081 in Spain. Most European DRG systems were observed to insufficiently consider the most important determinants of resource consumption. Several countries' DRG systems might be improved through the introduction of classification variables for revision of knee replacement or for the presence of complications or comorbidities. Ultimately, this would contribute to assuring adequate performance comparisons and fair hospital reimbursement on the basis of DRGs.
NASA Astrophysics Data System (ADS)
Nawi, Nazri Mohd.; Khan, Abdullah; Rehman, M. Z.
2015-05-01
Nature-inspired metaheuristic techniques provide derivative-free solutions to complex problems. One of the latest additions to this group of nature-inspired optimization procedures is the Cuckoo Search (CS) algorithm. Artificial Neural Network (ANN) training is an optimization task, since it seeks the optimal weight set of a neural network during the training process. Traditional training algorithms have limitations such as getting trapped in local minima and slow convergence rates. This study proposes a new technique, CSLM, that combines the best features of two known algorithms, back-propagation (BP) and the Levenberg-Marquardt (LM) algorithm, to improve the convergence speed of ANN training and avoid the local minima problem. Selected benchmark classification datasets are used for simulation. The experimental results show that the proposed Cuckoo Search with Levenberg-Marquardt algorithm performs better than the other algorithms used in this study.
Research on retailer data clustering algorithm based on Spark
NASA Astrophysics Data System (ADS)
Huang, Qiuman; Zhou, Feng
2017-03-01
Big data analysis is currently a hot topic in the IT field. Spark is a high-reliability, high-performance distributed parallel computing framework for big data sets. The k-means algorithm is one of the classical partitioning methods in clustering. In this paper, we study the k-means clustering algorithm on Spark. First, the principle of the algorithm is analyzed; then clustering analysis is carried out on supermarket customers through an experiment to find different shopping patterns. At the same time, this paper describes the parallelization of the k-means algorithm on the Spark distributed computing framework and gives a concrete design and implementation scheme. Two years of sales data from a supermarket are used to validate the proposed clustering algorithm and subdivide the customers; the clustering results are then analyzed to help enterprises apply different marketing strategies to different customer groups and improve sales performance.
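A minimal pyspark sketch of the customer-segmentation pipeline described here, using Spark's built-in MLlib k-means; the file name and feature columns are placeholders, not the paper's schema:

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("retailer-clustering").getOrCreate()
df = spark.read.csv("sales.csv", header=True, inferSchema=True)  # hypothetical file

# assemble numeric shopping features into a single vector column
assembler = VectorAssembler(inputCols=["visits", "basket_size", "spend"],
                            outputCol="features")
features = assembler.transform(df)

model = KMeans(k=5, seed=1, featuresCol="features").fit(features)
segmented = model.transform(features)           # adds a 'prediction' column
segmented.groupBy("prediction").count().show()  # customer-group sizes
spark.stop()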
Warfarin Pharmacogenomics in Diverse Populations.
Kaye, Justin B; Schultz, Lauren E; Steiner, Heidi E; Kittles, Rick A; Cavallari, Larisa H; Karnes, Jason H
2017-09-01
Genotype-guided warfarin dosing algorithms are a rational approach to optimize warfarin dosing and potentially reduce adverse drug events. Diverse populations, such as African Americans and Latinos, have greater variability in warfarin dose requirements and are at greater risk for experiencing warfarin-related adverse events compared with individuals of European ancestry. Although these data suggest that patients of diverse populations may benefit from improved warfarin dose estimation, the vast majority of literature on genotype-guided warfarin dosing, including data from prospective randomized trials, is in populations of European ancestry. Despite differing frequencies of variants by race/ethnicity, most evidence in diverse populations evaluates variants that are most common in populations of European ancestry. Algorithms that do not include variants important across race/ethnic groups are unlikely to benefit diverse populations. In some race/ethnic groups, development of race-specific or admixture-based algorithms may facilitate improved genotype-guided warfarin dosing algorithms above and beyond that seen in individuals of European ancestry. These observations should be considered in the interpretation of literature evaluating the clinical utility of genotype-guided warfarin dosing. Careful consideration of race/ethnicity and additional evidence focused on improving warfarin dosing algorithms across race/ethnic groups will be necessary for successful clinical implementation of warfarin pharmacogenomics. The evidence for warfarin pharmacogenomics has a broad significance for pharmacogenomic testing, emphasizing the consideration of race/ethnicity in discovery of gene-drug pairs and development of clinical recommendations for pharmacogenetic testing. © 2017 Pharmacotherapy Publications, Inc.
Iyer, Swathi; Shafran, Izhak; Grayson, David; Gates, Kathleen; Nigg, Joel; Fair, Damien
2013-01-01
Resting state functional connectivity MRI (rs-fcMRI) is a popular technique used to gauge the functional relatedness between regions in the brain for typical and special populations. Most of the work to date determines this relationship by using Pearson's correlation on BOLD fMRI timeseries. However, it has been recognized that there are at least two key limitations to this method. First, it is not possible to resolve direct and indirect connections/influences. Second, the direction of information flow between the regions cannot be differentiated. In the current paper, we follow up on recent work by Smith et al. (2011) and apply a Bayesian approach called the PC algorithm to both simulated and empirical data to determine whether these two factors can be discerned with group-average, as opposed to single-subject, functional connectivity data. When applied to simulated individual subjects, the algorithm performs well in determining indirect and direct connections but fails in determining directionality. However, when applied at the group level, the PC algorithm gives strong results for both indirect and direct connections and the direction of information flow. Applying the algorithm to empirical data, using a diffusion-weighted imaging (DWI) structural connectivity matrix as the baseline, the PC algorithm outperformed direct correlations. We conclude that, under certain conditions, the PC algorithm leads to an improved estimate of brain network structure compared to the traditional connectivity analysis based on correlations. PMID:23501054
Evaluation of registration, compression and classification algorithms. Volume 1: Results
NASA Technical Reports Server (NTRS)
Jayroe, R.; Atkinson, R.; Callas, L.; Hodges, J.; Gaggini, B.; Peterson, J.
1979-01-01
The registration, compression, and classification algorithms were selected on the basis that such a group would include most of the different and commonly used approaches. The results of the investigation indicate clear-cut, cost-effective choices for registering, compressing, and classifying multispectral imagery.
Schoenberg, Mike R; Lange, Rael T; Brickell, Tracey A; Saklofske, Donald H
2007-04-01
Neuropsychologic evaluation requires current test performance be contrasted against a comparison standard to determine if change has occurred. An estimate of premorbid intelligence quotient (IQ) is often used as a comparison standard. The Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) is a commonly used intelligence test. However, there is no method to estimate premorbid IQ for the WISC-IV, limiting the test's utility for neuropsychologic assessment. This study develops algorithms to estimate premorbid Full Scale IQ scores. Participants were the American WISC-IV standardization sample (N = 2172). The sample was randomly divided into 2 groups (development and validation). The development group was used to generate 12 algorithms. These algorithms were accurate predictors of WISC-IV Full Scale IQ scores in healthy children and adolescents. These algorithms hold promise as a method to predict premorbid IQ for patients with known or suspected neurologic dysfunction; however, clinical validation is required.
Content-aware dark image enhancement through channel division.
Rivera, Adin Ramirez; Ryu, Byungyong; Chae, Oksam
2012-09-01
The current contrast enhancement algorithms occasionally result in artifacts, overenhancement, and unnatural effects in the processed images. These drawbacks increase for images taken under poor illumination conditions. In this paper, we propose a content-aware algorithm that enhances dark images, sharpens edges, reveals details in textured regions, and preserves the smoothness of flat regions. The algorithm produces an ad hoc transformation for each image, adapting the mapping functions to each image's characteristics to produce the maximum enhancement. We analyze the contrast of the image in the boundary and textured regions, and group the information with common characteristics. These groups model the relations within the image, from which we extract the transformation functions. The results are then adaptively mixed, by considering the human vision system characteristics, to boost the details in the image. Results show that the algorithm can automatically process a wide range of images (e.g., mixed shadow and bright areas, outdoor and indoor lighting, and face images) without introducing artifacts, which is an improvement over many existing methods.
SU-E-T-446: Group-Sparsity Based Angle Generation Method for Beam Angle Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, H
2015-06-15
Purpose: This work is to develop an effective algorithm for beam angle optimization (BAO), with the emphasis on enabling further improvement from existing treatment-dependent templates based on clinical knowledge and experience. Methods: The proposed BAO algorithm utilizes a priori beam angle templates as the initial guess, and iteratively generates angular updates for this initial set (the angle generation method), with improved dose conformality that is quantitatively measured by the objective function. During each iteration, we select "the test angle" in the initial set, and use group-sparsity based fluence map optimization to identify "the candidate angle" for updating "the test angle". In this optimization, all the angles in the initial set except "the test angle", namely "the fixed set", are set free, i.e., with no group-sparsity penalty, and the rest of the angles including "the test angle" are in "the working set". "The candidate angle" is then selected as the angle in "the working set" with locally maximal group sparsity and the smallest objective function value, and it replaces "the test angle" if "the fixed set" with "the candidate angle" has a smaller objective function value under the standard fluence map optimization (with no group-sparsity regularization). The other angles in the initial set are in turn selected as "the test angle" for angular updates, and this chain of updates is iterated until no further new angular update is identified for a full loop. Results: Tests using the MGH public prostate dataset demonstrated the effectiveness of the proposed BAO algorithm; for example, the optimized angular set from the proposed BAO algorithm was better than the MGH template. Conclusion: A new BAO algorithm is proposed based on the angle generation method via group sparsity, with improved dose conformality relative to the given template. Hao Gao was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)
Relation between brain architecture and mathematical ability in children: a DBM study.
Han, Zhaoying; Davis, Nicole; Fuchs, Lynn; Anderson, Adam W; Gore, John C; Dawant, Benoit M
2013-12-01
Population-based studies indicate that between 5 and 9 percent of US children exhibit significant deficits in mathematical reasoning, yet little is understood about the brain morphological features related to mathematical performance. In this work, deformation-based morphometry (DBM) analyses have been performed on magnetic resonance images of the brains of 79 third graders to investigate whether there is a correlation between brain morphological features and mathematical proficiency. Group comparison was also performed between Math Difficulties (MD; the worst math performers) and Normal Controls (NC), where each subgroup consists of 20 age- and gender-matched subjects. DBM analysis is based on the analysis of the deformation fields generated by non-rigid registration algorithms, which warp the individual volumes to a common space. To evaluate the effect of registration algorithms on DBM results, five non-rigid registration algorithms have been used: (1) the Adaptive Bases Algorithm (ABA); (2) the Image Registration Toolkit (IRTK); (3) the FSL Nonlinear Image Registration Tool; (4) the Automatic Registration Tool (ART); and (5) the normalization algorithm available in SPM8. The deformation field magnitude (DFM) was used to measure the displacement at each voxel, and the Jacobian determinant (JAC) was used to quantify local volumetric changes. Results show there are no statistically significant volumetric differences between the NC and the MD groups using JAC. However, DBM analysis using DFM found statistically significant anatomical variations between the two groups around the left occipital-temporal cortex, left orbital-frontal cortex, and right insular cortex. Regions of agreement between at least two algorithms based on voxel-wise analysis were used to define Regions of Interest (ROIs) to perform an ROI-based correlation analysis on all 79 volumes. Correlations between average DFM values and standard mathematical scores over these regions were found to be significant. We also found that the choice of registration algorithm has an impact on DBM-based results, so we recommend using more than one algorithm when conducting DBM studies. To the best of our knowledge, this is the first study that uses DBM to investigate brain anatomical features related to mathematical performance in a relatively large population of children.
Symmetrical group theory for mathematical complexity reduction of digital holograms
NASA Astrophysics Data System (ADS)
Perez-Ramirez, A.; Guerrero-Juk, J.; Sanchez-Lara, R.; Perez-Ramirez, M.; Rodriguez-Blanco, M. A.; May-Alarcon, M.
2017-10-01
This work presents the use of mathematical group theory through an algorithm to reduce the multiplicative computational complexity in the process of creating digital holograms. An object is considered as a set of point sources using mathematical symmetry properties of both the core in the Fresnel integral and the image, where the image is modeled using group theory. This algorithm has multiplicative complexity equal to zero and an additive complexity of (k - 1) × N for the case of sparse matrices and binary images, where k is the number of nonzero pixels and N is the total number of points in the image.
‘Toning’ up hypotonia assessment: A proposal and critique
Joubert, Robin W.E.
2016-01-01
Background: Clinical assessment of hypotonia is challenging due to the subjective nature of the initial clinical evaluation. This poses dilemmas for practitioners in gaining accuracy, given that the presentation of hypotonia can be either a non-threatening or malevolent sign. The research question posed was how clinical assessment can be improved, given the current contentions expressed in the scientific literature. Objectives: This paper describes the development and critique of a clinical algorithm to aid the assessment of hypotonia. Methods: An initial exploratory sequential phase, consisting of a systematic review, a survey amongst clinicians and a Delphi process, assisted in the development of the algorithm, which is presented within the framework of the International Classification of Functioning, Disability and Health. The ensuing critique followed a qualitative emergent–systematic focus group design with a purposive sample of 59 clinicians. Data were analysed using semantical content analysis and are presented thematically with analytical considerations. Results: This study culminated in the development of an evidence-based clinical algorithm for practice. The qualitative critique of the algorithm considered aspects such as inadequacies, misconceptions and omissions; strengths; clinical use; resource implications; and recommendations. Conclusions: The first prototype and critique of a clinical algorithm to assist the clinical assessment of hypotonia in children has been described. Barriers highlighted include aspects related to knowledge gaps of clinicians, issues around user-friendliness and formatting concerns. Strengths identified by the critique included aspects related to the evidence-based nature of the criteria within the algorithm, the suitability of the algorithm in being merged with or extending current practice, the potential of the algorithm in aiding more accurate decision-making, the suitability of the algorithm across age groups and the logical flow. These findings provide a starting point towards ascertaining the clinical utility of the algorithm as an essential step towards evidence-based praxis. PMID:28730054
Optimizing Search and Ranking in Folksonomy Systems by Exploiting Context Information
NASA Astrophysics Data System (ADS)
Abel, Fabian; Henze, Nicola; Krause, Daniel
Tagging systems enable users to annotate resources with freely chosen keywords. The evolving set of tag assignments is called a folksonomy, and some approaches already exist that exploit folksonomies to improve resource retrieval. In this paper, we analyze and compare graph-based ranking algorithms: FolkRank and SocialPageRank. We enhance these algorithms by exploiting the context of tags, and evaluate the results on the GroupMe! dataset. In GroupMe!, users can organize and maintain arbitrary Web resources in self-defined groups. When users annotate resources in GroupMe!, this can be interpreted in the context of a certain group. The grouping activity itself is easy for users to perform. However, it delivers valuable semantic information about resources and their context. We present GRank, which uses the context information to improve and optimize the detection of relevant search results, and compare different strategies for ranking result lists in folksonomy systems.
On the Effect of Group Structures on Ranking Strategies in Folksonomies
NASA Astrophysics Data System (ADS)
Abel, Fabian; Henze, Nicola; Krause, Daniel; Kriesell, Matthias
Folksonomies have shown interesting potential for improving information discovery and exploration. Recent folksonomy systems explore the use of tag assignments, which combine Web resources with annotations (tags), and the users that have created the annotations. This article investigates the effect of grouping resources in folksonomies, i.e. creating sets of resources, and using this additional structure for the tasks of search & ranking, and for tag recommendations. We propose several group-sensitive extensions of graph-based search and recommendation algorithms, and compare them with non-group-sensitive versions. Our experiments show that the quality of search result ranking can be significantly improved by introducing and exploiting the grouping of resources (one-tailed t-test, level of significance α=0.05). Furthermore, tag recommendations profit from the group context, and it is possible to make very good recommendations even for untagged resources, which currently known tag recommendation algorithms cannot do.
Ballenger, James C.; Davidson, Jonathan R. T.; Lecrubier, Yves; Nutt, David J.
2001-04-01
The International Consensus Group on Depression and Anxiety has held 7 meetings over the last 3 years that focused on depression and specific anxiety disorders. During the course of the meeting series, a number of common themes have developed. At the last meeting of the Consensus Group, we reviewed these areas of commonality across the spectrum of depression and anxiety disorders. With the aim of improving the recognition and management of depression and anxiety in the primary care setting, we developed an algorithm that is presented in this article. We attempted to balance currently available scientific knowledge about the treatment of these disorders and to reformat it to provide an acceptable algorithm that meets the practical aspects of recognizing and treating these disorders in primary care.
Discovering loose group movement patterns from animal trajectories
Wang, Yuwei; Luo, Ze; Xiong, Yan; Prosser, Diann J.; Newman, Scott H.; Takekawa, John Y.; Yan, Baoping
2015-01-01
The technical advances of positioning technologies enable us to track animal movements at finer spatial and temporal scales, and further help to discover a variety of complex interactive relationships. In this paper, considering the loose gathering of real-life groups' members during movement, we propose two kinds of loose group movement patterns and corresponding discovery algorithms. First, we propose the weakly consistent group movement pattern, which allows the gathering of only a part of the members and the temporary leave of individuals from the whole during the movements. To tolerate the high dispersion of the group at some moments (i.e. to accommodate the discontinuity of the group's gatherings), we further devise the weakly consistent and continuous group movement pattern. Extensive experimental analysis and comparison on real and synthetic data show that the group pattern discovery algorithms proposed in this paper accommodate the real-life frequent divergences of members during the movements, can discover more complete memberships, and have considerable performance.
Tacchella, Andrea; Romano, Silvia; Ferraldeschi, Michela; Salvetti, Marco; Zaccaria, Andrea; Crisanti, Andrea; Grassi, Francesca
2017-01-01
Background: Multiple sclerosis has an extremely variable natural course. In most patients, disease starts with a relapsing-remitting (RR) phase, which proceeds to a secondary progressive (SP) form. The duration of the RR phase is hard to predict, and to date predictions on the rate of disease progression remain suboptimal. This limits the opportunity to tailor therapy on an individual patient's prognosis, in spite of the choice of several therapeutic options. Approaches to improve clinical decisions, such as collective intelligence of human groups and machine learning algorithms are widely investigated. Methods: Medical students and a machine learning algorithm predicted the course of disease on the basis of randomly chosen clinical records of patients that attended at the Multiple Sclerosis service of Sant'Andrea hospital in Rome. Results: A significant improvement of predictive ability was obtained when predictions were combined with a weight that depends on the consistence of human (or algorithm) forecasts on a given clinical record. Conclusions: In this work we present proof-of-principle that human-machine hybrid predictions yield better prognoses than machine learning algorithms or groups of humans alone. To strengthen this preliminary result, we propose a crowdsourcing initiative to collect prognoses by physicians on an expanded set of patients.
Yorio, Jeff; Viswanathan, Sundeep; See, Raphael; Uchal, Linda; McWhorter, Jo Ann; Spencer, Nali; Murphy, Sabina; Khera, Amit; de Lemos, James A; McGuire, Darren K
2008-01-01
The application of disease management algorithms by physician extenders has been shown to improve therapeutic adherence in selected populations. It is unknown whether this strategy would improve adherence to secondary prevention goals after acute coronary syndromes (ACSs) in a largely indigent county hospital setting. Patients admitted for ACS were randomized at the time of discharge to usual follow-up care versus the same care with the addition of a physician extender visit. Physician extender visits were conducted according to a treatment algorithm based on contemporary practice guidelines. Groups were compared using the primary end point of achievement of low-density lipoprotein treatment goals at 3 months after discharge and achievement of additional evidence-based practice goals. One hundred forty consecutive patients were randomized. A similar proportion of patients returned for study follow-up in both groups at 3 months (54/68 [79%] in the usual care group vs 57/72 [79%] in the intervention group; P = 0.97). Among those completing the 3-month visit, a low-density lipoprotein cholesterol level less than 100 mg/dL was achieved in 37 (69%) of the usual care patients compared with 35 (57%) of those in the intervention group (P = 0.43). There was no statistical difference in implementation of therapeutic lifestyle changes (smoking cessation, cardiac rehabilitation, or exercise) between groups. Prescription rates of evidence-based therapeutics at 3 months were similar in both groups. The implementation of a post-ACS clinic run by a physician extender applying a disease management algorithm did not measurably improve adherence to evidence-based secondary prevention treatment goals. Despite initially high rates of evidence-based treatment at discharge, adherence with follow-up appointments and sustained implementation of evidence-based therapies remains a significant challenge in this high-risk cohort.
Jothi, R; Mohanty, Sraban Kumar; Ojha, Aparajita
2016-04-01
Gene expression data clustering is an important biological process in DNA microarray analysis. Although there have been many clustering algorithms for gene expression analysis, finding a suitable and effective clustering algorithm is always a challenging problem due to the heterogeneous nature of gene profiles. Minimum Spanning Tree (MST) based clustering algorithms have been successfully employed to detect clusters of varying shapes and sizes. This paper proposes a novel clustering algorithm using eigenanalysis on a Minimum Spanning Tree based neighborhood graph (E-MST). As the MST of a set of points reflects the similarity of the points with their neighborhood, the proposed algorithm employs a similarity graph obtained from k′ rounds of MST (the k′-MST neighborhood graph). By studying the spectral properties of the similarity matrix obtained from the k′-MST graph, the proposed algorithm achieves improved clustering results. We demonstrate the efficacy of the proposed algorithm on 12 gene expression datasets. Experimental results show that the proposed algorithm performs better than the standard clustering algorithms.
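To make the E-MST idea concrete, here is a minimal sketch assuming Euclidean distances between expression profiles and the SciPy/scikit-learn stack; the abstract does not specify the exact spectral embedding, so using the leading adjacency eigenvectors followed by k-means below is our assumption, not the paper's exact construction.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def kmst_similarity(X, k_rounds=2):
    """Union of k' successive MSTs: each round removes the previous tree's
    edges (zero entries mean 'no edge' for SciPy) and spans the points again."""
    D = cdist(X, X)
    W = np.zeros_like(D)
    work = D.copy()
    for _ in range(k_rounds):
        T = minimum_spanning_tree(work).toarray()
        used = (T > 0) | (T > 0).T
        W[used] = 1.0       # unweighted similarity between MST neighbors
        work[used] = 0.0    # forbid reuse of these edges in the next round
    return W

def e_mst(X, n_clusters, k_rounds=2):
    """Eigen-embed the k'-MST similarity graph, then cluster with k-means."""
    W = kmst_similarity(X, k_rounds)
    _, vecs = np.linalg.eigh(W)
    embedding = vecs[:, -n_clusters:]       # leading eigenvectors
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embedding)

# toy usage on two Gaussian blobs standing in for gene expression profiles
X = np.vstack([np.random.randn(30, 5), np.random.randn(30, 5) + 4.0])
print(e_mst(X, n_clusters=2))
```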
Automated Speech Rate Measurement in Dysarthria.
Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc
2015-06-01
In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. The new algorithm was trained and tested using Dutch speech samples of 36 speakers with no history of speech impairment and 40 speakers with mild to moderate dysarthria. We tested the algorithm under various conditions: according to speech task type (sentence reading, passage reading, and storytelling) and algorithm optimization method (speaker group optimization and individual speaker optimization). Correlations between automated and human SR determination were calculated for each condition. High correlations between automated and human SR determination were found in the various testing conditions. The new algorithm measures SR in a sufficiently reliable manner. It is currently being integrated in a clinical software tool for assessing and managing prosody in dysarthric speech. Further research is needed to fine-tune the algorithm to severely dysarthric speech, to make the algorithm less sensitive to background noise, and to evaluate how the algorithm deals with syllabic consonants.
Leger, Stefan; Zwanenburg, Alex; Pilz, Karoline; Lohaus, Fabian; Linge, Annett; Zöphel, Klaus; Kotzerke, Jörg; Schreiber, Andreas; Tinhofer, Inge; Budach, Volker; Sak, Ali; Stuschke, Martin; Balermpas, Panagiotis; Rödel, Claus; Ganswindt, Ute; Belka, Claus; Pigorsch, Steffi; Combs, Stephanie E; Mönnich, David; Zips, Daniel; Krause, Mechthild; Baumann, Michael; Troost, Esther G C; Löck, Steffen; Richter, Christian
2017-10-16
Radiomics applies machine learning algorithms to quantitative imaging data to characterise the tumour phenotype and predict clinical outcome. For the development of radiomics risk models, a variety of different algorithms is available and it is not clear which one gives optimal results. Therefore, we assessed the performance of 11 machine learning algorithms combined with 12 feature selection methods by the concordance index (C-Index), to predict loco-regional tumour control (LRC) and overall survival for patients with head and neck squamous cell carcinoma. The considered algorithms are able to deal with continuous time-to-event survival data. Feature selection and model building were performed on a multicentre cohort (213 patients) and validated using an independent cohort (80 patients). We found several combinations of machine learning algorithms and feature selection methods which achieve similar results, e.g. C-Index = 0.71 and BT-COX: C-Index = 0.70 in combination with Spearman feature selection. Using the best performing models, patients were stratified into groups of low and high risk of recurrence. Significant differences in LRC were obtained between both groups on the validation cohort. Based on the presented analysis, we identified a subset of algorithms which should be considered in future radiomics studies to develop stable and clinically relevant predictive models for time-to-event endpoints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Latifi, Kujtim; Oliver, Jasmine; Department of Physics, University of South Florida, Tampa, Florida
Purpose: Pencil beam (PB) and collapsed cone convolution (CCC) dose calculation algorithms differ significantly when used in the thorax. However, such differences have seldom been previously directly correlated with outcomes of lung stereotactic ablative body radiation (SABR). Methods and Materials: Data for 201 non-small cell lung cancer patients treated with SABR were analyzed retrospectively. All patients were treated with 50 Gy in 5 fractions of 10 Gy each. The radiation prescription mandated that 95% of the planning target volume (PTV) receive the prescribed dose. One hundred sixteen patients were planned with BrainLab treatment planning software (TPS) with the PB algorithm and treated on a Novalis unit. The other 85 were planned on the Pinnacle TPS with the CCC algorithm and treated on a Varian linac. Treatment planning objectives were numerically identical for both groups. The median follow-up times were 24 and 17 months for the PB and CCC groups, respectively. The primary endpoint was local/marginal control of the irradiated lesion. Gray's competing risk method was used to determine the statistical differences in local/marginal control rates between the PB and CCC groups. Results: Twenty-five patients planned with the PB algorithm and 4 patients planned with the CCC algorithm to the same nominal doses experienced local recurrence. There was a statistically significant difference in recurrence rates between the PB and CCC groups (hazard ratio 3.4 [95% confidence interval: 1.18-9.83], Gray's test P=.019). The differences (Δ) between the 2 algorithms for target coverage were as follows: ΔD99(GITV) = 7.4 Gy, ΔD99(PTV) = 10.4 Gy, ΔV90(GITV) = 13.7%, ΔV90(PTV) = 37.6%, ΔD95(PTV) = 9.8 Gy, and ΔD(ISO) = 3.4 Gy, where GITV denotes the gross internal tumor volume. Conclusions: Local control rates in patients who were planned to the same nominal dose with the PB and CCC algorithms were statistically significantly different. Possible alternative explanations are described in the report, although they are not thought likely to explain the difference. We conclude that the difference is due to relative dosimetric underdosing of tumors with the PB algorithm.
Non preemptive soft real time scheduler: High deadline meeting rate on overload
NASA Astrophysics Data System (ADS)
Khalib, Zahereel Ishwar Abdul; Ahmad, R. Badlishah; El-Shaikh, Mohamed
2015-05-01
While preemptive scheduling has gained more attention among researchers, current work in non-preemptive scheduling has shown promising results for soft real time job scheduling. In this paper we present a non-preemptive scheduling algorithm meant for soft real time applications, which is capable of producing better performance during overload while maintaining excellent performance during normal load. The approach taken by this algorithm has shown more promising results compared to other algorithms, including its immediate predecessor. We present the analysis made prior to the inception of the algorithm as well as simulation results comparing our algorithm, named gutEDF, with EDF and gEDF. We are convinced that grouping jobs utilizing purely dynamic parameters produces better performance.
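For context, the sketch below simulates the plain non-preemptive EDF baseline that gutEDF is compared against; the grouping of jobs by dynamic parameters that defines gutEDF itself is not detailed in the abstract and is not reproduced, and the job tuples are hypothetical.

```python
import heapq

def non_preemptive_edf(jobs):
    """jobs: (release, wcet, deadline) tuples; once started, a job runs to
    completion. Returns (#deadlines met, #deadlines missed)."""
    jobs, ready = sorted(jobs), []
    t = met = missed = i = 0
    while i < len(jobs) or ready:
        while i < len(jobs) and jobs[i][0] <= t:
            r, c, d = jobs[i]
            heapq.heappush(ready, (d, c, r))
            i += 1
        if not ready:
            t = jobs[i][0]               # idle until the next release
            continue
        d, c, r = heapq.heappop(ready)   # earliest absolute deadline first
        t += c                           # non-preemptive execution
        if t <= d:
            met += 1
        else:
            missed += 1
    return met, missed

print(non_preemptive_edf([(0, 3, 5), (1, 2, 4), (2, 1, 9)]))  # -> (2, 1)
```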
A Survey of the Use of Iterative Reconstruction Algorithms in Electron Microscopy
Otón, J.; Vilas, J. L.; Kazemi, M.; Melero, R.; del Caño, L.; Cuenca, J.; Conesa, P.; Gómez-Blanco, J.; Marabini, R.; Carazo, J. M.
2017-01-01
One of the key steps in Electron Microscopy is the tomographic reconstruction of a three-dimensional (3D) map of the specimen being studied from a set of two-dimensional (2D) projections acquired at the microscope. This tomographic reconstruction may be performed with different reconstruction algorithms that can be grouped into several large families: direct Fourier inversion methods, back-projection methods, Radon methods, or iterative algorithms. In this review, we focus on the latter family of algorithms, explaining the mathematical rationale behind the different algorithms in this family as they have been introduced in the field of Electron Microscopy. We cover their use in Single Particle Analysis (SPA) as well as in Electron Tomography (ET). PMID:29312997
Redd, Andrew M; Gundlapalli, Adi V; Divita, Guy; Carter, Marjorie E; Tran, Le-Thuy; Samore, Matthew H
2017-07-01
Templates in text notes pose challenges for automated information extraction algorithms. We propose a method that identifies novel templates in plain text medical notes. The identification can then be used to either include or exclude templates when processing notes for information extraction. The two-module method is based on the framework of information foraging and addresses the hypothesis that documents containing templates and the templates within those documents can be identified by common features. The first module takes documents from the corpus and groups those with common templates. This is accomplished through a binned word count hierarchical clustering algorithm. The second module extracts the templates. It uses the groupings and performs a longest common subsequence (LCS) algorithm to obtain the constituent parts of the templates. The method was developed and tested on a random document corpus of 750 notes derived from a large database of US Department of Veterans Affairs (VA) electronic medical notes. The grouping module, using hierarchical clustering, identified 23 groups with 3 documents or more, consisting of 120 documents from the 750 documents in our test corpus. Of these, 18 groups had at least one common template that was present in all documents in the group, for a positive predictive value of 78%. The LCS extraction module performed with 100% positive predictive value, 94% sensitivity, and 83% negative predictive value. The human review determined that in 4 groups the template covered the entire document, with the remaining 14 groups containing a common section template. Among documents with templates, the number of templates per document ranged from 1 to 14. The mean and median number of templates per group was 5.9 and 5, respectively. The grouping method was successful in finding like documents containing templates. Of the groups of documents containing templates, the LCS module was successful in deciphering text belonging to the template and text that was extraneous. Major obstacles to improved performance included documents composed of multiple templates, templates that included other templates embedded within them, and variants of templates. We demonstrate proof of concept of the grouping and extraction method for identifying templates in electronic medical records in this pilot study and propose methods to improve performance and scale the approach up.
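The LCS step can be sketched in a few lines. Below is a minimal dynamic-programming version operating on word tokens, folded over a group of documents so that the surviving subsequence approximates the shared template; the tokenization and the binned word count clustering that forms the groups are simplified away, and the sample notes are invented for illustration.

```python
def lcs(a, b):
    """Longest common subsequence of two token lists (dynamic programming)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = (dp[i - 1][j - 1] + 1 if a[i - 1] == b[j - 1]
                        else max(dp[i - 1][j], dp[i][j - 1]))
    out, i, j = [], m, n
    while i > 0 and j > 0:              # backtrack to recover the tokens
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

def group_template(docs):
    """Fold LCS over a group; the survivor approximates the shared template."""
    template = docs[0]
    for doc in docs[1:]:
        template = lcs(template, doc)
    return template

notes = ["CHIEF COMPLAINT : pain HISTORY : fell at home".split(),
         "CHIEF COMPLAINT : cough HISTORY : smoker".split()]
print(" ".join(group_template(notes)))   # -> CHIEF COMPLAINT : HISTORY :
```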
Algorithm Visualization in Teaching Practice
ERIC Educational Resources Information Center
Törley, Gábor
2014-01-01
This paper presents the history of algorithm visualization (AV), highlighting teaching-methodology aspects. A combined, two-group pedagogical experiment will be presented as well, which measured the efficiency and the impact on the abstract thinking of AV. According to the results, students, who learned with AV, performed better in the experiment.
An Efficient Optimization Method for Solving Unsupervised Data Classification Problems.
Shabanzadeh, Parvaneh; Yusof, Rubiyah
2015-01-01
Unsupervised data classification (or clustering) analysis is one of the most useful tools and a descriptive task in data mining that seeks to classify homogeneous groups of objects based on similarity, and it is used in many medical disciplines and various applications. In general, there is no single algorithm that is suitable for all types of data, conditions, and applications. Each algorithm has its own advantages, limitations, and deficiencies. Hence, research on novel and effective approaches for unsupervised data classification is still active. In this paper a heuristic algorithm, the Biogeography-Based Optimization (BBO) algorithm, was adapted for data clustering problems by modifying the main operators of the BBO algorithm, which is inspired by the natural biogeographic distribution of different species. Similar to other population-based algorithms, the BBO algorithm starts with an initial population of candidate solutions to an optimization problem and an objective function that is calculated for them. To evaluate the performance of the proposed algorithm, an assessment was carried out on six medical and real-life datasets, and the algorithm was compared with eight well-known and recent unsupervised data classification algorithms. Numerical results demonstrate that the proposed evolutionary optimization algorithm is efficient for unsupervised data classification.
Onukwugha, Eberechukwu; Qi, Ran; Jayasekera, Jinani; Zhou, Shujia
2016-02-01
Prognostic classification approaches are commonly used in clinical practice to predict health outcomes. However, there has been limited focus on use of the general approach for predicting costs. We applied a grouping algorithm designed for large-scale data sets and multiple prognostic factors to investigate whether it improves cost prediction among older Medicare beneficiaries diagnosed with prostate cancer. We analysed the linked Surveillance, Epidemiology and End Results (SEER)-Medicare data, which included data from 2000 through 2009 for men diagnosed with incident prostate cancer between 2000 and 2007. We split the survival data into two data sets (D0 and D1) of equal size. We trained the classifier of the Grouping Algorithm for Cancer Data (GACD) on D0 and tested it on D1. The prognostic factors included cancer stage, age, race and performance status proxies. We calculated the average difference between observed D1 costs and predicted D1 costs at 5 years post-diagnosis with and without the GACD. The sample included 110,843 men with prostate cancer. The median age of the sample was 74 years, and 10% were African American. The average difference (mean absolute error [MAE]) per person between the real and predicted total 5-year cost was US$41,525 (MAE US$41,790; 95% confidence interval [CI] US$41,421-42,158) with the GACD and US$43,113 (MAE US$43,639; 95% CI US$43,062-44,217) without the GACD. The 5-year cost prediction without grouping resulted in a sample overestimate of US$79,544,508. The grouping algorithm developed for complex, large-scale data improves the prediction of 5-year costs. The prediction accuracy could be improved by utilization of a richer set of prognostic factors and refinement of categorical specifications.
Sideroudi, Haris; Labiris, Georgios; Georgantzoglou, Kimon; Ntonti, Panagiota; Siganos, Charalambos; Kozobolis, Vassilios
2017-07-01
To develop an algorithm for the Fourier analysis of posterior corneal videokeratographic data and to evaluate the derived parameters in the diagnosis of Subclinical Keratoconus (SKC) and Keratoconus (KC). This was a cross-sectional, observational study that took place in the Eye Institute of Thrace, Democritus University, Greece. Eighty eyes formed the KC group, 55 eyes formed the SKC group, while 50 normal eyes populated the control group. A self-developed algorithm in Visual Basic for Microsoft Excel performed a Fourier series harmonic analysis of the posterior corneal sagittal curvature data. The algorithm decomposed the obtained curvatures into a spherical component, regular astigmatism, asymmetry and higher order irregularities for the averaged central 4 mm and for each individual ring separately (1, 2, 3 and 4 mm). The obtained values were evaluated for their diagnostic capacity using receiver operating characteristic (ROC) curves. Logistic regression was attempted for the identification of a combined diagnostic model. Significant differences were detected in regular astigmatism, asymmetry and higher order irregularities among groups. For the SKC group, the parameters with high diagnostic ability (AUC > 90%) were the higher order irregularities, the asymmetry and the regular astigmatism, mainly in the corneal periphery. Higher predictive accuracy was identified using diagnostic models that combined the asymmetry, regular astigmatism and higher order irregularities in the averaged 3 and 4 mm areas (AUC: 98.4%, Sensitivity: 91.7%, Specificity: 100%). Fourier decomposition of posterior keratometric data provides parameters with high accuracy in differentiating SKC from normal corneas and should be included in the prompt diagnosis of KC.
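The harmonic decomposition of one keratometric ring can be sketched directly with an FFT: the 0th harmonic is the spherical component, the 1st corresponds to asymmetry (decentration/tilt), the 2nd to regular astigmatism, and the remainder to higher order irregularities. This follows the standard convention rather than the authors' Visual Basic code, and the toy ring below is synthetic.

```python
import numpy as np

def ring_components(k_ring):
    """Decompose one ring of sagittal curvature samples (equally spaced in
    angle) into spherical, asymmetry, astigmatism and higher-order parts."""
    k = np.asarray(k_ring, dtype=float)
    c = np.fft.rfft(k) / len(k)
    spherical    = c[0].real                 # 0th harmonic: mean curvature
    asymmetry    = 2 * np.abs(c[1])          # 1st harmonic: decentration/tilt
    astigmatism  = 2 * np.abs(c[2])          # 2nd harmonic: regular astigmatism
    higher_order = 2 * np.abs(c[3:]).sum()   # residual irregularity
    return spherical, asymmetry, astigmatism, higher_order

# toy ring: 43 D sphere plus a cos(2*theta) astigmatic term of amplitude 0.75
theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
print(ring_components(43 + 0.75 * np.cos(2 * theta)))  # ~ (43, 0, 0.75, 0)
```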
Russo, Vincenzo; Nigro, Gerardo; Rago, Anna; Antonio Papa, Andrea; Proietti, Riccardo; Della Cioppa, Nadia; Cristiano, Anna; Palladino, Alberto; Calabrò, Raffaele; Politano, Luisa
2013-01-01
The role that atrial pacing therapy plays in the atrial fibrillation (AF) burden is still unclear. The aim of the study was to evaluate the effect of the atrial preference pacing algorithm on AF burden in patients affected by Myotonic Dystrophy type 1 (DM1) followed for a long follow-up period. Sixty DM1 patients, implanted with a dual chamber pacemaker (PM) for first degree or symptomatic type 1/type 2 second degree atrio-ventricular blocks, were followed for 2 years after implantation by periodical examination. After 1 month of stabilization, they were randomized into two groups: 1) patients implanted with conventional dual-chamber pacing mode (DDDR group) and 2) patients implanted with DDDR plus the Atrial Preference Pacing (APP) algorithm (APP ON group). The results showed that the atrial tachycardia (AT)/AF burden was significantly reduced at 1 year of follow-up in the APP ON group (2122 ± 428 minutes vs 4127 ± 388 minutes, P = 0.03), with a further reduction at the end of the 2 year follow-up period (4652 ± 348 minutes vs 7564 ± 638 minutes, P = 0.005). The data reported here show that APP is an efficient algorithm to reduce the AT/AF burden in DM1 patients implanted with a dual chamber pacemaker. PMID:24803841
Webber, Bryant J; Casa, Douglas J; Beutler, Anthony I; Nye, Nathaniel S; Trueblood, Wesley E; O'Connor, Francis G
2016-04-01
Despite aggressive prevention programs and strategies, nontraumatic exertional sudden death events in military training continue to prove a difficult challenge for the Department of Defense. In November 2014, the 559th Medical Group at Joint Base San Antonio-Lackland, Texas, hosted a working group on sudden exertional death in military training. Their objectives were three-fold: (1) determine best practices to prevent sudden exertional death of military trainees, (2) determine best practices to establish safe and ethical training environments for military trainees with sickle cell trait, and (3) develop field-ready algorithms for managing military trainees who collapse during exertion. This article summarizes the major findings and recommendations of the working group.
Mortimer, Duncan; Segal, Leonie
2008-01-01
Algorithms for converting descriptive measures of health status into quality-adjusted life year (QALY) weights are now widely available, and their application in economic evaluation is increasingly commonplace. The objective of this study is to describe and compare existing conversion algorithms and to highlight issues bearing on the derivation and interpretation of the QALY-weights so obtained. Systematic review of algorithms for converting descriptive measures of health status into QALY-weights. The review identified a substantial body of literature comprising 46 derivation studies and 16 studies that provided evidence or commentary on the validity of conversion algorithms. Conversion algorithms were derived using 1 of 4 techniques: 1) transfer to utility regression, 2) response mapping, 3) effect size translation, and 4) "revaluing" outcome measures using preference-based scaling techniques. Although these techniques differ in their methodological/theoretical tradition, data requirements, and ease of derivation and application, the available evidence suggests that the sensitivity and validity of derived QALY-weights may be more dependent on the coverage and sensitivity of measures and the disease area/patient group under evaluation than on the technique used in derivation. Despite the recent proliferation of conversion algorithms, a number of questions bearing on the derivation and interpretation of derived QALY-weights remain unresolved. These unresolved issues suggest directions for future research in this area. In the meantime, analysts seeking guidance in selecting derived QALY-weights should consider the validity and feasibility of each conversion algorithm in the disease area and patient group under evaluation rather than restricting their choice to weights from a particular derivation technique.
A maximally stable extremal region based scene text localization method
NASA Astrophysics Data System (ADS)
Xiao, Chengqiu; Ji, Lixin; Gao, Chao; Li, Shaomei
2015-07-01
Text localization in natural scene images is an important prerequisite for many content-based image analysis tasks. This paper proposes a novel text localization algorithm. Firstly, a fast pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSER) as basic character candidates. Secondly, these candidates are filtered by using the properties of the fitting ellipse and the distribution properties of characters to exclude most non-characters. Finally, a new extremal region projection merging algorithm is designed to group character candidates into words. Experimental results show that the proposed method has an advantage in speed and achieves higher precision and recall rates than the latest published algorithms.
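As a rough sketch of the character-candidate stage, the snippet below extracts MSER with OpenCV and applies a crude size/aspect filter in place of the paper's ellipse-fitting and distribution filters; the input path and thresholds are hypothetical, and the projection merging step is omitted.

```python
import cv2

img = cv2.imread("scene.jpg")              # hypothetical input image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

mser = cv2.MSER_create()
regions, boxes = mser.detectRegions(gray)  # basic character candidates

# crude stand-in for the ellipse- and distribution-based character filters
candidates = [(x, y, w, h) for (x, y, w, h) in boxes
              if 0.1 < w / float(h) < 10.0 and h > 8]
for (x, y, w, h) in candidates:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite("candidates.jpg", img)
```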
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.
The FY17Q4 milestone of the ECP/VTK-m project includes the completion of a key-reduce scheduling mechanism, a spatial division algorithm, an algorithm for basic particle advection, and the computation of smoothed surface normals. With the completion of this milestone, we are able to, respectively, more easily group like elements (a common visualization algorithm operation), provide the fundamentals for geometric search structures, provide the fundamentals for many flow visualization algorithms, and provide more realistic rendering of surfaces approximated with facets.
A Log-Scaling Fault Tolerant Agreement Algorithm for a Fault Tolerant MPI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hursey, Joshua J; Naughton, III, Thomas J; Vallee, Geoffroy R
The lack of fault tolerance is becoming a limiting factor for application scalability in HPC systems. MPI does not provide standardized fault tolerance interfaces and semantics. The MPI Forum's Fault Tolerance Working Group is proposing a collective fault tolerant agreement algorithm for the next MPI standard. Such algorithms play a central role in many fault tolerant applications. This paper combines a log-scaling two-phase commit agreement algorithm with a reduction operation to provide the necessary functionality for the new collective without any additional messages. Error handling mechanisms are described that preserve the fault tolerance properties while maintaining overall scalability.
Online clustering algorithms for radar emitter classification.
Liu, Jun; Lee, Jim P Y; Li, Lingjie; Luo, Zhi-Quan; Wong, K Max
2005-08-01
Radar emitter classification is a special application of data clustering for classifying unknown radar emitters from received radar pulse samples. The main challenges of this task are the high dimensionality of radar pulse samples, small sample group size, and closely located radar pulse clusters. In this paper, two new online clustering algorithms are developed for radar emitter classification: One is model-based using the Minimum Description Length (MDL) criterion and the other is based on competitive learning. Computational complexity is analyzed for each algorithm and then compared. Simulation results show the superior performance of the model-based algorithm over competitive learning in terms of better classification accuracy, flexibility, and stability.
Evaluation of a Delay-Doppler Imaging Algorithm Based on the Wigner-Ville Distribution
1989-10-18
Technical Report 855, Group 52, 18 October 1989; approved for public release. The report evaluates a delay-Doppler imaging algorithm based on the Wigner-Ville distribution (WVD) and includes a partial list of WVD properties, among them the duality obtained by exchanging the frequency and time variables.
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; ...
2015-12-29
In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
Research on the precise positioning of customers in large data environment
NASA Astrophysics Data System (ADS)
Zhou, Xu; He, Lili
2018-04-01
Customer positioning has always been a problem that enterprises focus on. In this paper, the FCM clustering algorithm is used to cluster customer groups. However, the traditional FCM clustering algorithm is susceptible to the influence of the initial clustering centers and easily falls into local optima; this weakness of FCM is addressed with the grey wolf optimization algorithm (GWO) to achieve efficient and accurate handling of large volumes of retailer data.
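A minimal plain FCM sketch is shown below for reference; the GWO-based center initialization that the paper proposes to escape local optima is replaced by random initialization, so this reproduces only the baseline being improved upon, and the customer data are placeholders.

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means. The GWO-based center initialization proposed in
    the paper is replaced here by random memberships for brevity."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                # fuzzy membership matrix
    p = 2.0 / (m - 1.0)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** p * np.sum(d ** (-p), axis=1, keepdims=True))
    return centers, U

X = np.random.rand(200, 3)        # placeholder customer feature vectors
centers, U = fcm(X, c=4)
labels = U.argmax(axis=1)         # hard cluster labels from fuzzy memberships
```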
Modeling Group Interactions via Open Data Sources
2011-08-30
data. The state-of-the-art search engines are designed to support general query-specific search and are not suitable for finding disconnected online groups. The...groups, (2) developing innovative mathematical and statistical models and efficient algorithms that leverage existing search engines and employ
Wong, Brian J F; Karimi, Koohyar; Devcic, Zlatko; McLaren, Christine E; Chen, Wen-Pin
2008-06-01
The objectives of this study were to: 1) determine if a genetic algorithm in combination with morphing software can be used to evolve more attractive faces; and 2) evaluate whether this approach can be used as a tool to define or identify the attributes of the ideal attractive face. Basic research study incorporating focus group evaluations. Digital images were acquired of 250 female volunteers (18-25 y). Randomly selected images were used to produce a parent generation (P) of 30 synthetic faces using morphing software. Then, a focus group of 17 trained volunteers (18-25 y) scored each face on an attractiveness scale ranging from 1 (unattractive) to 10 (attractive). A genetic algorithm was used to select 30 new pairs from the parent generation, and these were morphed using software to produce a new first generation (F1) of faces. The F1 faces were scored by the focus group, and the process was repeated for a total of four iterations of the algorithm. The algorithm mimics natural selection by using the attractiveness score as the selection pressure; the more attractive faces are more likely to morph. All five generations (P-F4) were then scored by three focus groups: a) surgeons (n = 12), b) cosmetology students (n = 44), and c) undergraduate students (n = 44). Morphometric measurements were made of 33 specific features on each of the 150 synthetic faces, and correlated with attractiveness scores using univariate and multivariate analysis. The average facial attractiveness scores increased with each generation and were 3.66 (±0.60), 4.59 (±0.73), 5.50 (±0.62), 6.23 (±0.31), and 6.39 (±0.24) for the P and F1-F4 generations, respectively. Histograms of attractiveness score distributions show a significant shift in the skew of each curve toward more attractive faces with each generation. Univariate analysis identified nasal width, eyebrow arch height, and lip thickness as being significantly correlated with attractiveness scores. Multivariate analysis identified a similar collection of morphometric measures. No correlation with more commonly accepted measures, such as the lengths of facial thirds or fifths, was identified. When images are examined as a montage (by generation), clear distinct trends are identified: oval shaped faces, distinct arched eyebrows, and full lips predominate. Faces evolve to approximate the guidelines suggested by classical canons. F3 and F4 generation faces look profoundly similar. The statistical and qualitative analysis indicates that the algorithm and methodology succeed in generating successively more attractive faces. The use of genetic algorithms in combination with morphing software and traditional focus-group derived attractiveness scores can be used to evolve attractive synthetic faces. We have demonstrated that the evolution of attractive faces can be mimicked in software. Genetic algorithms and morphing provide a robust alternative to traditional approaches rooted in comparing attractiveness scores with a series of morphometric measurements in human subjects.
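The selection-and-morph loop can be sketched compactly if one idealizes "morphing" as averaging feature vectors, with the focus-group attractiveness scores acting as fitness-proportional selection pressure; the feature dimensionality and random data below are placeholders, and the study's actual morphing software is not reproduced.

```python
import numpy as np

def next_generation(faces, scores, n_children, rng):
    """One selection-and-morph iteration: attractiveness scores act as
    fitness-proportional selection pressure, and 'morphing' is idealized
    as averaging the two parents' feature vectors."""
    p = np.asarray(scores, dtype=float)
    p /= p.sum()
    children = []
    for _ in range(n_children):
        i, j = rng.choice(len(faces), size=2, replace=False, p=p)
        children.append((faces[i] + faces[j]) / 2.0)   # morph two parents
    return np.array(children)

rng = np.random.default_rng(0)
faces = rng.normal(size=(30, 33))      # 30 faces x 33 morphometric features
scores = rng.uniform(1, 10, size=30)   # focus-group attractiveness ratings
f1 = next_generation(faces, scores, n_children=30, rng=rng)
```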
Information-based management mode based on value network analysis for livestock enterprises
NASA Astrophysics Data System (ADS)
Liu, Haoqi; Lee, Changhoon; Han, Mingming; Su, Zhongbin; Padigala, Varshinee Anu; Shen, Weizheng
2018-01-01
With the development of computer and IT technologies, enterprise management has gradually become information-based. Moreover, due to poor technical competence and non-uniform management, most breeding enterprises show a lack of organisation in data collection and management. In addition, low levels of efficiency result in increasing production costs. This paper adopts Struts2 to construct an information-based management system for standardised and normalised management of the production process in beef cattle breeding enterprises. We present a radio-frequency identification (RFID) system, studying multiple-tag anti-collision via a dynamic grouping ALOHA algorithm. This algorithm builds on the existing ALOHA algorithm with improved dynamic grouping and is characterised by a high throughput rate: it can reach a throughput 42% higher than that of the general ALOHA algorithm. As the number of tags changes, the system throughput remains relatively stable.
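For intuition about why grouping helps, the sketch below Monte-Carlo-simulates plain framed slotted ALOHA, whose throughput peaks near 1/e when the frame size matches the tag count; the dynamic grouping policy that the paper layers on top (splitting tags into frame-sized groups) is an assumption not reproduced here.

```python
import numpy as np

def slotted_aloha_throughput(n_tags, n_slots, trials=2000, seed=0):
    """Monte Carlo throughput of framed slotted ALOHA: a slot is productive
    only when exactly one tag transmits in it."""
    rng = np.random.default_rng(seed)
    ok = 0
    for _ in range(trials):
        slots = rng.integers(0, n_slots, size=n_tags)  # each tag picks a slot
        ok += np.count_nonzero(np.bincount(slots, minlength=n_slots) == 1)
    return ok / (trials * n_slots)

# Throughput peaks near 1/e ~ 0.368 when the frame size matches the tag count;
# a grouping scheme keeps the effective tag count near that operating point.
print(slotted_aloha_throughput(n_tags=64, n_slots=64))
```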
3-D Image Encryption Based on Rubik's Cube and RC6 Algorithm
NASA Astrophysics Data System (ADS)
Helmy, Mai; El-Rabaie, El-Sayed M.; Eldokany, Ibrahim M.; El-Samie, Fathi E. Abd
2017-12-01
A novel encryption algorithm based on the 3-D Rubik's cube is proposed in this paper to achieve 3D encryption of a group of images. The proposed encryption algorithm begins with RC6 as a first step for encrypting multiple images separately. After that, the obtained encrypted images are further encrypted with the 3-D Rubik's cube, with the RC6 encrypted images used as the faces of the cube. From the perspective of image encryption, the RC6 algorithm adds a degree of diffusion, while the Rubik's cube algorithm adds a degree of permutation. The simulation results demonstrate that the proposed encryption algorithm is efficient, and it exhibits strong robustness and security. The encrypted images are further transmitted over a wireless Orthogonal Frequency Division Multiplexing (OFDM) system and decrypted at the receiver side. Evaluation of the quality of the decrypted images at the receiver side reveals good results.
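The permutation stage can be sketched as Rubik's-cube-style circular shifts of image rows and columns; the RC6 diffusion stage and the mapping of images onto cube faces are omitted, and the key arrays below stand in for a real key schedule.

```python
import numpy as np

def rubik_permute(img, key_rows, key_cols):
    """Permutation stage: circularly shift each row, then each column,
    by key-dependent offsets (key arrays stand in for a real key schedule)."""
    out = img.copy()
    for r, k in enumerate(key_rows):
        out[r] = np.roll(out[r], k)
    for c, k in enumerate(key_cols):
        out[:, c] = np.roll(out[:, c], k)
    return out

def rubik_inverse(img, key_rows, key_cols):
    out = img.copy()
    for c, k in enumerate(key_cols):        # undo columns first
        out[:, c] = np.roll(out[:, c], -k)
    for r, k in enumerate(key_rows):
        out[r] = np.roll(out[r], -k)
    return out

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(8, 8))
kr, kc = rng.integers(0, 8, size=8), rng.integers(0, 8, size=8)
assert np.array_equal(img, rubik_inverse(rubik_permute(img, kr, kc), kr, kc))
```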
Intelligent fuzzy approach for fast fractal image compression
NASA Astrophysics Data System (ADS)
Nodehi, Ali; Sulong, Ghazali; Al-Rodhaan, Mznah; Al-Dhelaan, Abdullah; Rehman, Amjad; Saba, Tanzila
2014-12-01
Fractal image compression (FIC) is recognized as an NP-hard problem, and it suffers from a high number of mean square error (MSE) computations. In this paper, a two-phase algorithm is proposed to reduce the MSE computation of FIC. In the first phase, range and domain blocks are arranged based on an edge property. In the second, the imperialist competitive algorithm (ICA) is used according to the classified blocks. For maintaining the quality of the retrieved image and accelerating the algorithm's operation, we divided the solutions into two groups: developed countries and undeveloped countries. Simulations were carried out to evaluate the performance of the developed approach. The promising results thus achieved exhibit performance better than genetic algorithm (GA)-based and full-search algorithms in terms of decreasing the number of MSE computations. The proposed algorithm reduced the number of MSE computations, running 463 times faster than the full-search algorithm, while the retrieved image quality did not change considerably.
Multi-robot task allocation based on two dimensional artificial fish swarm algorithm
NASA Astrophysics Data System (ADS)
Zheng, Taixiong; Li, Xueqin; Yang, Liangyi
2007-12-01
The problem of task allocation for multiple robots is to allocate more relative-tasks to less relative-robots so as to minimize the processing time of these tasks. In order to obtain an optimal multi-robot task allocation scheme, a two-dimensional artificial fish swarm algorithm based approach is proposed in this paper. In this approach, the normal artificial fish is extended to be two-dimensional: each vector of the primary artificial fish is extended to an m-dimensional vector, so that each vector can express a group of tasks. By redefining the distance between artificial fish and the center of artificial fish, the behavior of the two-dimensional fish is designed and the task allocation algorithm based on the two-dimensional artificial fish swarm algorithm is put forward. Finally, the proposed algorithm is applied to the problem of multi-robot task allocation and compared with GA- and SA-based algorithms. Simulation and comparison results show that the proposed algorithm is effective.
Detection of unmanned aerial vehicles using a visible camera system.
Hu, Shuowen; Goldman, Geoffrey H; Borel-Donohue, Christoph C
2017-01-20
Unmanned aerial vehicles (UAVs) flown by adversaries are an emerging asymmetric threat to homeland security and the military. To help address this threat, we developed and tested a computationally efficient UAV detection algorithm consisting of horizon finding, motion feature extraction, blob analysis, and coherence analysis. We compare the performance of this algorithm against two variants, one using the difference image intensity as the motion features and another using higher-order moments. The proposed algorithm and its variants are tested using field test data of a group 3 UAV acquired with a panoramic video camera in the visible spectrum. The performance of the algorithms was evaluated using receiver operating characteristic curves. The results show that the proposed approach had the best performance compared to the two algorithmic variants.
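A bare-bones version of the motion-feature and blob-analysis stages might look as follows with OpenCV (frame differencing plus connected components); the horizon finding and coherence analysis stages, the video path, and the thresholds are all assumptions rather than the authors' implementation.

```python
import cv2

cap = cv2.VideoCapture("panorama.mp4")      # hypothetical field-test video
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
detections = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)          # difference-image motion feature
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    # keep small moving blobs as UAV candidates (area bounds are a guess)
    detections.append([stats[i] for i in range(1, n)
                       if 4 < stats[i][cv2.CC_STAT_AREA] < 500])
    prev = gray
```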
[New calculation algorithms in brachytherapy for iridium 192 treatments].
Robert, C; Dumas, I; Martinetti, F; Chargari, C; Haie-Meder, C; Lefkopoulos, D
2018-05-18
Since 1995, brachytherapy dosimetry protocols have followed the methodology recommended by Task Group 43. This methodology, which has the advantage of being fast, is based on several approximations that are not always valid in clinical conditions. Model-based dose calculation algorithms have recently emerged in treatment planning stations and are considered a major evolution, allowing for consideration of the patient's finite dimensions, tissue heterogeneities and the presence of high atomic number materials in applicators. In 2012, a report from the American Association of Physicists in Medicine Radiation Therapy Task Group 186 reviewed these models and made recommendations for their clinical implementation. This review focuses on the use of model-based dose calculation algorithms in the context of iridium 192 treatments. After a description of these algorithms and their clinical implementation, a summary of the main questions raised by these new methods is given. Considerations regarding the choice of the medium used for the dose specification and the recommended methodology for assigning material characteristics are especially described. In the last part, recent concrete examples from the literature illustrate the capabilities of these new algorithms on clinical cases.
Hemmateenejad, Bahram; Akhond, Morteza; Miri, Ramin; Shamsipur, Mojtaba
2003-01-01
A QSAR algorithm, principal component-genetic algorithm-artificial neural network (PC-GA-ANN), has been applied to a set of newly synthesized calcium channel blockers, which are of special interest because of their role in cardiac diseases. A data set of 124 1,4-dihydropyridines bearing different ester substituents at the C-3 and C-5 positions of the dihydropyridine ring and nitroimidazolyl, phenylimidazolyl, and methylsulfonylimidazolyl groups at the C-4 position with known Ca(2+) channel binding affinities was employed in this study. Ten different sets of descriptors (837 descriptors in total) were calculated for each molecule. Principal component analysis was used to compress the descriptor groups into principal components. The most significant descriptors of each set were selected and used as input for the ANN. The genetic algorithm (GA) was used to select the best set of extracted principal components. A feed-forward artificial neural network trained with the error back-propagation algorithm was used to model the nonlinear relationship between the selected principal components and the biological activity of the dihydropyridines. A comparison between PC-GA-ANN and routine PC-ANN shows that the former yields better prediction ability.
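A simplified sketch of the PC-GA-ANN pipeline using scikit-learn: descriptors are compressed by PCA, a small genetic algorithm selects a subset of principal components, and an MLP regressor maps them to activity. The data are random stand-ins, and the population size, rates, and CV-score fitness are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(124, 60))                  # stand-in descriptor matrix
y = X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=124)
Z = PCA(n_components=15).fit_transform(X)       # compress descriptors into PCs

def fitness(mask):                              # CV score of an ANN on selected PCs
    if mask.sum() == 0:
        return -np.inf
    net = MLPRegressor(hidden_layer_sizes=(4,), max_iter=500, random_state=0)
    return cross_val_score(net, Z[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(10, 15))         # GA population of PC subsets
for gen in range(5):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-5:]]      # truncation selection
    children = [parents[-1].copy()]             # elitism: carry over the fittest
    while len(children) < 10:
        a, b = parents[rng.integers(5)], parents[rng.integers(5)]
        cut = rng.integers(1, 15)               # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(15) < 0.05            # bit-flip mutation
        child[flip] = 1 - child[flip]
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected principal components:", np.nonzero(best)[0])
```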
An efficient coding algorithm for the compression of ECG signals using the wavelet transform.
Rajoub, Bashar A
2002-04-01
A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, and the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, and each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant and a binary zero if it is insignificant. Compression is achieved by (1) using a variable-length code based on run-length encoding to compress the significance map and (2) using direct binary representation for the significant coefficients. The ability of the coding algorithm to compress ECG signals was investigated by compressing and decompressing test signals. The proposed algorithm was compared with direct and wavelet-based compression algorithms and showed superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root mean square difference as low as 1.08%.
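A minimal sketch of the coding pipeline, assuming PyWavelets: a DWT, per-band thresholding by energy packing efficiency (EPE), and a binary significance map compressed with run-length encoding. The wavelet choice, band grouping, and signal are illustrative stand-ins, not the paper's exact configuration.

```python
import numpy as np
import pywt

def threshold_by_epe(c, epe=0.99):
    """Keep the largest coefficients holding `epe` of the band's energy."""
    order = np.argsort(np.abs(c))[::-1]
    energy = np.cumsum(c[order] ** 2) / np.sum(c ** 2)
    keep = order[: np.searchsorted(energy, epe) + 1]
    out = np.zeros_like(c)
    out[keep] = c[keep]
    return out

def run_length(bits):
    """Run-length encode the binary significance map."""
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count)); count = 1
    runs.append((bits[-1], count))
    return runs

x = np.sin(np.linspace(0, 20 * np.pi, 1024))          # stand-in for a preprocessed ECG
coeffs = pywt.wavedec(x, "bior4.4", level=5)          # DWT of the signal
thresholded = [threshold_by_epe(c) for c in coeffs]
sig_map = np.concatenate([(c != 0).astype(int) for c in thresholded])
print(len(run_length(sig_map.tolist())), "runs for", sig_map.size, "coefficients")
```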
Segmenting texts from outdoor images taken by mobile phones using color features
NASA Astrophysics Data System (ADS)
Liu, Zongyi; Zhou, Hanning
2011-01-01
Recognizing text in images taken by mobile phones with low resolution has wide applications. It has been shown that good image binarization can substantially improve the performance of OCR engines. In this paper, we present a framework to segment text from outdoor images taken by mobile phones using color features. The framework consists of three steps: (i) initial processing, including image enhancement, binarization and noise filtering, where we binarize the input images in each RGB channel and apply component-level noise filtering; (ii) grouping components into blocks using color features, where we compute component similarities by dynamically adjusting the weights of the RGB channels and merge groups hierarchically; and (iii) block selection, where we use run-length features and choose a Support Vector Machine (SVM) as the classifier. We tested the algorithm using 13 outdoor images taken by an old-style LG-64693 mobile phone with 640x480 resolution. We compared the segmentation results with Tsar's algorithm, a state-of-the-art camera text detection algorithm, and show that our algorithm is more robust, particularly in terms of false alarm rates. In addition, we evaluated the impact of our algorithm on Abbyy's FineReader, one of the most popular commercial OCR engines on the market.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Connor, D; Nguyen, D; Voronenko, Y
Purpose: Integrated beam orientation and fluence map optimization is expected to be the foundation of robust automated planning but existing heuristic methods do not promise global optimality. We aim to develop a new method for beam angle selection in 4π non-coplanar IMRT systems based on solving (globally) a single convex optimization problem, and to demonstrate the effectiveness of the method by comparison with a state of the art column generation method for 4π beam angle selection. Methods: The beam angle selection problem is formulated as a large scale convex fluence map optimization problem with an additional group sparsity term that encourages most candidate beams to be inactive. The optimization problem is solved using an accelerated first-order method, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The beam angle selection and fluence map optimization algorithm is used to create non-coplanar 4π treatment plans for several cases (including head and neck, lung, and prostate cases) and the resulting treatment plans are compared with 4π treatment plans created using the column generation algorithm. Results: In our experiments the treatment plans created using the group sparsity method meet or exceed the dosimetric quality of plans created using the column generation algorithm, which was shown superior to clinical plans. Moreover, the group sparsity approach converges in about 3 minutes in these cases, as compared with runtimes of a few hours for the column generation method. Conclusion: This work demonstrates the first non-greedy approach to non-coplanar beam angle selection, based on convex optimization, for 4π IMRT systems. The method given here improves both treatment plan quality and runtime as compared with a state of the art column generation algorithm. When the group sparsity term is set to zero, we obtain an excellent method for fluence map optimization, useful when beam angles have already been selected. NIH R43CA183390, NIH R01CA188300, Varian Medical Systems; Part of this research took place while D. O’Connor was a summer intern at RefleXion Medical.
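A minimal NumPy sketch of the core computation: minimize 0.5*||Ax - d||^2 + lam * sum_b ||x_b||_2 with FISTA, where each group b collects the beamlets of one candidate beam and group soft-thresholding drives whole beams to zero. The dose matrix, prescription, weights, and non-negativity handling (a simple projection) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_beams, per_beam = 10, 8
A = rng.random((60, n_beams * per_beam))      # dose-influence matrix (stand-in)
d = rng.random(60)                            # prescribed dose (stand-in)
lam = 2.0
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient

def prox_group(x, t):
    """Group soft-thresholding: shrinks whole beams toward zero."""
    x = x.copy()
    for b in range(n_beams):
        g = slice(b * per_beam, (b + 1) * per_beam)
        nrm = np.linalg.norm(x[g])
        x[g] = 0 if nrm <= t else (1 - t / nrm) * x[g]
    return x

x = z = np.zeros(A.shape[1]); s = 1.0
for _ in range(300):                          # FISTA iterations
    x_new = prox_group(z - (A.T @ (A @ z - d)) / L, lam / L)
    x_new = np.maximum(x_new, 0)              # non-negative fluence (simple projection)
    s_new = (1 + np.sqrt(1 + 4 * s * s)) / 2  # momentum update
    z = x_new + ((s - 1) / s_new) * (x_new - x)
    x, s = x_new, s_new

active = [b for b in range(n_beams)
          if np.linalg.norm(x[b * per_beam:(b + 1) * per_beam]) > 1e-8]
print("selected beams:", active)
```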
A distributed algorithm for machine learning
NASA Astrophysics Data System (ADS)
Chen, Shihong
2018-04-01
This paper considers a distributed learning problem in which a group of machines in a connected network, each holding its own local dataset, aims to reach consensus on an optimal model by exchanging information only with its neighbors and without transmitting data. A distributed algorithm is proposed to solve this problem under appropriate assumptions.
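A minimal sketch of one standard instance of this setting, consensus-based distributed gradient descent: each machine takes gradient steps on its local least-squares data and mixes its model with its neighbors' via a doubly stochastic weight matrix, so no raw data ever leave a machine. The ring topology, step size, and mixing weights are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_machines, dim = 5, 3
w_true = rng.normal(size=dim)
data = []
for _ in range(n_machines):                   # each machine's private dataset
    X = rng.normal(size=(20, dim))
    data.append((X, X @ w_true + 0.01 * rng.normal(size=20)))

# Ring topology: mix with the two neighbors (doubly stochastic weights).
W = np.zeros((n_machines, n_machines))
for i in range(n_machines):
    W[i, i] = 0.5
    W[i, (i - 1) % n_machines] = W[i, (i + 1) % n_machines] = 0.25

models = np.zeros((n_machines, dim))
step = 0.01
for _ in range(500):
    grads = np.array([X.T @ (X @ m - y) / len(y)
                      for m, (X, y) in zip(models, data)])
    models = W @ models - step * grads        # mix with neighbors, then descend
print("disagreement:", np.abs(models - models.mean(0)).max())
print("error:", np.linalg.norm(models.mean(0) - w_true))
```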
Ladar range image denoising by a nonlocal probability statistics algorithm
NASA Astrophysics Data System (ADS)
Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi
2013-01-01
Based on the characteristics of coherent ladar range images and on nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF), while NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are found by block matching and form a group. Pixels in the group are analyzed by probability statistics, and the gray value with maximum probability is used as the estimate of the current pixel. Simulated range images of coherent ladar with different carrier-to-noise ratios and a real coherent ladar range image with 8 gray-scales are denoised by this algorithm, and the results are compared with those of the median filter, the multi-template order mean filter, NLM, the median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. The range abnormality noise and Gaussian noise in coherent ladar range images are effectively suppressed by NLPS.
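A minimal sketch of the NLPS idea on a 1-D range profile: gather the pixels of blocks similar to the current block (block matching), histogram them, and take the gray value with maximum probability as the estimate. Block size, search window, similarity threshold, and bin count are illustrative assumptions.

```python
import numpy as np

def nlps_1d(img, half_block=2, search=20, tau=30.0, n_bins=8):
    out = img.copy().astype(float)
    n = len(img)
    for i in range(half_block, n - half_block):
        ref = img[i - half_block:i + half_block + 1].astype(float)
        group = []
        for j in range(max(half_block, i - search),
                       min(n - half_block, i + search)):
            cand = img[j - half_block:j + half_block + 1].astype(float)
            if np.mean((ref - cand) ** 2) < tau:   # block matching
                group.extend(cand)
        hist, edges = np.histogram(group, bins=n_bins)
        k = np.argmax(hist)                        # gray value of maximum probability
        out[i] = 0.5 * (edges[k] + edges[k + 1])
    return out

rng = np.random.default_rng(2)
clean = np.repeat([40.0, 120.0, 200.0], 60)        # piecewise-constant range profile
noisy = clean + rng.normal(scale=10, size=clean.size)
noisy[rng.integers(0, clean.size, 8)] = 255        # range-anomaly outliers
print("mean abs error after NLPS:", np.abs(nlps_1d(noisy) - clean).mean())
```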
Ohura, Takehiko; Sanada, Hiromi; Mino, Yoshio
2004-01-01
In recent years, the concept of cost-effectiveness, including medical delivery and health service fee systems, has become widespread in Japanese health care. In the field of pressure ulcer management, the recent introduction of penalty subtraction in the care fee system emphasizes the need for prevention and cost-effective care of pressure ulcers. Previous cost-effectiveness research on pressure ulcer management tended to focus only on "hardware" costs such as those for pharmaceuticals and medical supplies, while neglecting other cost aspects, particularly the cost of labor. Thus, cost-effectiveness in pressure ulcer care has not yet been fully established. To provide true cost-effectiveness data, a comparative prospective study was initiated in patients with stage II and III pressure ulcers. Because the pressure reduction mattress can strongly influence clinical outcome, the same type of pressure reduction mattress was used in all cases in the study. The cost analysis method used was Activity-Based Costing, which measures material and labor costs on a daily basis. A reduction in the Pressure Sore Status Tool (PSST) score was used to measure clinical effectiveness. Patients were divided into three groups based on the treatment method and on the use of a consistent algorithm of wound care: (1) the MC/A group, modern dressings with a treatment algorithm (control cohort); (2) the TC/A group, traditional care (ointment and gauze) with a treatment algorithm; and (3) the TC/NA group, traditional care (ointment and gauze) without a treatment algorithm. The results revealed that MC/A is more cost-effective than both TC/A and TC/NA. This suggests that appropriate utilization of modern dressing materials and a pressure ulcer care algorithm would contribute to reduced health care costs, improved clinical results, and, ultimately, greater cost-effectiveness.
Harman, David J; Ryder, Stephen D; James, Martin W; Jelpke, Matthew; Ottey, Dominic S; Wilkes, Emilie A; Card, Timothy R; Aithal, Guruprasad P; Guha, Indra Neil
2015-05-03
To assess the feasibility of a novel diagnostic algorithm targeting patients with risk factors for chronic liver disease in a community setting. Prospective cross-sectional study. Two primary care practices (adult patient population 10,479) in Nottingham, UK. Adult patients (aged 18 years or over) fulfilling one or more selected risk factors for developing chronic liver disease: (1) hazardous alcohol use, (2) type 2 diabetes or (3) persistently elevated alanine aminotransferase (ALT) liver function enzyme with negative serology. A serial biomarker algorithm, using a simple blood-based marker (aspartate aminotransferase:ALT ratio for hazardous alcohol users, BARD score for other risk groups) and subsequently liver stiffness measurement using transient elastography (TE). Diagnosis of clinically significant liver disease (defined as liver stiffness ≥8 kPa); definitive diagnosis of liver cirrhosis. We identified 920 patients with the defined risk factors of whom 504 patients agreed to undergo investigation. A normal blood biomarker was found in 62 patients (12.3%) who required no further investigation. Subsequently, 378 patients agreed to undergo TE, of whom 98 (26.8% of valid scans) had elevated liver stiffness. Importantly, 71/98 (72.4%) patients with elevated liver stiffness had normal liver enzymes and would be missed by traditional investigation algorithms. We identified 11 new patients with definite cirrhosis, representing a 140% increase in the number of diagnosed cases in this population. A non-invasive liver investigation algorithm based in a community setting is feasible to implement. Targeting risk factors using a non-invasive biomarker approach identified a substantial number of patients with previously undetected cirrhosis. The diagnostic algorithm utilised for this study can be found on clinicaltrials.gov (NCT02037867), and is part of a continuing longitudinal cohort study. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
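A hedged sketch of the serial triage logic described above: a simple blood marker first (AST:ALT ratio for hazardous alcohol users, BARD score for the other risk groups), then transient elastography with the study's 8 kPa cut-off for clinically significant disease. The function names and the "abnormal biomarker" cut-offs (AST:ALT of 0.8, BARD of 2) are illustrative assumptions, not values confirmed by the abstract.

```python
def needs_elastography(risk_group, ast, alt, bard_score):
    """First-stage blood biomarker; a normal result ends the work-up."""
    if risk_group == "hazardous_alcohol":
        return ast / alt >= 0.8        # illustrative AST:ALT threshold
    return bard_score >= 2             # illustrative BARD threshold

def significant_liver_disease(stiffness_kpa):
    return stiffness_kpa >= 8.0        # liver stiffness cut-off used in the study

# Example: a diabetic patient with BARD 3 proceeds to transient elastography.
if needs_elastography("type2_diabetes", ast=40, alt=50, bard_score=3):
    print("refer for TE; significant disease:", significant_liver_disease(9.2))
```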
NASA Astrophysics Data System (ADS)
Williams, C. R.
2012-12-01
The NASA Global Precipitation Mission (GPM) raindrop size distribution (DSD) Working Group is composed of NASA PMM Science Team members and is charged to "investigate the correlations between DSD parameters using Ground Validation (GV) data sets that support, or guide, the assumptions used in satellite retrieval algorithms." Correlations between DSD parameters can be used to constrain the unknowns and reduce the degrees of freedom in under-constrained satellite algorithms. Over the past two years, the GPM DSD Working Group has analyzed GV data and has found correlations between the mass-weighted mean raindrop diameter (Dm) and the mass distribution standard deviation (Sm) that follow a power-law relationship. This Dm-Sm power-law relationship appears to be robust and has been observed in surface disdrometer and vertically pointing radar observations. One benefit of a Dm-Sm power-law relationship is that a three-parameter DSD can be modeled with just two parameters: Dm and Nw, which determines the DSD amplitude. In order to incorporate observed DSD correlations into satellite algorithms, the GPM DSD Working Group is developing scattering and integral tables that can be used by satellite algorithms. Scattering tables describe the interaction of electromagnetic waves with individual particles to generate cross sections of backscattering, extinction, and scattering. Scattering tables are independent of the distribution of particles. Integral tables combine scattering table outputs with DSD parameters and DSD correlations to generate integrated normalized reflectivity, attenuation, scattering, emission, and asymmetry coefficients. Integral tables contain both frequency-dependent scattering properties and cloud microphysics. The GPM DSD Working Group has developed scattering tables for raindrops at both Dual Precipitation Radar (DPR) frequencies and at all GMI radiometer frequencies below 100 GHz. Scattering tables include Mie and T-matrix scattering with H- and V-polarization at the instrument view angles of nadir to 17 degrees (for DPR) and 48 and 53 degrees off nadir (for GMI). The GPM DSD Working Group is generating integral tables with GV-observed DSD correlations and is performing sensitivity and verification tests. One advantage of keeping scattering tables separate from integral tables is that research can progress on the electromagnetic scattering of particles independently of cloud microphysics research. Another advantage of keeping the tables separate is that multiple scattering tables will be needed for frozen precipitation; scattering tables are being developed for individual frozen particles based on habit, density and operating frequency. A third advantage of keeping scattering and integral tables separate is that this framework provides an opportunity to communicate GV findings about DSD correlations into integral tables, and thus into satellite algorithms.
Huang, Ting-Shuo; Huang, Shie-Shian; Shyu, Yu-Chiau; Lee, Chun-Hui; Jwo, Shyh-Chuan; Chen, Pei-Jer; Chen, Huang-Yang
2014-01-01
Procalcitonin (PCT)-based algorithms have been used to guide antibiotic therapy in several clinical settings. However, evidence supporting PCT-based algorithms for secondary peritonitis after emergency surgery is scant. In this study, we aimed to investigate whether a PCT-based algorithm could safely reduce antibiotic exposure in this population. From April 2012 to March 2013, patients with secondary peritonitis diagnosed at the emergency department who underwent emergency surgery were screened for eligibility. PCT levels were obtained pre-operatively, on post-operative days 1, 3, 5, and 7, and on subsequent days if needed. Antibiotics were discontinued if PCT was <1.0 ng/mL or had decreased by 80% versus day 1, with resolution of clinical signs. Primary endpoints were time to discontinuation of intravenous antibiotics for the first episode and adverse events. Historical controls were retrieved for propensity score matching. After matching, 30 patients in the PCT group and 60 in the control group were included for analysis. The median duration of antibiotic exposure in the PCT group was 3.4 days (interquartile range [IQR] 2.2 days), versus 6.1 days (IQR 3.2 days) in the control group (p < 0.001). The PCT algorithm significantly improved time to antibiotic discontinuation (p < 0.001, log-rank test). The rates of adverse events were comparable between the two groups. A multivariate-adjusted extended Cox model demonstrated that the PCT-based algorithm was significantly associated with an 87% reduction in the hazard of antibiotic exposure within 7 days (hazard ratio [HR] 0.13, 95% CI 0.07-0.21, p < 0.001), and a 68% reduction in hazard after 7 days (adjusted HR 0.32, 95% CI 0.11-0.99, p = 0.047). Advanced age, coexisting pulmonary diseases, and higher severity of illness were significantly associated with longer durations of antibiotic use. The PCT-based algorithm safely reduced antibiotic exposure in this study. Further randomized trials are needed to confirm our findings and incorporate cost-effectiveness analysis. Australian New Zealand Clinical Trials Registry ACTRN12612000601831.
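A minimal sketch of the discontinuation rule exactly as stated above: stop antibiotics when PCT is below 1.0 ng/mL or has fallen by at least 80% versus day 1, provided clinical signs have resolved. Purely illustrative of the decision logic, not clinical software.

```python
def stop_antibiotics(pct_today, pct_day1, signs_resolved):
    """PCT rule: PCT < 1.0 ng/mL, or an 80% drop from day 1, plus clinical resolution."""
    fell_80pct = pct_today <= 0.2 * pct_day1
    return signs_resolved and (pct_today < 1.0 or fell_80pct)

print(stop_antibiotics(pct_today=0.8, pct_day1=12.0, signs_resolved=True))  # True
print(stop_antibiotics(pct_today=3.0, pct_day1=12.0, signs_resolved=True))  # False
```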
Distributed autonomous systems: resource management, planning, and control algorithms
NASA Astrophysics Data System (ADS)
Smith, James F., III; Nguyen, ThanhVu H.
2005-05-01
Distributed autonomous systems, i.e., systems that have separated distributed components, each of which exhibits some degree of autonomy, are increasingly providing solutions to naval and other DoD problems. Recently developed control, planning and resource allocation algorithms for two types of distributed autonomous systems will be discussed. The first distributed autonomous system (DAS) to be discussed consists of a collection of unmanned aerial vehicles (UAVs) that are under fuzzy logic control. The UAVs fly and conduct meteorological sampling in a coordinated fashion determined by their fuzzy logic controllers to determine the atmospheric index of refraction. Once in flight, no human intervention is required. A fuzzy planning algorithm determines the optimal trajectory, sampling rate and pattern for the UAVs and an interferometer platform while taking into account risk, reliability, priority for sampling in certain regions, fuel limitations, mission cost, and related uncertainties. The real-time fuzzy control algorithm running on each UAV gives the UAV limited autonomy, allowing it to change course immediately without consulting with any commander, request other UAVs to help it, alter its sampling pattern and rate when observing interesting phenomena, or terminate the mission and return to base. The algorithms developed will be compared to a resource manager (RM) developed for another DAS problem related to electronic attack (EA). This RM is based on fuzzy logic and optimized by evolutionary algorithms. It allows a group of dissimilar platforms to use EA resources distributed throughout the group. For both DAS types, significant theoretical and simulation results will be presented.
Ranhel, João
2012-06-01
Spiking neurons can realize several computational operations when firing cooperatively. This is a prevalent notion, although the mechanisms are not yet understood. A way by which neural assemblies compute is proposed in this paper. It is shown how neural coalitions represent things (and world states), memorize them, and control their hierarchical relations in order to perform algorithms. It is described how neural groups perform statistical logic functions as they form assemblies. Neural coalitions can reverberate, becoming bistable loops. Such bistable neural assemblies become short- or long-term memories that represent the event that triggers them. In addition, assemblies can branch and dismantle other neural groups, generating new events that trigger other coalitions. Hence, such capabilities and the interaction among assemblies allow neural networks to create and control hierarchical cascades of causal activities, giving rise to parallel algorithms. Computing and algorithms are used here as in a nonstandard computation approach. In this sense, neural assembly computing (NAC) can be seen as a new class of spiking neural network machines. NAC can explain the following points: 1) how neuron groups represent things and states; 2) how they retain binary states in memories that do not require any plasticity mechanism; and 3) how branching, disbanding, and interaction among assemblies may result in algorithms and behavioral responses. Simulations were carried out and the results are in agreement with the hypothesis presented. A MATLAB code is available as a supplementary material.
NASA Astrophysics Data System (ADS)
Srivastava, D. C.
2016-12-01
Paleostress estimation from a group of heterogeneous fault-slip observations entails first the classification of the observations into homogeneous fault sets and then a separate inversion of each homogeneous set. This study combines these two issues into a nonlinear inverse problem and proposes a heuristic search method that inverts the heterogeneous fault-slip observations. The method estimates the different paleostress states in a group of heterogeneous fault-slip observations and classifies it into homogeneous sets as a byproduct. It uses the genetic algorithm operators elitism, selection, encoding, crossover and mutation. These processes translate into a guided search that finds successively fitter solutions and operates iteratively until the termination criterion is met and the globally fittest stress tensors are obtained. We explain the basic steps of the algorithm on a working example and demonstrate the validity of the method on several synthetic groups and a natural group of heterogeneous fault-slip observations. The method is independent of any user-defined bias or any entrapment of the solution in a local optimum. It succeeds even in difficult situations where other classification methods are found to fail.
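A generic sketch of the search loop described above (elitism, selection, crossover, mutation) applied to a stand-in misfit function; a real implementation would score each candidate reduced stress tensor against the fault-slip data and handle the classification into homogeneous sets. All settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def misfit(theta):                     # stand-in for the fault-slip misfit
    return np.sum((theta - np.array([0.3, -1.2, 0.7, 0.1])) ** 2)

pop = rng.uniform(-2, 2, size=(40, 4))                # encoded candidate solutions
for gen in range(200):
    fit = np.array([misfit(t) for t in pop])
    order = np.argsort(fit)
    elite = pop[order[:4]]                            # elitism: keep the fittest
    parents = pop[order[:20]]                         # truncation selection
    idx = rng.integers(0, 20, size=(36, 2))
    w = rng.random((36, 1))
    children = w * parents[idx[:, 0]] + (1 - w) * parents[idx[:, 1]]  # crossover
    children += rng.normal(scale=0.05, size=children.shape)          # mutation
    pop = np.vstack([elite, children])
print("fittest solution:", pop[np.argmin([misfit(t) for t in pop])])
```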
Strategies to improve the efficiency of celiac disease diagnosis in the laboratory.
González, Delia Almeida; de Armas, Laura García; Rodríguez, Itahisa Marcelino; Almeida, Ana Arencibia; García, Miriam García; Gannar, Fadoua; de León, Antonio Cabrera
2017-10-01
The demand for testing to detect celiac disease (CD) autoantibodies has increased, together with the cost per case diagnosed, resulting in the adoption of measures to restrict laboratory testing. We designed this study to determine whether opportunistic screening to detect CD-associated autoantibodies had advantages compared to efforts to restrict testing, and to identify the most cost-effective diagnostic strategy. We compared a group of 1678 patients in which autoantibody testing was restricted to cases in which the test referral was considered appropriate (G1) to a group of 2140 patients in which test referrals were not reviewed or restricted (G2). Two algorithms, A (quantifying IgA and tissue transglutaminase IgA [TG-IgA] in all patients) and B (quantifying only TG-IgA in all patients), were used in each group, and the cost-effectiveness of each strategy was calculated. TG-IgA autoantibodies were positive in 62 G1 patients and 69 G2 patients. Among those positive for tissue transglutaminase IgA and endomysial IgA autoantibodies, the proportion of patients with de novo autoantibodies was lower (p=0.028) in G1 (11/62) than in G2 (24/69). Algorithm B required fewer determinations than algorithm A in both G1 (2310 vs 3493; p<0.001) and G2 (2196 vs 4435; p<0.001). With algorithm B, the proportion of patients in whom IgA was tested was lower (p<0.001) in G2 (29/2140) than in G1 (617/1678). The lowest cost per case diagnosed (4.63 euros/patient) was found with algorithm B in G2. We conclude that opportunistic screening has advantages compared to efforts in the laboratory to restrict CD diagnostic testing. The most cost-effective strategy was based on the use of an appropriate algorithm. Copyright © 2017. Published by Elsevier B.V.
Application of neural networks to group technology
NASA Astrophysics Data System (ADS)
Caudell, Thomas P.; Smith, Scott D. G.; Johnson, G. C.; Wunsch, Donald C., II
1991-08-01
Adaptive resonance theory (ART) neural networks are being developed for application to the industrial engineering problem of group technology - the reuse of engineering designs. Two- and three-dimensional representations of engineering designs are input to ART-1 neural networks to produce groups or families of similar parts. These representations, in their basic form, amount to bit maps of the part and can become very large when the part is represented at high resolution. This paper describes an enhancement to an algorithmic form of ART-1 that allows it to operate directly on compressed input representations and to generate compressed memory templates. The performance of this compressed algorithm is compared to that of the regular algorithm on real engineering designs, and a significant savings in memory storage as well as a speed-up in execution is observed. In addition, a 'neural database' system under development is described. This system demonstrates the feasibility of training an ART-1 network to first cluster designs into families, and then to recall the family when presented with a similar design. This application is of large practical value to industry, making it possible to avoid duplication of design efforts.
2017-01-01
Co-expression networks have long been used as a tool for investigating the molecular circuitry governing biological systems. However, most algorithms for constructing co-expression networks were developed in the microarray era, before high-throughput sequencing—with its unique statistical properties—became the norm for expression measurement. Here we develop Bayesian Relevance Networks, an algorithm that uses Bayesian reasoning about expression levels to account for the differing levels of uncertainty in expression measurements between highly- and lowly-expressed entities, and between samples with different sequencing depths. It combines data from groups of samples (e.g., replicates) to estimate group expression levels and confidence ranges. It then computes uncertainty-moderated estimates of cross-group correlations between entities, and uses permutation testing to assess their statistical significance. Using large scale miRNA data from The Cancer Genome Atlas, we show that our Bayesian update of the classical Relevance Networks algorithm provides improved reproducibility in co-expression estimates and lower false discovery rates in the resulting co-expression networks. Software is available at www.perkinslab.ca. PMID:28817636
SASS: A symmetry adapted stochastic search algorithm exploiting site symmetry
NASA Astrophysics Data System (ADS)
Wheeler, Steven E.; Schleyer, Paul v. R.; Schaefer, Henry F.
2007-03-01
A simple symmetry adapted search algorithm (SASS) exploiting point group symmetry increases the efficiency of systematic explorations of complex quantum mechanical potential energy surfaces. In contrast to previously described stochastic approaches, which do not employ symmetry, candidate structures are generated within simple point groups, such as C2, Cs, and C2v. This facilitates efficient sampling of the (3N-6)-dimensional configuration space and increases the speed and effectiveness of quantum chemical geometry optimizations. Pople's concept of framework groups [J. Am. Chem. Soc. 102, 4615 (1980)] is used to partition the configuration space into structures spanning all possible distributions of sets of symmetry-equivalent atoms. This provides an efficient means of computing all structures of a given symmetry with minimum redundancy. This approach is also advantageous for generating initial structures for global optimizations via genetic algorithms and other stochastic global search techniques. Application of the SASS method is illustrated by locating 14 low-lying stationary points on the cc-pwCVDZ ROCCSD(T) potential energy surface of Li5H2. The global minimum structure is identified, along with many unique, nonintuitive, energetically favorable isomers.
Parallel group independent component analysis for massive fMRI data sets.
Chen, Shaojie; Huang, Lei; Qiu, Huitong; Nebel, Mary Beth; Mostofsky, Stewart H; Pekar, James J; Lindquist, Martin A; Eloyan, Ani; Caffo, Brian S
2017-01-01
Independent component analysis (ICA) is widely used in the field of functional neuroimaging to decompose data into spatio-temporal patterns of co-activation. In particular, ICA has found wide usage in the analysis of resting state fMRI (rs-fMRI) data. Recently, a number of large-scale data sets have become publicly available that consist of rs-fMRI scans from thousands of subjects. As a result, efficient ICA algorithms that scale well to the increased number of subjects are required. To address this problem, we propose a two-stage likelihood-based algorithm for performing group ICA, which we denote Parallel Group Independent Component Analysis (PGICA). By utilizing the sequential nature of the algorithm and parallel computing techniques, we are able to efficiently analyze data sets from large numbers of subjects. We illustrate the efficacy of PGICA, which has been implemented in R and is freely available through the Comprehensive R Archive Network, through simulation studies and application to rs-fMRI data from two large multi-subject data sets, consisting of 301 and 779 subjects respectively.
Ramachandran, Parameswaran; Sánchez-Taltavull, Daniel; Perkins, Theodore J
2017-01-01
Co-expression networks have long been used as a tool for investigating the molecular circuitry governing biological systems. However, most algorithms for constructing co-expression networks were developed in the microarray era, before high-throughput sequencing, with its unique statistical properties, became the norm for expression measurement. Here we develop Bayesian Relevance Networks, an algorithm that uses Bayesian reasoning about expression levels to account for the differing levels of uncertainty in expression measurements between highly- and lowly-expressed entities, and between samples with different sequencing depths. It combines data from groups of samples (e.g., replicates) to estimate group expression levels and confidence ranges. It then computes uncertainty-moderated estimates of cross-group correlations between entities, and uses permutation testing to assess their statistical significance. Using large scale miRNA data from The Cancer Genome Atlas, we show that our Bayesian update of the classical Relevance Networks algorithm provides improved reproducibility in co-expression estimates and lower false discovery rates in the resulting co-expression networks. Software is available at www.perkinslab.ca.
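A minimal sketch of the Relevance Networks backbone described above: compute cross-entity correlations of group-level expression and assess significance by permutation testing. The Bayesian moderation of uncertain (low-count) measurements is omitted here; data, the number of permutations, and the cutoff are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)
n_entities, n_groups = 30, 25
expr = rng.normal(size=(n_entities, n_groups))       # group-level expression estimates
expr[1] = expr[0] + 0.3 * rng.normal(size=n_groups)  # one truly co-expressed pair

corr = np.corrcoef(expr)
null_max = []
for _ in range(200):                                 # permutation null of max |r|
    perm = np.array([rng.permutation(row) for row in expr])
    c = np.corrcoef(perm)
    null_max.append(np.abs(c[np.triu_indices(n_entities, 1)]).max())
cutoff = np.quantile(null_max, 0.95)                 # family-wise 5% threshold

edges = np.argwhere(np.abs(np.triu(corr, 1)) > cutoff)
print("significant co-expression edges:", edges.tolist())
```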
Research Data Alliance: Understanding Big Data Analytics Applications in Earth Science
NASA Astrophysics Data System (ADS)
Riedel, Morris; Ramachandran, Rahul; Baumann, Peter
2014-05-01
The Research Data Alliance (RDA) enables data to be shared across barriers through focused working groups and interest groups, formed of experts from around the world - from academia, industry and government. Its Big Data Analytics (BDA) interest group seeks to develop community-based recommendations on feasible data analytics approaches to address scientific community needs for utilizing large quantities of data. BDA seeks to analyze different scientific domain applications (e.g. earth science use cases) and their potential use of various big data analytics techniques. These techniques range from hardware deployment models to various algorithms (e.g. machine learning algorithms such as support vector machines for classification). A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics and scientific queries will be covered in these recommendations. This contribution outlines initial parts of such a classification and recommendations in the specific context of the field of Earth Sciences. The lessons learned and experiences given are based on a survey of use cases, and a few use cases are discussed in detail.
Research Data Alliance: Understanding Big Data Analytics Applications in Earth Science
NASA Technical Reports Server (NTRS)
Riedel, Morris; Ramachandran, Rahul; Baumann, Peter
2014-01-01
The Research Data Alliance (RDA) enables data to be shared across barriers through focused working groups and interest groups, formed of experts from around the world - from academia, industry and government. Its Big Data Analytics (BDA) interest group seeks to develop community-based recommendations on feasible data analytics approaches to address scientific community needs for utilizing large quantities of data. BDA seeks to analyze different scientific domain applications (e.g. earth science use cases) and their potential use of various big data analytics techniques. These techniques range from hardware deployment models to various algorithms (e.g. machine learning algorithms such as support vector machines for classification). A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics and scientific queries will be covered in these recommendations. This contribution outlines initial parts of such a classification and recommendations in the specific context of the field of Earth Sciences. The lessons learned and experiences given are based on a survey of use cases, and a few use cases are discussed in detail.
Computing Galois Groups of Eisenstein Polynomials Over P-adic Fields
NASA Astrophysics Data System (ADS)
Milstead, Jonathan
The most efficient algorithms for computing Galois groups of polynomials over global fields are based on Stauduhar's relative resolvent method. These methods are not directly generalizable to the local field case, since they require a field that contains the global field and in which all roots of the polynomial can be approximated. We present splitting-field-independent methods for computing the Galois group of an Eisenstein polynomial over a p-adic field. Our approach is to combine information from different disciplines. First, we make use of the ramification polygon of the polynomial, which is the Newton polygon of a related polynomial. This allows us to quickly calculate several invariants that serve to reduce the number of possible Galois groups. Algorithms by Greve and Pauli very efficiently return the Galois group of polynomials where the ramification polygon consists of one segment, as well as information about the subfields of the stem field. Second, we look at the factorization of linear absolute resolvents to further narrow the pool of possible groups.
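A small sketch of the first invariant mentioned above: the Newton polygon, the lower convex hull of the points (i, v_p(a_i)) for a polynomial sum a_i x^i with integer coefficients (the ramification polygon is the Newton polygon of an associated polynomial). A purely illustrative helper, not the author's code.

```python
from fractions import Fraction

def vp(n, p):
    """p-adic valuation of a nonzero integer."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def newton_polygon(coeffs, p):
    """Lower convex hull of {(i, v_p(a_i)) : a_i != 0}, returned as its vertices."""
    pts = [(i, vp(a, p)) for i, a in enumerate(coeffs) if a != 0]
    hull = []
    for pt in pts:                       # monotone-chain lower hull
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if Fraction(y2 - y1, x2 - x1) >= Fraction(pt[1] - y2, pt[0] - x2):
                hull.pop()               # middle point lies on or above the chord
            else:
                break
        hull.append(pt)
    return hull

# Eisenstein example over Q_2: x^4 + 2x + 2 -> coefficients [2, 2, 0, 0, 1];
# the polygon is one segment of slope -1/4, as expected for an Eisenstein polynomial.
print(newton_polygon([2, 2, 0, 0, 1], p=2))
```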
Eliseev, Platon; Balantcev, Grigory; Nikishova, Elena; Gaida, Anastasia; Bogdanova, Elena; Enarson, Donald; Ornstein, Tara; Detjen, Anne; Dacombe, Russell; Gospodarevskaya, Elena; Phillips, Patrick P J; Mann, Gillian; Squire, Stephen Bertel; Mariandyshev, Andrei
2016-01-01
In the Arkhangelsk region of Northern Russia, multidrug-resistant (MDR) tuberculosis (TB) rates in new cases are amongst the highest in the world. In 2014, MDR-TB rates reached 31.7% among new cases and 56.9% among retreatment cases. The development of new diagnostic tools allows for faster detection of both TB and MDR-TB and should lead to reduced transmission by earlier initiation of anti-TB therapy. The PROVE-IT (Policy Relevant Outcomes from Validating Evidence on Impact) Russia study aimed to assess the impact of implementing line probe assay (LPA) as part of an LPA-based diagnostic algorithm for patients with presumptive MDR-TB, focusing on the time from the first care-seeking visit to the initiation of MDR-TB treatment, rather than diagnostic accuracy, as the primary outcome, and to assess treatment outcomes. We hypothesized that the implementation of LPA would result in a faster time to treatment initiation and better treatment outcomes. A culture-based diagnostic algorithm used prior to LPA implementation was compared to an LPA-based algorithm that replaced BacTAlert and Löwenstein-Jensen (LJ) for drug sensitivity testing. A total of 295 MDR-TB patients were included in the study, 163 diagnosed with the culture-based algorithm and 132 with the LPA-based algorithm. Among smear-positive patients, the implementation of the LPA-based algorithm was associated with a median decrease in time to MDR-TB treatment initiation of 50 and 66 days compared to the culture-based algorithm (BacTAlert and LJ, respectively; p<0.001). In smear-negative patients, the LPA-based algorithm was associated with a median decrease in time to MDR-TB treatment initiation of 78 days when compared to the culture-based algorithm (LJ, p<0.001). However, several weeks were still needed for treatment initiation with the LPA-based algorithm: 24 days in smear-positive and 62 days in smear-negative patients. Overall treatment outcomes were better with the LPA-based algorithm than with the culture-based algorithm (p = 0.003). Treatment success rates at 20 months of treatment were higher in patients diagnosed with the LPA-based algorithm (65.2%) than in those diagnosed with the culture-based algorithm (44.8%). Mortality was also lower in the LPA-based algorithm group (7.6%) compared to the culture-based algorithm group (15.9%). There was no statistically significant difference in smear and culture conversion rates between the two algorithms. The results of the study suggest that the introduction of LPA leads to faster MDR diagnosis and earlier treatment initiation, as well as better treatment outcomes for patients with MDR-TB. These findings also highlight the need for further improvements within the health system to reduce both patient and diagnostic delays to truly optimize the impact of new, rapid diagnostics.
Malinovsky, Yaakov; Albert, Paul S; Roy, Anindya
2016-03-01
In the context of group testing screening, McMahan, Tebbs, and Bilder (2012, Biometrics 68, 287-296) proposed a two-stage procedure for a heterogeneous population in the presence of misclassification. In earlier work published in Biometrics, Kim, Hudgens, Dreyfuss, Westreich, and Pilcher (2007, Biometrics 63, 1152-1162) also proposed group testing algorithms for a homogeneous population with misclassification. In both cases, the authors evaluated the performance of the algorithms based on the expected number of tests per person, with the optimal design being defined by minimizing this quantity. The purpose of this article is to show that although the expected number of tests per person is an appropriate evaluation criterion for group testing when there is no misclassification, it may be problematic when there is misclassification. Specifically, a valid criterion needs to take into account the amount of correct classification and not just the number of tests. We propose a more suitable objective function that accounts for not only the expected number of tests, but also the expected number of correct classifications. We then show how using this objective function, which accounts for correct classification, is important for design when considering group testing under misclassification. We also present novel analytical results that characterize the optimal Dorfman (1943) design under misclassification. © 2015, The International Biometric Society.
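A sketch of the point being made, for the classical two-stage Dorfman design: compute both the expected number of tests per person and the expected proportion of correct classifications as functions of group size k, prevalence p, sensitivity Se, and specificity Sp, under the standard assumptions of independent errors and pool sensitivity equal to test sensitivity. The numbers below are illustrative, not from the paper.

```python
def dorfman(k, p, se, sp):
    neg_pool = (1 - p) ** k                       # pool is truly all-negative
    pool_pos = se * (1 - neg_pool) + (1 - sp) * neg_pool
    tests = 1 / k + pool_pos                      # expected tests per person
    # A truly positive person is correct if the pool flags (se) and so does
    # the individual retest (se), assuming independent errors.
    correct_pos = se * se
    # A truly negative person shares the pool with k-1 others.
    q = se * (1 - (1 - p) ** (k - 1)) + (1 - sp) * (1 - p) ** (k - 1)
    correct_neg = (1 - q) + q * sp
    correct = p * correct_pos + (1 - p) * correct_neg
    return tests, correct

for k in (3, 5, 8, 12):
    t, c = dorfman(k, p=0.02, se=0.95, sp=0.98)
    print(f"k={k:2d}  E[tests]/person={t:.3f}  E[correct]/person={c:.4f}")
```

Running this shows why minimizing tests alone is problematic: larger groups keep cutting the testing burden while the expected proportion of correct classifications also changes, so the two criteria can favor different designs.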
Current Treatment Algorithms for Patients with Metastatic Non-Small Cell, Non-Squamous Lung Cancer
Melosky, Barbara
2017-01-01
The treatment paradigm for metastatic non-small cell, non-squamous lung cancer is continuously evolving due to new treatment options and our increasing knowledge of molecular signal pathways. As a result of treatments becoming more efficacious and more personalized, survival for selected groups of non-small cell lung cancer (NSCLC) patients is increasing. In this paper, three algorithms will be presented for treating patients with metastatic non-squamous, NSCLC. These include treatment algorithms for NSCLC patients whose tumors have EGFR mutations, ALK rearrangements, or wild-type/wild-type tumors. As the world of immunotherapy continues to evolve quickly, a future algorithm will also be presented. PMID:28373963
Ko, Rachel Jia Min; Lim, Swee Han; Wu, Vivien Xi; Leong, Tak Yam; Liaw, Sok Ying
2018-01-01
INTRODUCTION Simplifying the learning of cardiopulmonary resuscitation (CPR) is advocated to improve skill acquisition and retention. A simplified CPR training programme focusing on continuous chest compression, with a simple landmark tracing technique, was introduced to laypeople. The study aimed to examine the effectiveness of the simplified CPR training in improving lay rescuers’ CPR performance as compared to standard CPR. METHODS A total of 85 laypeople (aged 21–60 years) were recruited and randomly assigned to undertake either a two-hour simplified or standard CPR training session. They were tested two months after the training on a simulated cardiac arrest scenario. Participants’ performance on the sequence of CPR steps was observed and evaluated using a validated CPR algorithm checklist. The quality of chest compression and ventilation was assessed from the recording manikins. RESULTS The simplified CPR group performed significantly better on the CPR algorithm when compared to the standard CPR group (p < 0.01). No significant difference was found between the groups in time taken to initiate CPR. However, a significantly higher number of compressions and proportion of adequate compressions was demonstrated by the simplified group than the standard group (p < 0.01). Hands-off time was significantly shorter in the simplified CPR group than in the standard CPR group (p < 0.001). CONCLUSION Simplifying the learning of CPR by focusing on continuous chest compressions, with simple hand placement for chest compression, could lead to better acquisition and retention of CPR algorithms, and better quality of chest compressions than standard CPR. PMID:29167910
Ko, Rachel Jia Min; Lim, Swee Han; Wu, Vivien Xi; Leong, Tak Yam; Liaw, Sok Ying
2018-04-01
Simplifying the learning of cardiopulmonary resuscitation (CPR) is advocated to improve skill acquisition and retention. A simplified CPR training programme focusing on continuous chest compression, with a simple landmark tracing technique, was introduced to laypeople. The study aimed to examine the effectiveness of the simplified CPR training in improving lay rescuers' CPR performance as compared to standard CPR. A total of 85 laypeople (aged 21-60 years) were recruited and randomly assigned to undertake either a two-hour simplified or standard CPR training session. They were tested two months after the training on a simulated cardiac arrest scenario. Participants' performance on the sequence of CPR steps was observed and evaluated using a validated CPR algorithm checklist. The quality of chest compression and ventilation was assessed from the recording manikins. The simplified CPR group performed significantly better on the CPR algorithm when compared to the standard CPR group (p < 0.01). No significant difference was found between the groups in time taken to initiate CPR. However, a significantly higher number of compressions and proportion of adequate compressions was demonstrated by the simplified group than the standard group (p < 0.01). Hands-off time was significantly shorter in the simplified CPR group than in the standard CPR group (p < 0.001). Simplifying the learning of CPR by focusing on continuous chest compressions, with simple hand placement for chest compression, could lead to better acquisition and retention of CPR algorithms, and better quality of chest compressions than standard CPR. Copyright: © Singapore Medical Association.
Milewski, Marek C; Kamel, Karol; Kurzynska-Kokorniak, Anna; Chmielewski, Marcin K; Figlerowicz, Marek
2017-10-01
Experimental methods based on DNA and RNA hybridization, such as multiplex polymerase chain reaction, multiplex ligation-dependent probe amplification, or microarray analysis, require the use of mixtures of multiple oligonucleotides (primers or probes) in a single test tube. To provide an optimal reaction environment, minimal self- and cross-hybridization must be achieved among these oligonucleotides. To address this problem, we developed EvOligo, which is a software package that provides the means to design and group DNA and RNA molecules with defined lengths. EvOligo combines two modules. The first module performs oligonucleotide design, and the second module performs oligonucleotide grouping. The software applies a nearest-neighbor model of nucleic acid interactions coupled with a parallel evolutionary algorithm to construct individual oligonucleotides, and to group the molecules that are characterized by the weakest possible cross-interactions. To provide optimal solutions, the evolutionary algorithm sorts oligonucleotides into sets, preserves preselected parts of the oligonucleotides, and shapes their remaining parts. In addition, the oligonucleotide sets can be designed and grouped based on their melting temperatures. For the user's convenience, EvOligo is provided with a user-friendly graphical interface. EvOligo was used to design individual oligonucleotides, oligonucleotide pairs, and groups of oligonucleotide pairs that are characterized by the following parameters: (1) weaker cross-interactions between the non-complementary oligonucleotides and (2) more uniform ranges of the oligonucleotide pair melting temperatures than other available software products. In addition, in contrast to other grouping algorithms, EvOligo offers time-efficient sorting of paired and unpaired oligonucleotides based on various parameters defined by the user.
Benchmarking protein classification algorithms via supervised cross-validation.
Kertész-Farkas, Attila; Dhir, Somdutta; Sonego, Paolo; Pacurar, Mircea; Netoteia, Sergiu; Nijveen, Harm; Kuzniar, Arnold; Leunissen, Jack A M; Kocsor, András; Pongor, Sándor
2008-04-24
Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups that differ vastly in the number of members, in average protein size, in similarity within the group, etc. Datasets based on traditional cross-validation (k-fold, leave-one-out, etc.) may not give reliable estimates of how an algorithm will generalize to novel, distantly related subtypes of the known protein classes. Supervised cross-validation, i.e., selection of test and training sets according to the known subtypes within a database, has been successfully used earlier in conjunction with the SCOP database. Our goal was to extend this principle to other databases and to design standardized benchmark datasets for protein classification. Hierarchical classification trees of protein categories provide a simple and general framework for designing supervised cross-validation strategies for protein classification. Benchmark datasets can be designed at various levels of the concept hierarchy using a simple graph-theoretic distance. A combination of supervised and random sampling was selected to construct reduced-size model datasets, suitable for algorithm comparison. Over 3000 new classification tasks were added to our recently established protein classification benchmark collection, which currently includes protein sequence (including protein domains and entire proteins), protein structure and reading-frame DNA sequence data. We carried out an extensive evaluation based on various machine-learning algorithms such as nearest neighbor, support vector machines, artificial neural networks, random forests and logistic regression, used in conjunction with the comparison algorithms BLAST, Smith-Waterman and Needleman-Wunsch, as well as the 3D comparison methods DALI and PRIDE. The resulting datasets provide lower, and in our opinion more realistic, estimates of classifier performance than do random cross-validation schemes. A combination of supervised and random sampling was used to construct model datasets suitable for algorithm comparison.
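A minimal sketch of the supervised cross-validation idea: instead of random k-fold splits, hold out entire known subtypes (here via scikit-learn's LeaveOneGroupOut) so the test set contains only subtypes never seen in training. The data are random stand-ins; in the benchmark itself, the groups would come from the class hierarchy (e.g., SCOP subtypes).

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 10))                # stand-in feature vectors
y = rng.integers(0, 2, size=120)              # protein class labels (stand-in)
subtype = rng.integers(0, 4, size=120)        # known subtypes within the classes

scores = []
for train, test in LeaveOneGroupOut().split(X, y, groups=subtype):
    clf = KNeighborsClassifier().fit(X[train], y[train])
    scores.append(clf.score(X[test], y[test]))  # accuracy on the unseen subtype
print("per-held-out-subtype accuracy:", np.round(scores, 2))
```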
Development of an algorithm to identify fall-related injuries and costs in Medicare data.
Kim, Sung-Bou; Zingmond, David S; Keeler, Emmett B; Jennings, Lee A; Wenger, Neil S; Reuben, David B; Ganz, David A
2016-12-01
Identifying fall-related injuries and costs using healthcare claims data is cost-effective and easier to implement than using medical records or patient self-report to track falls. We developed a comprehensive four-step algorithm for identifying episodes of care for fall-related injuries and associated costs, using fee-for-service Medicare and Medicare Advantage health plan claims data for 2,011 patients from 5 medical groups between 2005 and 2009. First, as a preparatory step, we identified care received in acute inpatient and skilled nursing facility settings, in addition to emergency department visits. Second, based on diagnosis and procedure codes, we identified all fall-related claim records. Third, with these records, we identified six types of encounters for fall-related injuries, with different levels of injury and care. In the final step, we used these encounters to identify episodes of care for fall-related injuries. To illustrate the algorithm, we present a representative example of a fall episode and examine descriptive statistics of injuries and costs for such episodes. Altogether, we found that the results support the use of our algorithm for identifying episodes of care for fall-related injuries. When we decomposed an episode, we found that the details present a realistic and coherent story of fall-related injuries and healthcare services. Variation of episode characteristics across medical groups supported the use of a complex algorithm approach, and descriptive statistics on the proportion, duration, and cost of episodes by healthcare services and injuries verified that our results are consistent with other studies. This algorithm can be used to identify and analyze various types of fall-related outcomes including episodes of care, injuries, and associated costs. Furthermore, the algorithm can be applied and adopted in other fall-related studies with relative ease.
Reznitsky, P A; Yartsev, P A; Shavrina, N V
To assess the effectiveness of minimally invasive and laparoscopic technologies in the treatment of inflammatory complications of colonic diverticular disease. The study included 150 patients who were divided into a control group and a main group. Examination included ultrasound, X-ray examination and abdominal computed tomography. In the main group, a standardized treatment algorithm including minimally invasive and laparoscopic technologies was used. In the main group, 79 patients underwent conservative treatment, minimally invasive procedures (ultrasound-assisted percutaneous drainage of abscesses) or laparoscopic surgery, which was successful in 78 (98.7%) patients. The standardized algorithm reduced the duration of treatment, the incidence of postoperative complications, mortality and the risk of recurrent inflammatory complications of colonic diverticular disease. Postoperative quality of life was also improved.
Analyzing gene expression time-courses based on multi-resolution shape mixture model.
Li, Ying; He, Ye; Zhang, Yu
2016-11-01
Biological processes are dynamic molecular processes over time. Time-course gene expression experiments provide opportunities to explore patterns of gene expression change over time and to understand the dynamic behavior of gene expression, which is crucial for studying the development and progression of biological systems and disease. Analysis of gene expression time-course profiles has not been fully exploited so far and remains a challenging problem. We propose a novel shape-based mixture model clustering method for gene expression time-course profiles to explore significant gene groups. Based on multi-resolution fractal features and a mixture clustering model, we propose a multi-resolution shape mixture model algorithm. The multi-resolution fractal features are computed by wavelet decomposition, which explores patterns of change over time in gene expression at different resolutions. Our proposed multi-resolution shape mixture model algorithm is a probabilistic framework that offers a more natural and robust way of clustering time-course gene expression. We assessed the performance of our proposed algorithm using yeast time-course gene expression profiles, compared with several popular clustering methods for gene expression profiles. The gene groups identified by the different methods were evaluated by enrichment analysis of biological pathways and of known protein-protein interactions from experimental evidence. The gene groups identified by our proposed algorithm have stronger biological significance. In summary, a novel multi-resolution shape mixture model algorithm based on multi-resolution fractal features is proposed. Our model provides new horizons and an alternative tool for the visualization and analysis of time-course gene expression profiles. The R and MATLAB programs are available upon request. Copyright © 2016 Elsevier Inc. All rights reserved.
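A minimal sketch of the two ingredients named above: multi-resolution features from a wavelet decomposition of each expression time-course, then a Gaussian mixture model to group the profiles. PyWavelets and scikit-learn are assumed; per-level energies are a simplification of the paper's fractal features, and the synthetic profiles and settings are illustrative.

```python
import numpy as np
import pywt
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
t = np.linspace(0, 2 * np.pi, 32)
profiles = np.vstack(
    [np.sin(t) + 0.1 * rng.normal(size=32) for _ in range(20)] +
    [np.cos(2 * t) + 0.1 * rng.normal(size=32) for _ in range(20)])

def multires_features(x, wavelet="db2", level=3):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])  # energy per resolution level

F = np.vstack([multires_features(x) for x in profiles])
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(F)
print(labels)        # the two shape families should fall into separate clusters
```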
Advancing MODFLOW Applying the Derived Vector Space Method
NASA Astrophysics Data System (ADS)
Herrera, G. S.; Herrera, I.; Lemus-García, M.; Hernandez-Garcia, G. D.
2015-12-01
The most effective domain decomposition methods (DDMs) are non-overlapping DDMs. Recently a new approach, the DVS-framework, based on an innovative discretization method that uses a non-overlapping system of nodes (the derived-nodes), was introduced and developed by I. Herrera et al. [1, 2]. Using the DVS-approach, a group of four algorithms, referred to as the 'DVS-algorithms', which fulfill the DDM-paradigm (i.e., the solution of global problems is obtained by the resolution of local problems exclusively), has been derived. Such procedures are applicable to any boundary-value problem, or system of such equations, for which a standard discretization method is available, and software with a high degree of parallelization can then be constructed. In a parallel talk at this AGU Fall Meeting, Ismael Herrera will introduce the general DVS methodology. The application of the DVS-algorithms has been demonstrated in the solution of several boundary-value problems of interest in Geophysics. Numerical examples for a single equation, for the cases of symmetric, non-symmetric and indefinite problems, were demonstrated before [1,2]. For these problems the DVS-algorithms exhibited significantly improved numerical performance with respect to standard versions of DDM algorithms. In view of these results, our research group is in the process of applying the DVS method to a widely used simulator for the first time; here we present the advances in applying this method to the parallelization of MODFLOW. Efficiency results for a group of tests will be presented. References [1] I. Herrera, L.M. de la Cruz and A. Rosas-Medina. Non-overlapping discretization methods for partial differential equations, Numer Meth Part D E, (2013). [2] Herrera, I., & Contreras, Iván. "An Innovative Tool for Effectively Applying Highly Parallelized Software to Problems of Elasticity". Geofísica Internacional, 2015 (in press).
Multigrid Methods for the Computation of Propagators in Gauge Fields
NASA Astrophysics Data System (ADS)
Kalkreuter, Thomas
Multigrid methods were invented for the solution of discretized partial differential equations in order to overcome the slowness of traditional algorithms by updates on various length scales. In the present work, generalizations of multigrid methods for propagators in gauge fields are investigated. Gauge fields are incorporated in the algorithms in a covariant way. The kernel C of the restriction operator, which averages from one grid to the next coarser grid, is defined by projection on the ground state of a local Hamiltonian. The idea behind this definition is that the appropriate notion of smoothness depends on the dynamics. The ground-state projection choice of C can be used in arbitrary dimension and for arbitrary gauge group. We discuss proper averaging operations for bosons and for staggered fermions. The kernels C can also be used in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies. Actual numerical computations are performed in four-dimensional SU(2) gauge fields. We prove that our proposals for block spins are “good”, using renormalization group arguments. A central result is that the multigrid method works in arbitrarily disordered gauge fields, in principle. It is proved that computations of propagators in gauge fields without critical slowing down are possible when one uses an ideal interpolation kernel. Unfortunately, the idealized algorithm is not practical, but it was important to answer questions of principle. Practical methods are able to outperform the conjugate gradient algorithm in the case of bosons. The case of staggered fermions is harder. Multigrid methods give considerable speed-ups compared to conventional relaxation algorithms, but on lattices up to 18^4 conjugate gradient is superior.
Detecting perceptual groupings in textures by continuity considerations
NASA Technical Reports Server (NTRS)
Greene, Richard J.
1990-01-01
A generalization of the second-derivative-of-Gaussian (D²G) operator is presented for problems of perceptual organization involving textures. Extensions to other problems of perceptual organization are evident, and a new research direction can be established. The technique presented is theoretically pleasing, since it has the potential of unifying the entire area of image segmentation under the mathematical notion of continuity, and it presents a single algorithm to form perceptual groupings where many algorithms existed previously. The eventual impact on both the approach and technique of image processing segmentation operations could be significant.
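To make the operator concrete, the sketch below applies a Laplacian-of-Gaussian filter (a standard realization of a D²G-type operator) at several scales and thresholds the response magnitude as a grouping cue. The function name, scales, and threshold are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: multi-scale D^2G (Laplacian-of-Gaussian) responses
# thresholded as a crude texture-grouping cue.
import numpy as np
from scipy.ndimage import gaussian_laplace

def d2g_groupings(image: np.ndarray, sigmas=(1.0, 2.0, 4.0), thresh=0.05):
    """Return one boolean map per scale marking strong D^2G responses."""
    maps = []
    for sigma in sigmas:
        response = gaussian_laplace(image.astype(float), sigma=sigma)
        # Grouping cue: locations where the operator's magnitude is large.
        maps.append(np.abs(response) > thresh * np.abs(response).max())
    return maps

# Usage on a placeholder image:
groupings = d2g_groupings(np.random.default_rng(0).random((128, 128)))
```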
Theory of electron transfer and molecular state in DNA
NASA Astrophysics Data System (ADS)
Endres, Robert Gunter
2002-09-01
In this thesis, a mechanism for long-range electron transfer in DNA and a systematic search for high-conductance DNA are developed. DNA is well known for containing the genetic code of all living species. On the other hand, there are experimental indications that DNA can effectively mediate long-range electron transfer, leading to the concept of chemistry at a distance. This can be important for DNA damage and healing. In the first part of the thesis, a possible mechanism for long-range electron transfer is introduced. Weakly distance-dependent electron transfer was observed experimentally using transition-metal intercalators as donor and acceptor. In our model calculations, the transfer is mediated by the molecular analogue of a Kondo bound state, well known from the solid-state physics of mixed-valence rare-earth compounds. We believe this is quite realistic, since localized d orbitals of the transition-metal ions could function as an Anderson impurity embedded in a reservoir of rather delocalized molecular orbitals of the intercalator ligands and DNA pi orbitals. The effective Anderson model is solved with a physically intuitive variational ansatz as well as with the essentially exact DMRG method. The electronic transition matrix element, which is important because it contains the donor-acceptor distance dependence, is obtained with the Mulliken-Hush algorithm as well as from Born-Oppenheimer potential energy surfaces. Our possible explanation of long-range electron transfer is placed in the context of other, more conventional mechanisms that could lead to similar behavior. Another important issue for DNA is its possible use in nano-technology. Although DNA's mechanical properties are excellent, the question of whether it can be conducting and be used for nano-wires is highly controversial. Experimentally, DNA shows conducting, semi-conducting and insulating properties. Motivated by these wide-ranging experimental results on the conductivity of DNA, we have embarked on a theoretical effort to ascertain what conditions might induce such remarkable behavior. We use a combination of an ab initio density functional theory method and a parameterized Hückel-Slater-Koster model. Our focus here is to examine whether any likely DNA structures or environments can yield reduced activation gaps to conduction or enhanced electronic overlaps. In particular, we study a hypothetical stretched ribbon structure, A- and B-form DNA, and the effects of counterions and humidity. Unlike solids, DNA and other molecules are considered soft condensed matter. Hence, we study the influence of vibrations upon the electronic structure of DNA. We calculate parameters for charge-transfer rates between adjacent bases. We find good agreement between our estimated rates and recent experimental data, assuming that torsional vibrations limit the charge transfer most significantly.
Epidermis area detection for immunofluorescence microscopy
NASA Astrophysics Data System (ADS)
Dovganich, Andrey; Krylov, Andrey; Nasonov, Andrey; Makhneva, Natalia
2018-04-01
We propose a novel image segmentation method for immunofluorescence microscopy images of skin tissue for the diagnosis of various skin diseases. The segmentation is based on machine learning algorithms. The feature vector comprises three groups of features: statistical features, Laws' texture energy measures and local binary patterns. The images are preprocessed for better learning. Several machine learning algorithms were evaluated, and the best results were obtained with the random forest algorithm. We use the proposed method to detect the epidermis region as part of a pemphigus diagnosis system.
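A minimal sketch of a pipeline of this shape, under stated assumptions: local-binary-pattern histograms plus simple patch statistics as features, synthetic patches in place of real immunofluorescence data, and default preprocessing omitted. This is not the authors' exact feature set.

```python
# Toy texture-feature + random forest pipeline (illustrative assumptions only).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def patch_features(patch: np.ndarray) -> np.ndarray:
    # Uniform LBP with 8 neighbors, radius 1 -> values in [0, 9].
    lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    stats = [patch.mean(), patch.std(), np.median(patch)]  # simple statistics
    return np.concatenate([hist, stats])

rng = np.random.default_rng(0)
patches = rng.random((200, 32, 32))        # placeholder image patches
labels = rng.integers(0, 2, size=200)      # 1 = epidermis, 0 = other (toy labels)
X = np.array([patch_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
```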
ERIC Educational Resources Information Center
Flores, Raymond; Koontz, Esther; Inan, Fethi A.; Alagic, Mara
2015-01-01
This study examined the impact of the order of two teaching approaches on students' abilities and on-task behaviors while learning how to solve percentage problems. Two treatment groups were compared: MR-first, which received multiple-representation instruction followed by traditional algorithmic instruction, and TA-first, which received these teaching…
Fault Detection of Aircraft System with Random Forest Algorithm and Similarity Measure
Park, Wookje; Jung, Sikhang
2014-01-01
A fault detection algorithm was developed using a similarity measure and the random forest algorithm. The algorithm was applied to an unmanned aerial vehicle (UAV) prepared by our group. The similarity measure was designed with the help of distance information, and its usefulness was verified by proof. Fault decisions were made by calculating a weighted similarity measure. Twelve available coefficients from the healthy- and faulty-status data groups were used to make the decision. The similarity measure weights were obtained through the random forest algorithm (RFA), which provides data priority. To obtain a fast decision response, a limited number of coefficients was also considered. The relation between detection rate and the amount of feature data was analyzed and illustrated. Repeated trials of the similarity calculation yielded the useful amount of data. PMID:25057508
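One plausible reading of the weighted-similarity idea, sketched under stated assumptions: random forest feature importances supply the per-coefficient weights ("data priority"), and the similarity of a sample to a healthy reference pattern decides the fault. The formula and threshold are illustrative, not the authors'.

```python
# Hedged sketch: RF-importance-weighted similarity for fault decisions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 12))     # 12 coefficients per sample (toy data)
y = rng.integers(0, 2, size=300)   # 0 = healthy, 1 = faulty (toy labels)
weights = RandomForestClassifier(
    n_estimators=200, random_state=1).fit(X, y).feature_importances_

def weighted_similarity(x: np.ndarray, ref: np.ndarray) -> float:
    """Similarity in (0, 1]; importances sum to 1, so the result is bounded."""
    return float(np.sum(weights / (1.0 + np.abs(x - ref))))

healthy_ref = X[y == 0].mean(axis=0)          # reference healthy pattern
is_faulty = weighted_similarity(X[0], healthy_ref) < 0.5  # illustrative threshold
```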
NASA Astrophysics Data System (ADS)
Wang, Yue; Yu, Jingjun; Pei, Xu
2018-06-01
A new forward kinematics algorithm for the mechanism of 3-RPS (R: Revolute; P: Prismatic; S: Spherical) parallel manipulators is proposed in this study. This algorithm is primarily based on the special geometric conditions of the 3-RPS parallel mechanism, and it eliminates the errors produced by parasitic motions to improve and ensure accuracy. Specifically, the errors can be less than 10^-6. With this method, only the group of solutions consistent with the actual configuration of the platform is obtained, and rapidly. The algorithm substantially improves calculation efficiency because the selected initial values are reasonable and all the formulas in the calculation are analytical. This novel forward kinematics algorithm is well suited for real-time and high-precision control of the 3-RPS parallel mechanism.
NASA Astrophysics Data System (ADS)
Gok, Gokhan; Mosna, Zbysek; Arikan, Feza; Arikan, Orhan; Erdem, Esra
2016-07-01
Ionospheric observation is essentially accomplished by specialized radar systems called ionosondes. The time delay between the transmitted and received signals versus frequency is measured by the ionosondes, and the received signals are processed to generate ionogram plots, which show the time delay or reflection height of signals with respect to transmitted frequency. The critical frequencies of ionospheric layers and the virtual heights, which provide useful information about ionospheric structure, can be extracted from ionograms. Ionograms also indicate the amount of variability or disturbance in the ionosphere. With special inversion algorithms and tomographic methods, electron density profiles can also be estimated from the ionograms. Although structural pictures of the ionosphere in the vertical direction can be observed from ionosonde measurements, errors may arise due to inaccuracies in signal propagation, modeling, data processing and tomographic reconstruction algorithms. Recently the IONOLAB group (www.ionolab.org) developed a new algorithm for effective and accurate extraction of ionospheric parameters and reconstruction of the electron density profile from ionograms. The electron density reconstruction algorithm applies advanced optimization techniques to calculate the parameters of any existing analytical function that defines electron density with respect to height, using ionogram measurement data. The process of reconstructing electron density with respect to height is known as ionogram scaling or true-height analysis. The IONOLAB-RAY algorithm is a tool to investigate the propagation path and parameters of HF waves in the ionosphere. The algorithm models the wave propagation using ray representation under the geometrical optics approximation. In the algorithm, the structural ionospheric characteristics are represented as realistically as possible, including anisotropy, inhomogeneity and time dependence, in a 3-D voxel structure. The algorithm is also used for various purposes, including calculation of actual height and generation of ionograms. In this study, the performance of the electron density reconstruction algorithm of the IONOLAB group and the standard electron density profile algorithms of ionosondes are compared with IONOLAB-RAY wave propagation simulation at near-vertical incidence. The electron density reconstruction and parameter extraction algorithms of ionosondes are validated against the IONOLAB-RAY results for both quiet and disturbed ionospheric states in Central Europe, using ionosonde stations such as Pruhonice and Juliusruh. It is observed that the IONOLAB ionosonde parameter extraction and electron density reconstruction algorithm performs significantly better than the standard algorithms, especially for disturbed ionospheric conditions. IONOLAB-RAY provides an efficient and reliable tool to investigate and validate ionosonde electron density reconstruction algorithms, especially in the determination of the reflection height (true height) of signals and the critical parameters of the ionosphere. This study is supported by TUBITAK 114E541, 115E915 and joint TUBITAK 114E092 and AS CR 14/001 projects.
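As one concrete realization of fitting "any existing analytical function that defines electron density with respect to height", the sketch below fits a Chapman layer to synthetic (height, density) samples by least squares. The profile choice, units, and starting values are assumptions for illustration, not the IONOLAB implementation.

```python
# Illustrative true-height fit: Chapman layer via nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def chapman(h, NmF2, hmF2, H):
    # Chapman layer: peak density NmF2 at height hmF2 with scale height H.
    z = (h - hmF2) / H
    return NmF2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))

heights = np.linspace(150.0, 450.0, 40)                 # km (synthetic samples)
true_profile = chapman(heights, 1.0e12, 300.0, 50.0)    # electrons / m^3
noisy = true_profile * (1 + 0.05 * np.random.default_rng(2).normal(size=heights.size))
params, _ = curve_fit(chapman, heights, noisy, p0=(8.0e11, 280.0, 40.0))
NmF2, hmF2, H = params   # critical density, peak (true) height, scale height
```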
The search for extended infrared emission near interacting and active galaxies
NASA Technical Reports Server (NTRS)
Appleton, Philip N.
1991-01-01
The following subject areas are covered: the search for extended far IR emission; the search for extended emission in galaxy groups; a brief review of the flattening algorithm; the target groups; extended emission from groups and intergalactic HI clouds; and morphological image processing.
Astrocytic tracer dynamics estimated from [1-¹¹C]-acetate PET measurements.
Arnold, Andrea; Calvetti, Daniela; Gjedde, Albert; Iversen, Peter; Somersalo, Erkki
2015-12-01
We address the problem of estimating the unknown parameters of a model of tracer kinetics from sequences of positron emission tomography (PET) scan data using a statistical sequential algorithm for the inference of magnitudes of dynamic parameters. The method, based on Bayesian statistical inference, is a modification of a recently proposed particle filtering and sequential Monte Carlo algorithm, where instead of preassigning the accuracy in the propagation of each particle, we fix the time step and account for the numerical errors in the innovation term. We apply the algorithm to PET images of [1-¹¹C]-acetate-derived tracer accumulation, estimating the transport rates in a three-compartment model of astrocytic uptake and metabolism of the tracer for a cohort of 18 volunteers from 3 groups, corresponding to healthy control individuals, cirrhotic liver and hepatic encephalopathy patients. The distribution of the parameters for the individuals and for the groups presented within the Bayesian framework support the hypothesis that the parameters for the hepatic encephalopathy group follow a significantly different distribution than the other two groups. The biological implications of the findings are also discussed.
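A minimal bootstrap particle filter with a fixed time step, in the spirit of the sequential scheme described; the dynamics, noise levels, and observation model below are placeholder assumptions, not the authors' three-compartment model.

```python
# Toy sequential Monte Carlo: fixed time step, process noise standing in for
# numerical error in the innovation term (all constants illustrative).
import numpy as np

rng = np.random.default_rng(3)
n_particles, dt = 1000, 0.1
theta = rng.lognormal(mean=0.0, sigma=0.5, size=n_particles)  # unknown rate parameter
state = np.ones(n_particles)                                  # tracer amount per particle

for obs in [0.92, 0.83, 0.77, 0.70]:          # synthetic PET-derived observations
    # Propagate with explicit Euler over the fixed step, plus process noise.
    state = state - dt * theta * state + 0.01 * rng.normal(size=n_particles)
    weights = np.exp(-0.5 * ((obs - state) / 0.05) ** 2)      # Gaussian likelihood
    weights /= weights.sum()
    idx = rng.choice(n_particles, size=n_particles, p=weights)  # resample
    theta, state = theta[idx], state[idx]

posterior_mean_rate = theta.mean()   # point estimate of the transport rate
```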
Baygin, Mehmet; Karakose, Mehmet
2013-01-01
The increasing use of group elevator control systems in ever-taller buildings makes the development of high-performance algorithms necessary for saving time and energy. Although there are many studies in the literature on this topic, they are still not effective enough because they are not able to evaluate all features of the system. In this paper, a new immune-system-based optimal estimation approach is studied for the dynamic control of group elevator systems. The method is mainly based on estimating the optimal route by optimizing all calls with genetic, immune-system and DNA-computing algorithms, and the result is evaluated with a fuzzy system. The system is dynamic with respect to the state of calls and the choice of the most appropriate algorithm, and it adapts to parameters such as the number of floors and cabins. This new approach, which provides both time and energy savings, was carried out in real time. The experimental results comparatively demonstrate the effects of the method. With the dynamic and adaptive control approach developed in this study, significant progress over traditional methods has been achieved in the time and energy efficiency of group elevator control systems. PMID:23935433
[Clinical applications of a dosing algorithm in the prediction of warfarin maintenance dose].
Huang, Sheng-wen; Xiang, Dao-kang; An, Bang-quan; Li, Gui-fang; Huang, Ling; Wu, Hai-li
2011-12-27
To evaluate the feasibility of clinical application of a genetics-based dosing algorithm for the prediction of warfarin maintenance dose in a Chinese population. Clinical data were collected and blood samples harvested from a total of 126 patients undergoing heart valve replacement. The genotypes of VKORC1 and CYP2C9 were determined by melting curve analysis after PCR. The patients were divided randomly into study and control groups. In the study group, the first three doses of warfarin were prescribed according to the predicted warfarin maintenance dose, while warfarin was initiated at 2.5 mg/d in the control group. The warfarin doses were adjusted according to the measured international normalized ratio (INR) values, and all subjects were followed for 50 days after the initiation of warfarin therapy. At the end of the 50-day follow-up period, the proportions of patients on a stable dose were 82.4% (42/51) and 62.5% (30/48) for the study and control groups, respectively. The mean durations of reaching a stable dose of warfarin were (27.5 ± 1.8) and (34.7 ± 1.8) days, and the median durations were (24.0 ± 1.7) and (33.0 ± 4.5) days, in the study and control groups respectively. A significant difference existed in the duration of reaching a stable dose between the two groups (P = 0.012). Compared with the control group, the hazard ratio (HR) for the duration of reaching a stable dose was 1.786 in the study group (95%CI 1.088 - 2.875, P = 0.026). The predictive dosing algorithm incorporating genetic and non-genetic factors may shorten the duration of efficiently achieving a stable dose of warfarin, and the present study validates the feasibility of its clinical application.
NASA Astrophysics Data System (ADS)
Chia, Teck Chee; Fu, Sheng; Chia, Yee Hong; Kwek, Leong Chuan; Tang, Choong Leong
2005-09-01
This study aimed at applying the laser-induced autofluorescence (LIAF) diagnostic method as an in-vivo screening tool for colorectal polyps/cancer. A spectral algorithm based on the ratio of autofluorescence intensities was used to distinguish diseased from normal tissues, as it generally performed better than an algorithm based simply on spectral intensity. Histopathological biopsy results were compared with the detected AF spectral characteristics for different kinds of polyps. Seventy-three patients were examined with the LIAF spectroscopy detection system during their colonoscopy screening at the Endoscopy Center, Singapore General Hospital. The autofluorescence from the surface of the colorectal tissues under 405 nm laser excitation was detected using our system. Two groups of patients were involved in the experimental investigation. One was the "abnormal" group of 25 patients, in whom polyps or carcinoma were found in the colorectal tract during colonoscopy; the histopathology reports confirmed this classification. In total, 36 polyp AF spectra and 9 carcinoma AF spectra were recorded from the 25 patients of the abnormal group during their regular endoscopic examination. The intensity ratios I680/I500 and I630/I500 of the polyp/cancerous AF spectra and of the corresponding normal colorectal AF spectra were calculated. The critical intensity ratios separating normal from abnormal colorectal tissue were defined as 0.5 for I680/I500 and 0.6 for I630/I500. Using these critical values, the rectums of 48 "normal"-group patients were checked with the LIAF detection system. There were 20 patients (41.7%) whose AF spectra of the colorectal mucosa fell in the abnormal category; however, no abnormality had been found in these 20 patients under white light via traditional endoscopy. Small diseased areas, such as small flat polyps and carcinomas, are very difficult to identify under white light endoscopy, but the LIAF spectral technique and the AF intensity-ratio algorithm were able to detect such abnormal areas earlier than traditional endoscopy. Using this algorithm, the onset of abnormal tissue growth can be identified during real-time clinical endoscopic examination.
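The stated decision rule is simple enough to state in code; the sketch below computes the two intensity ratios and applies the quoted critical values (the helper names and nearest-sample lookup are illustrative).

```python
# Sketch of the quoted rule: abnormal if I680/I500 > 0.5 or I630/I500 > 0.6.
import numpy as np

def classify_af_spectrum(wavelengths: np.ndarray, intensity: np.ndarray) -> bool:
    """Return True if the AF spectrum is classified as abnormal."""
    def at(nm: float) -> float:
        # Intensity at the sample nearest the requested wavelength.
        return float(intensity[np.argmin(np.abs(wavelengths - nm))])
    r680 = at(680) / at(500)
    r630 = at(630) / at(500)
    return r680 > 0.5 or r630 > 0.6
```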
Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge
Litjens, Geert; Toth, Robert; van de Ven, Wendy; Hoeks, Caroline; Kerkstra, Sjoerd; van Ginneken, Bram; Vincent, Graham; Guillard, Gwenael; Birbeck, Neil; Zhang, Jindang; Strand, Robin; Malmberg, Filip; Ou, Yangming; Davatzikos, Christos; Kirschner, Matthias; Jung, Florian; Yuan, Jing; Qiu, Wu; Gao, Qinquan; Edwards, Philip “Eddie”; Maan, Bianca; van der Heijden, Ferdinand; Ghose, Soumya; Mitra, Jhimli; Dowling, Jason; Barratt, Dean; Huisman, Henkjan; Madabhushi, Anant
2014-01-01
Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Especially because we are dealing with MR images, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety in methods and implementation, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary- and volume-based metrics, which were combined into a single score relating the metrics to human expert performance. The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with overall scores of 85.72 and 84.29. Both algorithms were significantly better than all other algorithms in the challenge (p < 0.05) and had efficient implementations, with run times of 8 minutes and 3 seconds per case, respectively. Overall, active appearance model based approaches seemed to outperform other approaches like multi-atlas registration, both on accuracy and computation time. Although average algorithm performance was good to excellent and the Imorphics algorithm outperformed the second observer on average, we showed that algorithm combination might lead to further improvement, indicating that optimal performance for prostate segmentation is not yet obtained. All results are available online at http://promise12.grand-challenge.org/. PMID:24418598
A coarse to fine minutiae-based latent palmprint matching.
Liu, Eryun; Jain, Anil K; Tian, Jie
2013-10-01
With the availability of live-scan palmprint technology, high-resolution palmprint recognition has started to receive significant attention in forensics and law enforcement. In forensic applications, latent palmprints provide critical evidence, as it is estimated that about 30 percent of the latents recovered at crime scenes are those of palms. Most of the available high-resolution palmprint matching algorithms essentially follow the minutiae-based fingerprint matching strategy. Considering the large number of minutiae (about 1,000 minutiae in a full palmprint compared to about 100 in a rolled fingerprint) and the large area of the foreground region in full palmprints, novel strategies need to be developed for efficient and robust latent palmprint matching. In this paper, a coarse-to-fine matching strategy based on minutiae clustering and minutiae match propagation is designed specifically for palmprint matching. To deal with the large number of minutiae, a local-feature-based minutiae clustering algorithm is designed to cluster minutiae into several groups such that minutiae belonging to the same group have similar local characteristics. Coarse matching is then performed within each cluster to establish initial minutiae correspondences between two palmprints. Starting with each initial correspondence, a minutiae match propagation algorithm searches for mated minutiae in the full palmprint. The proposed palmprint matching algorithm has been evaluated on a latent-to-full palmprint database consisting of 446 latents and 12,489 background full prints. The matching results show a rank-1 identification accuracy of 79.4 percent, which is significantly higher than the 60.8 percent identification accuracy of a state-of-the-art latent palmprint matching algorithm on the same latent database. The average computation time of our algorithm for a single latent-to-full match is about 141 ms for a genuine match and 50 ms for an impostor match, on a Windows XP desktop system with a 2.2-GHz CPU and 1.00 GB of RAM. The computation time of our algorithm is an order of magnitude faster than a previously published state-of-the-art algorithm.
Emanuele, Vincent A; Panicker, Gitika; Gurbaxani, Brian M; Lin, Jin-Mann S; Unger, Elizabeth R
2012-01-01
The SELDI-TOF mass spectrometer's compact size and automated, high-throughput design have been attractive to clinical researchers, and the platform has seen steady use in biomarker studies. Despite new algorithms and preprocessing pipelines that have been developed to address reproducibility issues, visual inspection of the results of SELDI spectra preprocessing by the best algorithms still shows miscalled peaks and systematic sources of error. This suggests that there continue to be problems with SELDI preprocessing. In this work, we study the preprocessing of SELDI in detail and introduce improvements. While many algorithms, including the vendor-supplied software, can identify peak clusters of specific mass (or m/z) in groups of spectra with high specificity and low false discovery rate (FDR), the algorithms tend to underperform in estimating the exact prevalence and intensity of peaks in those clusters. Thus group differences that at first appear very strong are shown, after careful and laborious hand inspection of the spectra, to be less than significant. Here we introduce a wavelet/neural-network-based algorithm which mimics what a team of expert human users would call as peaks in each of several hundred spectra in a typical SELDI clinical study. The wavelet-denoising part of the algorithm optimally smoothes the signal in each spectrum according to an improved suite of signal processing algorithms previously reported (the LibSELDI toolbox under development). The neural-network part of the algorithm combines those results with the raw signal and a training dataset of expertly called peaks, to call peaks in a test set of spectra with approximately 95% accuracy. The new method was applied to data collected in a study of cervical mucus for the early detection of cervical cancer in HPV-infected women. The method shows promise in addressing the ongoing SELDI reproducibility issues.
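The wavelet-denoising stage can be illustrated compactly. The sketch below uses a standard soft-threshold rule; the wavelet family, decomposition level, and threshold are assumptions rather than the LibSELDI settings, and the neural-network peak caller is not reproduced.

```python
# Illustrative wavelet denoising of a one-dimensional spectrum.
import numpy as np
import pywt

def wavelet_denoise(spectrum: np.ndarray, wavelet="sym8", level=6) -> np.ndarray:
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # robust noise estimate
    thresh = sigma * np.sqrt(2 * np.log(spectrum.size))     # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: spectrum.size]
```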
Wong, Brian J. F.; Karmi, Koohyar; Devcic, Zlatko; McLaren, Christine E.; Chen, Wen-Pin
2013-01-01
Objectives The objectives of this study were to: 1) determine if a genetic algorithm in combination with morphing software can be used to evolve more attractive faces; and 2) evaluate whether this approach can be used as a tool to define or identify the attributes of the ideal attractive face. Study Design Basic research study incorporating focus group evaluations. Methods Digital images were acquired of 250 female volunteers (18–25 y). Randomly selected images were used to produce a parent generation (P) of 30 synthetic faces using morphing software. A focus group of 17 trained volunteers (18–25 y) then scored each face on an attractiveness scale ranging from 1 (unattractive) to 10 (attractive). A genetic algorithm was used to select 30 new pairs from the parent generation, and these were morphed using software to produce a new first generation (F1) of faces. The F1 faces were scored by the focus group, and the process was repeated for a total of four iterations of the algorithm. The algorithm mimics natural selection by using the attractiveness score as the selection pressure; the more attractive faces are more likely to morph. All five generations (P-F4) were then scored by three focus groups: a) surgeons (n = 12), b) cosmetology students (n = 44), and c) undergraduate students (n = 44). Morphometric measurements were made of 33 specific features on each of the 150 synthetic faces and correlated with attractiveness scores using univariate and multivariate analysis. Results The average facial attractiveness scores increased with each generation and were 3.66 (±0.60), 4.59 (±0.73), 5.50 (±0.62), 6.23 (±0.31), and 6.39 (±0.24) for the P and F1–F4 generations, respectively. Histograms of attractiveness score distributions show a significant shift in the skew of each curve toward more attractive faces with each generation. Univariate analysis identified nasal width, eyebrow arch height, and lip thickness as being significantly correlated with attractiveness scores. Multivariate analysis identified a similar collection of morphometric measures. No correlation with more commonly accepted measures such as the lengths of facial thirds or fifths was identified. When the images are examined as a montage (by generation), clear and distinct trends are identified: oval-shaped faces, distinctly arched eyebrows, and full lips predominate. Faces evolve to approximate the guidelines suggested by the classical canon. The F3- and F4-generation faces look profoundly similar. The statistical and qualitative analysis indicates that the algorithm and methodology succeed in generating successively more attractive faces. Conclusions The use of genetic algorithms in combination with morphing software and traditional focus-group-derived attractiveness scores can be used to evolve attractive synthetic faces. We have demonstrated that the evolution of attractive faces can be mimicked in software. Genetic algorithms and morphing provide a robust alternative to traditional approaches rooted in comparing attractiveness scores with a series of morphometric measurements in human subjects. PMID:18401273
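A toy version of the selection-and-morph loop, under the assumptions that a face is a feature vector, selection probability is proportional to attractiveness score, and morphing is modeled as averaging two parents. The scores are synthetic stand-ins for the focus-group ratings.

```python
# Toy genetic-algorithm generation step with score-proportional selection.
import numpy as np

rng = np.random.default_rng(4)

def next_generation(faces: np.ndarray, scores: np.ndarray, n_pairs=30) -> np.ndarray:
    p = scores / scores.sum()        # attractiveness score as selection pressure
    children = []
    for _ in range(n_pairs):
        i, j = rng.choice(len(faces), size=2, replace=False, p=p)
        children.append(0.5 * (faces[i] + faces[j]))   # "morph" = parent average
    return np.array(children)

faces = rng.normal(size=(30, 33))    # 33 morphometric features per face (toy)
scores = rng.uniform(1, 10, size=30) # focus-group scores (synthetic)
f1 = next_generation(faces, scores)  # one iteration: P -> F1
```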
Probabilistic Common Spatial Patterns for Multichannel EEG Analysis
Chen, Zhe; Gao, Xiaorong; Li, Yuanqing; Brown, Emery N.; Gao, Shangkai
2015-01-01
Common spatial patterns (CSP) is a well-known spatial filtering algorithm for multichannel electroencephalogram (EEG) analysis. In this paper, we cast the CSP algorithm in a probabilistic modeling setting. Specifically, probabilistic CSP (P-CSP) is proposed as a generic EEG spatio-temporal modeling framework that subsumes the CSP and regularized CSP algorithms. The proposed framework enables us to resolve the overfitting issue of CSP in a principled manner. We derive statistical inference algorithms that can alleviate the issue of local optima. In particular, an efficient algorithm based on eigendecomposition is developed for maximum a posteriori (MAP) estimation in the case of isotropic noise. For more general cases, a variational algorithm is developed for group-wise sparse Bayesian learning for the P-CSP model and for automatically determining the model size. The two proposed algorithms are validated on a simulated data set. Their practical efficacy is also demonstrated by successful applications to single-trial classifications of three motor imagery EEG data sets and by the spatio-temporal pattern analysis of one EEG data set recorded in a Stroop color naming task. PMID:26005228
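For orientation, classical (non-probabilistic) CSP, which the P-CSP framework subsumes, reduces to a generalized eigendecomposition of the two class covariance matrices; a minimal sketch, with the number of retained filters as an illustrative choice:

```python
# Classical CSP spatial filters via a generalized eigenproblem.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a: np.ndarray, trials_b: np.ndarray) -> np.ndarray:
    """trials_*: (n_trials, n_channels, n_samples); returns filters as rows."""
    def avg_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]  # normalized covariances
        return np.mean(covs, axis=0)
    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    # Solve ca w = lambda (ca + cb) w; extreme eigenvalues give the most
    # discriminative variance ratios between the two classes.
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    return vecs[:, np.r_[order[:3], order[-3:]]].T   # keep 3 filters per extreme
```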
Symmetric log-domain diffeomorphic Registration: a demons-based approach.
Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas
2008-01-01
Modern morphometric studies use non-linear image registration to compare anatomies and perform group analysis. Recently, log-Euclidean approaches have contributed to promote the use of such computational anatomy tools by permitting simple computations of statistics on a rather large class of invertible spatial transformations. In this work, we propose a non-linear registration algorithm perfectly fit for log-Euclidean statistics on diffeomorphisms. Our algorithm works completely in the log-domain, i.e. it uses a stationary velocity field. This implies that we guarantee the invertibility of the deformation and have access to the true inverse transformation. This also means that our output can be directly used for log-Euclidean statistics without relying on the heavy computation of the log of the spatial transformation. As is often desirable, our algorithm is symmetric with respect to the order of the input images. Furthermore, we use an alternate optimization approach related to Thirion's demons algorithm to provide a fast non-linear registration algorithm. First results show that our algorithm outperforms both the demons algorithm and the recently proposed diffeomorphic demons algorithm in terms of accuracy of the transformation while remaining computationally efficient.
A semi-supervised classification algorithm using the TAD-derived background as training data
NASA Astrophysics Data System (ADS)
Fan, Lei; Ambeau, Brittany; Messinger, David W.
2013-05-01
In general, spectral image classification algorithms fall into one of two categories: supervised and unsupervised. In unsupervised approaches, the algorithm automatically identifies clusters in the data without a priori information about those clusters (except perhaps the expected number of them). Supervised approaches require an analyst to identify training data to learn the characteristics of the clusters, such that all other pixels can then be classified into one of the pre-defined groups. The classification algorithm presented here is a semi-supervised approach based on the Topological Anomaly Detection (TAD) algorithm. The TAD algorithm defines background components based on a mutual k-nearest-neighbor graph model of the data, along with a spectral connected-components analysis. Here, the largest components produced by TAD are used as regions of interest (ROIs), or training data, for a supervised classification scheme. By combining those ROIs with a Gaussian Maximum Likelihood (GML) or a Minimum Distance to the Mean (MDM) algorithm, we are able to achieve a semi-supervised classification method. We test this classification algorithm against data collected by the HyMAP sensor over the Cooke City, MT area and the University of Pavia scene.
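A hedged sketch of a pipeline of this shape: a mutual k-nearest-neighbor graph over pixel spectra, connected components as candidate background/training regions, and minimum-distance-to-the-mean classification. The parameters and toy data are assumptions, not the TAD implementation.

```python
# Toy mutual-kNN graph + connected components + MDM classification.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(5)
pixels = rng.normal(size=(500, 20))            # 500 spectra, 20 bands (toy data)

knn = kneighbors_graph(pixels, n_neighbors=8, mode="connectivity")
mutual = knn.multiply(knn.T)                   # keep an edge only if reciprocal
n_comp, labels = connected_components(mutual, directed=False)

sizes = np.bincount(labels)
big = np.argsort(sizes)[-3:]                   # 3 largest components as ROIs
means = np.array([pixels[labels == c].mean(axis=0) for c in big])
dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
classes = dists.argmin(axis=1)                 # MDM assignment for every pixel
```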
ERIC Educational Resources Information Center
Muuro, Maina Elizaphan; Oboko, Robert; Wagacha, Waiganjo Peter
2016-01-01
In this paper we explore the impact of an intelligent grouping algorithm based on learners' collaborative competency when compared with (a) instructor based Grade Point Average (GPA) method level and (b) random method, on group outcomes and group collaboration problems in an online collaborative learning environment. An intelligent grouping…
Visual reconciliation of alternative similarity spaces in climate modeling
J Poco; A Dasgupta; Y Wei; William Hargrove; C.R. Schwalm; D.N. Huntzinger; R Cook; E Bertini; C.T. Silva
2015-01-01
Visual data analysis often requires grouping of data objects based on their similarity. In many application domains researchers use algorithms and techniques like clustering and multidimensional scaling to extract groupings from data. While extracting these groups using a single similarity criterion is relatively straightforward, comparing alternative criteria poses...
Hwang, I-Shyan
2017-01-01
The K-coverage configuration that guarantees coverage of each location by at least K sensors is highly popular and is extensively used to monitor diversified applications in wireless sensor networks. Long network lifetime and high detection quality are the essentials of such K-covered sleep-scheduling algorithms. However, the existing sleep-scheduling algorithms either incur high cost or cannot preserve the detection quality effectively. In this paper, the Pre-Scheduling-based K-coverage Group Scheduling (PSKGS) and Self-Organized K-coverage Scheduling (SKS) algorithms are proposed to settle the problems of the existing sleep-scheduling algorithms. Simulation results show that the pre-scheduling-based PSKGS approach enhances detection quality and network lifetime, whereas the self-organized SKS algorithm minimizes the computation and communication cost of the nodes and is thereby energy efficient. Besides, SKS outperforms PSKGS in terms of network lifetime and detection quality as it is self-organized. PMID:29257078
A fast parallel clustering algorithm for molecular simulation trajectories.
Zhao, Yutong; Sheong, Fu Kit; Sun, Jian; Sander, Pedro; Huang, Xuhui
2013-01-15
We implemented a GPU-powered parallel k-centers algorithm to perform clustering on the conformations of molecular dynamics (MD) simulations. The algorithm is up to two orders of magnitude faster than the CPU implementation. We tested our algorithm on four protein MD simulation datasets ranging from the small Alanine Dipeptide to a 370-residue Maltose Binding Protein (MBP). It is capable of grouping 250,000 conformations of the MBP into 4000 clusters within 40 seconds. To achieve this, we effectively parallelized the code on the GPU and utilize the triangle inequality of metric spaces. Furthermore, the algorithm's running time is linear with respect to the number of cluster centers. In addition, we found the triangle inequality to be less effective in higher dimensions and provide a mathematical rationale. Finally, using Alanine Dipeptide as an example, we show a strong correlation between cluster populations resulting from the k-centers algorithm and the underlying density. © 2012 Wiley Periodicals, Inc.
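The underlying k-centers scheme is the greedy farthest-point (Gonzalez) algorithm; a CPU sketch follows, omitting the GPU parallelization and the triangle-inequality pruning the paper uses for speed.

```python
# Greedy farthest-point k-centers clustering (serial reference sketch).
import numpy as np

def k_centers(X: np.ndarray, k: int, seed: int = 0):
    """Return center indices and, for each point, its nearest-center index."""
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(len(X)))]
    d = np.linalg.norm(X - X[centers[0]], axis=1)   # distance to nearest center
    assign = np.zeros(len(X), dtype=int)
    for c in range(1, k):
        centers.append(int(d.argmax()))             # farthest point = next center
        d_new = np.linalg.norm(X - X[centers[-1]], axis=1)
        closer = d_new < d
        assign[closer], d[closer] = c, d_new[closer]
    return centers, assign

conf = np.random.default_rng(1).normal(size=(10000, 30))  # toy "conformations"
centers, labels = k_centers(conf, k=50)
```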
Du, Jiaying; Gerdtman, Christer; Lindén, Maria
2018-04-06
Motion sensors such as MEMS gyroscopes and accelerometers are characterized by a small size, light weight, high sensitivity, and low cost. They are used in an increasing number of applications. However, they are easily influenced by environmental effects such as temperature change, shock, and vibration. Thus, signal processing is essential for minimizing errors and improving signal quality and system stability. The aim of this work is to investigate and present a systematic review of different signal error reduction algorithms that are used for MEMS gyroscope-based motion analysis systems for human motion analysis or have the potential to be used in this area. A systematic search was performed with the search engines/databases of the ACM Digital Library, IEEE Xplore, PubMed, and Scopus. Sixteen papers that focus on MEMS gyroscope-related signal processing and were published in journals or conference proceedings in the past 10 years were found and fully reviewed. Seventeen algorithms were categorized into four main groups: Kalman-filter-based algorithms, adaptive-based algorithms, simple filter algorithms, and compensation-based algorithms. The algorithms were analyzed and presented along with their characteristics such as advantages, disadvantages, and time limitations. A user guide to the most suitable signal processing algorithms within this area is presented.
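As a small example from the Kalman-filter-based group, the sketch below is a textbook two-state filter estimating an orientation angle and a gyro bias from a gyroscope rate and an accelerometer-derived angle; the noise constants are illustrative assumptions, not values from the reviewed papers.

```python
# Minimal 2-state Kalman filter: state = [angle, gyro bias].
import numpy as np

def kalman_step(x, P, gyro_rate, acc_angle, dt,
                q_angle=1e-3, q_bias=3e-5, r=0.03):
    angle, bias = x
    angle += dt * (gyro_rate - bias)            # predict with bias-corrected rate
    F = np.array([[1.0, -dt], [0.0, 1.0]])
    P = F @ P @ F.T + np.diag([q_angle, q_bias]) * dt
    y = acc_angle - angle                       # innovation from accelerometer
    S = P[0, 0] + r                             # innovation variance (H = [1, 0])
    K = P[:, 0] / S                             # Kalman gain
    x = np.array([angle, bias]) + K * y
    P = P - np.outer(K, P[0, :])
    return x, P

x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, gyro_rate=0.01, acc_angle=0.002, dt=0.01)
```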
Gerdtman, Christer
2018-01-01
Motion sensors such as MEMS gyroscopes and accelerometers are characterized by a small size, light weight, high sensitivity, and low cost. They are used in an increasing number of applications. However, they are easily influenced by environmental effects such as temperature change, shock, and vibration. Thus, signal processing is essential for minimizing errors and improving signal quality and system stability. The aim of this work is to investigate and present a systematic review of different signal error reduction algorithms that are used for MEMS gyroscope-based motion analysis systems for human motion analysis or have the potential to be used in this area. A systematic search was performed with the search engines/databases of the ACM Digital Library, IEEE Xplore, PubMed, and Scopus. Sixteen papers that focus on MEMS gyroscope-related signal processing and were published in journals or conference proceedings in the past 10 years were found and fully reviewed. Seventeen algorithms were categorized into four main groups: Kalman-filter-based algorithms, adaptive-based algorithms, simple filter algorithms, and compensation-based algorithms. The algorithms were analyzed and presented along with their characteristics such as advantages, disadvantages, and time limitations. A user guide to the most suitable signal processing algorithms within this area is presented. PMID:29642412
Algorithmic Mechanism Design of Evolutionary Computation.
Pei, Yan
2015-01-01
We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to definitely achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results present the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in evolutionary computation algorithm.
Algorithmic Mechanism Design of Evolutionary Computation
2015-01-01
We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to definitely achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results present the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in evolutionary computation algorithm. PMID:26257777
Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Celeste, Fabrizio; Gentile, Francesco; Mantero, Antonio; Montericcio, Vincenzo; Muratori, Manuela
2007-05-01
Large files produced by standard compression algorithms slow down the spread of digital and tele-echocardiography. We validated high-grade compression of echocardiographic video with the new Moving Picture Experts Group (MPEG)-4 algorithms in a multicenter study. Seven expert cardiologists blindly scored (5-point scale) 165 uncompressed and compressed 2-dimensional and color Doppler video clips, based on combined diagnostic content and image quality (uncompressed files as references). One digital video and 3 MPEG-4 algorithms (WM9, MV2, and DivX) were used, the latter at 3 compression levels (0%, 35%, and 60%). Compressed file sizes decreased from 12-83 MB to 0.03-2.3 MB (reduction ratios 1:1051 to 1:26). The mean SD of differences was 0.81 for intraobserver variability (uncompressed and digital video files). Compared with uncompressed files, only the DivX mean score at 35% (P = .04) and 60% (P = .001) compression was significantly reduced. At subcategory analysis, these differences were still significant for gray-scale and fundamental imaging but not for color or second-harmonic tissue imaging. Original image quality, session sequence, compression grade, and bitrate were all independent determinants of the mean score. Our study supports the use of MPEG-4 algorithms to greatly reduce echocardiographic file sizes, thus facilitating archiving and transmission. Quality evaluation studies should account for the many independent variables that affect image quality grading.
An Algorithm for the Mixed Transportation Network Design Problem
Liu, Xinyu; Chen, Qun
2016-01-01
This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving the mixed transportation network design problem (MNDP), which is generally expressed as a mathematical program with equilibrium constraints (MPEC). The upper level of the MNDP aims to optimize network performance via both the expansion of existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem. A group of variables (discrete/continuous) is fixed to optimize another group of variables (continuous/discrete) alternately; the problem is thus transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until it converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without a budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with a budget constraint, however, the result depends on the selection of initial values, which leads to different local optimal solutions. Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately. PMID:27626803
Pooseh, Shakoor; Bernhardt, Nadine; Guevara, Alvaro; Huys, Quentin J M; Smolka, Michael N
2018-02-01
Using simple mathematical models of choice behavior, we present a Bayesian adaptive algorithm to assess measures of impulsive and risky decision making. Practically, these measures are characterized by discounting rates and are used to classify individuals or population groups, to distinguish unhealthy behavior, and to predict developmental courses. However, a constant demand for improved tools to assess these constructs remains unanswered. The algorithm is based on trial-by-trial observations. At each step, a choice is made between immediate (certain) and delayed (risky) options. Then the current parameter estimates are updated by the likelihood of observing the choice, and the next offers are provided from the indifference point, so that they will acquire the most informative data based on the current parameter estimates. The procedure continues for a certain number of trials in order to reach a stable estimation. The algorithm is discussed in detail for the delay discounting case, and results from decision making under risk for gains, losses, and mixed prospects are also provided. Simulated experiments using prescribed parameter values were performed to justify the algorithm in terms of the reproducibility of its parameters for individual assessments, and to test the reliability of the estimation procedure in a group-level analysis. The algorithm was implemented as an experimental battery to measure temporal and probability discounting rates together with loss aversion, and was tested on a healthy participant sample.
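A minimal sketch of a trial-by-trial update of this kind, assuming hyperbolic discounting V = A/(1 + kD), a softmax choice rule, and a grid posterior over log k, with the next immediate offer placed at the current indifference point. The authors' exact models and battery are not reproduced here.

```python
# Toy Bayesian adaptive delay-discounting assessment on a parameter grid.
import numpy as np

log_k = np.linspace(-6, 1, 200)          # grid over log discount rate
post = np.ones_like(log_k) / log_k.size  # flat prior

def update(post, chose_delayed, A_now, A_del, D, beta=5.0):
    v_del = A_del / (1.0 + np.exp(log_k) * D)        # discounted value per grid point
    p_del = 1.0 / (1.0 + np.exp(-beta * (v_del - A_now)))  # softmax choice rule
    like = p_del if chose_delayed else 1.0 - p_del
    post = post * like
    return post / post.sum()

A_now, A_del, D = 10.0, 20.0, 30.0       # immediate offer, delayed offer, delay
for choice in [True, True, False]:       # observed choices (toy data)
    post = update(post, choice, A_now, A_del, D)
    k_hat = np.exp(np.sum(post * log_k))             # posterior point estimate of k
    A_now = A_del / (1.0 + k_hat * D)    # next immediate offer at indifference point
```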
Validation of two algorithms for managing children with a non-blanching rash.
Riordan, F Andrew I; Jones, Laura; Clark, Julia
2016-08-01
Paediatricians are concerned that children who present with a non-blanching rash (NBR) may have meningococcal disease (MCD). Two algorithms have been devised to help identify which children with an NBR have MCD. To evaluate the NBR algorithms' ability to identify children with MCD, the Newcastle-Birmingham-Liverpool (NBL) algorithm was applied retrospectively to three cohorts of children who had presented with NBRs. This algorithm was also piloted in four hospitals, and then used prospectively for 12 months in one hospital. The National Institute for Health and Care Excellence (NICE) algorithm was validated retrospectively using data from all cohorts. The cohorts included 625 children, 145 (23%) of whom had confirmed or probable MCD. Paediatricians empirically treated 324 (52%) children with antibiotics. The NBL algorithm identified all children with MCD and suggested treatment for a further 86 children (sensitivity 100%, specificity 82%). One child with MCD did not receive immediate antibiotic treatment, despite this being suggested by the algorithm. The NICE algorithm suggested 382 children (61%) who should be treated with antibiotics. This included 141 of the 145 children with MCD (sensitivity 97%, specificity 50%). These algorithms may help paediatricians identify children with MCD who present with NBRs. The NBL algorithm may be more specific than the NICE algorithm as it includes fewer features suggesting MCD. The only significant delay in treatment of MCD occurred when the algorithms were not followed.
Remainder Wheels and Group Theory
ERIC Educational Resources Information Center
Brenton, Lawrence
2008-01-01
Why should prospective elementary and high school teachers study group theory in college? This paper examines applications of abstract algebra to the familiar algorithm for converting fractions to repeating decimals, revealing ideas of surprising substance beneath an innocent facade.
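The remainder-cycle idea is easy to exhibit in code: long division tracks remainders, and the first repeated remainder closes the repetend. For denominators coprime to 10, the cycle length is the multiplicative order of 10 modulo the denominator, which is where the group theory enters.

```python
# Fraction -> repeating decimal via the remainder cycle of long division.
def repeating_decimal(a: int, b: int) -> str:
    digits, seen, r = [], {}, a % b
    while r and r not in seen:
        seen[r] = len(digits)        # position where this remainder first appeared
        r *= 10
        digits.append(str(r // b))   # next decimal digit
        r %= b
    if not r:                        # division terminated
        return f"{a // b}." + "".join(digits)
    i = seen[r]                      # repetend starts at the first repeat
    return f"{a // b}." + "".join(digits[:i]) + "(" + "".join(digits[i:]) + ")"

# repeating_decimal(1, 7) -> '0.(142857)'  (cycle length 6 = order of 10 mod 7)
```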
Waldman, John R.; Fabrizio, Mary C.
1994-01-01
Stock contribution studies of mixed-stock fisheries rely on the application of classification algorithms to samples of unknown origin. Although the performance of these algorithms can be assessed, there are no guidelines regarding decisions about including minor stocks, pooling stocks into regional groups, or sampling discrete substocks to adequately characterize a stock. We examined these questions for striped bass Morone saxatilis of the U.S. Atlantic coast by applying linear discriminant functions to meristic and morphometric data from fish collected from spawning areas. Some of our samples were from the Hudson and Roanoke rivers and four tributaries of the Chesapeake Bay. We also collected fish of mixed-stock origin from the Atlantic Ocean near Montauk, New York. Inclusion of the minor stock from the Roanoke River in the classification algorithm decreased the correct-classification rate, whereas grouping the Roanoke River and Chesapeake Bay stocks into a regional ("southern") group increased the overall resolution. The increased resolution was offset by our inability to obtain separate contribution estimates for the groups that were pooled. Although multivariate analysis of variance indicated significant differences among Chesapeake Bay substocks, increasing the number of substocks in the discriminant analysis decreased the overall correct-classification rate. Although the inclusion of one, two, three, or four substocks in the classification algorithm did not greatly affect the overall correct-classification rates, the specific combination of substocks significantly affected the relative contribution estimates derived from the mixed-stock sample. Future studies of this kind must balance the costs and benefits of including minor stocks and would profit from examination of the variation in discriminant characters among all Chesapeake Bay substocks.
Watson, Robert A
2014-08-01
To test the hypothesis that machine learning algorithms increase the predictive power to classify surgical expertise using surgeons' hand motion patterns. In 2012 at the University of North Carolina at Chapel Hill, 14 surgical attendings and 10 first- and second-year surgical residents each performed two bench model venous anastomoses. During the simulated tasks, the participants wore an inertial measurement unit on the dorsum of their dominant (right) hand to capture their hand motion patterns. The pattern from each bench model task performed was preprocessed into a symbolic time series and labeled as expert (attending) or novice (resident). The labeled hand motion patterns were processed and used to train a Support Vector Machine (SVM) classification algorithm. The trained algorithm was then tested for discriminative/predictive power against unlabeled (blinded) hand motion patterns from tasks not used in the training. The Lempel-Ziv (LZ) complexity metric was also measured from each hand motion pattern, with an optimal threshold calculated to separately classify the patterns. The LZ metric classified unlabeled (blinded) hand motion patterns into expert and novice groups with an accuracy of 70% (sensitivity 64%, specificity 80%). The SVM algorithm had an accuracy of 83% (sensitivity 86%, specificity 80%). The results confirmed the hypothesis. The SVM algorithm increased the predictive power to classify blinded surgical hand motion patterns into expert versus novice groups. With further development, the system used in this study could become a viable tool for low-cost, objective assessment of procedural proficiency in a competency-based curriculum.
NASA Astrophysics Data System (ADS)
Healy, John J.
2018-01-01
The linear canonical transforms (LCTs) are a parameterised group of linear integral transforms. The LCTs encompass a number of well-known transformations as special cases, including the Fourier transform, fractional Fourier transform, and the Fresnel integral. They relate the scalar wave fields at the input and output of systems composed of thin lenses and free space, along with other quadratic phase systems. In this paper, we perform a systematic search of all algorithms based on up to five stages of magnification, chirp multiplication and Fourier transforms. Based on that search, we propose a novel algorithm, for which we present numerical results. We compare the sampling requirements of three algorithms. Finally, we discuss some issues surrounding the composition of discrete LCTs.
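One familiar member of this family, the single-FFT Fresnel transform, already shows the chirp-multiplication/Fourier-transform composition the paper systematizes. Below is a one-dimensional sketch with illustrative sampling choices; this is not the paper's proposed algorithm.

```python
# Single-FFT Fresnel propagation: chirp multiply -> FFT -> chirp multiply.
import numpy as np

def fresnel_1fft(u, dx, wavelength, z):
    """Fresnel-propagate field samples u (spacing dx) a distance z."""
    n = u.size
    k = 2 * np.pi / wavelength
    x = (np.arange(n) - n // 2) * dx
    dx2 = wavelength * z / (n * dx)               # output sample spacing
    x2 = (np.arange(n) - n // 2) * dx2
    chirp_in = np.exp(1j * k * x**2 / (2 * z))    # input quadratic phase
    spec = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(u * chirp_in)))
    chirp_out = (np.exp(1j * k * x2**2 / (2 * z)) * np.exp(1j * k * z)
                 / (1j * wavelength * z))         # output chirp and prefactor
    return dx * chirp_out * spec, dx2

# Gaussian aperture, 10 um sampling, HeNe wavelength, 10 cm propagation.
x_in = (np.arange(512) - 256) * 10e-6
u0 = np.exp(-x_in**2 / (2 * (200e-6) ** 2))
u1, dx_out = fresnel_1fft(u0, dx=10e-6, wavelength=633e-9, z=0.1)
```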
A 1.375-approximation algorithm for sorting by transpositions.
Elias, Isaac; Hartman, Tzvika
2006-01-01
Sorting permutations by transpositions is an important problem in genome rearrangements. A transposition is a rearrangement operation in which a segment is cut out of the permutation and pasted in a different location. The complexity of this problem is still open and it has been a 10-year-old open problem to improve the best known 1.5-approximation algorithm. In this paper, we provide a 1.375-approximation algorithm for sorting by transpositions. The algorithm is based on a new upper bound on the diameter of 3-permutations. In addition, we present some new results regarding the transposition diameter: we improve the lower bound for the transposition diameter of the symmetric group and determine the exact transposition diameter of simple permutations.
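For concreteness, the rearrangement operation itself can be written in a few lines; `transposition(pi, i, j, k)` below cuts the segment pi[i:j] and pastes it immediately before position k, matching the cut-and-paste description above.

```python
# Apply one transposition to a permutation given as a list.
def transposition(pi: list, i: int, j: int, k: int) -> list:
    """Cut pi[i:j] and paste it before position k (requires i < j <= k)."""
    assert 0 <= i < j <= k <= len(pi)
    return pi[:i] + pi[j:k] + pi[i:j] + pi[k:]

# Example: move the segment [2, 3] past [4, 5].
assert transposition([1, 2, 3, 4, 5], 1, 3, 5) == [1, 4, 5, 2, 3]
```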
Space-variant restoration of images degraded by camera motion blur.
Sorel, Michal; Flusser, Jan
2008-02-01
We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.
Oden, Neal L; VanVeldhuisen, Paul C; Wakim, Paul G; Trivedi, Madhukar H; Somoza, Eugene; Lewis, Daniel
2011-09-01
In clinical trials of treatment for stimulant abuse, researchers commonly record both Time-Line Follow-Back (TLFB) self-reports and urine drug screen (UDS) results. To compare the power of self-report, qualitative (use vs. no use) UDS assessment, and various algorithms to generate self-report-UDS composite measures to detect treatment differences via t-test in simulated clinical trial data. We performed Monte Carlo simulations patterned in part on real data to model self-report reliability, UDS errors, dropout, informatively missing UDS reports, incomplete adherence to a urine donation schedule, temporal correlation of drug use, number of days in the study period, number of patients per arm, and distribution of drug-use probabilities. Investigated algorithms include maximum likelihood and Bayesian estimates, self-report alone, UDS alone, and several simple modifications of self-report (referred to here as ELCON algorithms) which eliminate perceived contradictions between it and UDS. Among the algorithms investigated, simple ELCON algorithms gave rise to the most powerful t-tests to detect mean group differences in stimulant drug use. Further investigation is needed to determine if simple, naïve procedures such as the ELCON algorithms are optimal for comparing clinical study treatment arms. But researchers who currently require an automated algorithm in scenarios similar to those simulated for combining TLFB and UDS to test group differences in stimulant use should consider one of the ELCON algorithms. This analysis continues a line of inquiry which could determine how best to measure outpatient stimulant use in clinical trials (NIDA. NIDA Monograph-57: Self-Report Methods of Estimating Drug Abuse: Meeting Current Challenges to Validity. NTIS PB 88248083. Bethesda, MD: National Institutes of Health, 1985; NIDA. NIDA Research Monograph 73: Urine Testing for Drugs of Abuse. NTIS PB 89151971. Bethesda, MD: National Institutes of Health, 1987; NIDA. NIDA Research Monograph 167: The Validity of Self-Reported Drug Use: Improving the Accuracy of Survey Estimates. NTIS PB 97175889. GPO 017-024-01607-1. Bethesda, MD: National Institutes of Health, 1997).
Bio-ALIRT biosurveillance detection algorithm evaluation.
Siegrist, David; Pavlin, J
2004-09-24
Early detection of disease outbreaks by a medical biosurveillance system relies on two major components: 1) the contribution of early and reliable data sources and 2) the sensitivity, specificity, and timeliness of biosurveillance detection algorithms. This paper describes an effort to assess leading detection algorithms by arranging a common challenge problem and providing a common data set. The objectives of this study were to determine whether automated detection algorithms can reliably and quickly identify the onset of natural disease outbreaks that are surrogates for possible terrorist pathogen releases, and do so at acceptable false-alert rates (e.g., once every 2-6 weeks). Historic de-identified data were obtained from five metropolitan areas over 23 months; these data included International Classification of Diseases, Ninth Revision (ICD-9) codes related to respiratory and gastrointestinal illness syndromes. An outbreak detection group identified and labeled two natural disease outbreaks in these data and provided them to analysts for training of detection algorithms. All outbreaks in the remaining test data were identified but not revealed to the detection groups until after their analyses. The algorithms established a probability of outbreak for each day's counts. The probability of outbreak was assessed as an "actual" alert at different false-alert rates. The best algorithms were able to detect all of the outbreaks at false-alert rates of one every 2-6 weeks. They were often able to detect outbreaks on the same day that human investigators had identified as the true start of the outbreak. Because minimal data exist for an actual biologic attack, determining how quickly an algorithm might detect such an attack is difficult. However, application of these algorithms in combination with other data-analysis methods to historic outbreak data indicates that biosurveillance techniques for analyzing syndrome counts can rapidly detect seasonal respiratory and gastrointestinal illness outbreaks. Further research is needed to assess the value of electronic data sources for predictive detection. In addition, simulations need to be developed and implemented to better characterize the size and type of biologic attack that can be detected by current methods by challenging them under different projected operational conditions.
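As a hedged sketch of the evaluation setup, not of any particular Bio-ALIRT algorithm, the following calibrates an alert threshold on outbreak-free historical scores so that the expected false-alert rate is roughly one alert every 4 weeks:

import numpy as np

def calibrate_threshold(scores, alerts_per_day=1.0 / 28):
    """Choose the alert threshold whose exceedance rate on outbreak-free
    historical scores matches the target false-alert rate (here, about
    one alert every 4 weeks)."""
    return np.quantile(scores, 1.0 - alerts_per_day)

rng = np.random.default_rng(0)
baseline = rng.beta(1, 20, size=700)   # simulated outbreak-free daily scores
threshold = calibrate_threshold(baseline)
print(0.6 > threshold)                 # a high score today -> raise an alert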
Recent advances in lossless coding techniques
NASA Astrophysics Data System (ADS)
Yovanof, Gregory S.
Current lossless techniques are reviewed with reference to both sequential data files and still images. Two major groups of sequential algorithms, dictionary and statistical techniques, are discussed. In particular, attention is given to Lempel-Ziv coding, Huffman coding, and arithmetic coding. The subject of lossless compression of imagery is briefly discussed. Finally, examples of practical implementations of lossless algorithms and some simulation results are given.
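As a brief illustration of one of the statistical techniques mentioned, here is a textbook Huffman code construction built on Python's heapq (a sketch, not one of the implementations surveyed):

import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table for the symbols in `data`."""
    heap = [[freq, idx, sym, None, None]
            for idx, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    idx = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        heapq.heappush(heap, [lo[0] + hi[0], idx, None, lo, hi])
        idx += 1
    codes = {}
    def walk(node, prefix):
        if node[2] is not None:          # leaf: record its code word
            codes[node[2]] = prefix or "0"
        else:
            walk(node[3], prefix + "0")
            walk(node[4], prefix + "1")
    walk(heap[0], "")
    return codes

print(huffman_codes("abracadabra"))  # frequent symbols get shorter codes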
ERIC Educational Resources Information Center
Wiles, Clyde
Two questions were investigated in this study: (1) How did the computational proficiency of sixth graders who had one year's experience with Developing Mathematical Processes (DMP) materials compare with an equivalent group of students who used the usual textbook program; and (2) What occurs when sixth graders study algorithms as sequences of rule…
Age group classification and gender detection based on forced expiratory spirometry.
Cosgun, Sema; Ozbek, I Yucel
2015-08-01
This paper investigates the utility of the forced expiratory spirometry (FES) test with efficient machine learning algorithms for the purpose of gender detection and age group classification. The proposed method has three main stages: feature extraction, training of the models, and detection. In the first stage, features are extracted from the volume-time curve and the expiratory flow-volume loop obtained from the FES test. In the second stage, probabilistic models for each gender and age group are constructed by training Gaussian mixture models (GMMs) and a support vector machine (SVM). In the final stage, the gender (or age group) of a test subject is estimated using the trained GMM (or SVM) model. The method was evaluated on a large database of 4571 subjects. The experimental results show that the average correct classification rate of both the GMM and SVM methods based on the FES test is more than 99.3% and 96.8% for gender and age group classification, respectively.
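A minimal sketch of the GMM stage under stated assumptions: one mixture model is fitted per class, and a test subject is assigned to the class whose model scores it highest. The spirometry feature extraction is not reproduced, and the data below are synthetic:

import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_gmms(X, y, n_components=4):
    """Fit one Gaussian mixture model per class label."""
    return {label: GaussianMixture(n_components, random_state=0).fit(X[y == label])
            for label in np.unique(y)}

def predict_class(models, X):
    """Assign each row of X to the class whose GMM gives the highest
    log-likelihood."""
    labels = sorted(models)
    scores = np.column_stack([models[m].score_samples(X) for m in labels])
    return np.array(labels)[np.argmax(scores, axis=1)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(2, 1, (50, 3))])
y = np.repeat(["female", "male"], 50)
print(predict_class(fit_class_gmms(X, y), X[:3]))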
Processing Images of Craters for Spacecraft Navigation
NASA Technical Reports Server (NTRS)
Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.
2009-01-01
A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in the image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
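A hedged sketch of steps 1-4 using standard OpenCV primitives; the flight algorithm's crater-rim edge selection and the refinement of ellipses in the image domain (steps 2 and 5) are more involved than shown here:

import cv2

def detect_crater_ellipses(gray_image):
    """gray_image: 8-bit single-channel array. Detect edges, group
    connected edge points into contours, and fit an ellipse to each
    sufficiently long group (cf. steps 1-4 in the text)."""
    edges = cv2.Canny(gray_image, 50, 150)               # step 1: edges
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)  # step 3: grouping
    ellipses = []
    for c in contours:
        if len(c) >= 20:               # fitEllipse needs at least 5 points
            ellipses.append(cv2.fitEllipse(c))  # step 4: ((cx, cy), axes, angle)
    return ellipses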
Automatic microseismic event picking via unsupervised machine learning
NASA Astrophysics Data System (ADS)
Chen, Yangkang
2018-01-01
Effective and efficient arrival picking plays an important role in microseismic and earthquake data processing and imaging. Widely used short-term-average/long-term-average ratio (STA/LTA) arrival-picking algorithms suffer from sensitivity to moderate-to-strong random ambient noise. To make state-of-the-art arrival-picking approaches effective, microseismic data must first be pre-processed, for example by removing a sufficient amount of noise, and then analysed by the arrival pickers. To overcome the noise issue in arrival picking for weak microseismic or earthquake events, I leverage machine learning techniques to help recognize seismic waveforms in microseismic or earthquake data. Because supervised machine learning algorithms depend on large volumes of well-designed training data, I utilize an unsupervised machine learning algorithm to cluster the time samples into two groups, that is, waveform points and non-waveform points. The fuzzy clustering algorithm has been demonstrated to be effective for this purpose. A group of synthetic, real microseismic, and earthquake data sets with different levels of complexity shows that the proposed method is much more robust than the state-of-the-art STA/LTA method in picking microseismic events, even in the case of moderately strong background noise.
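For reference, a minimal version of the STA/LTA baseline the paper compares against (window lengths are illustrative; the proposed fuzzy-clustering picker is not reproduced here):

import numpy as np

def sta_lta(trace, nsta=20, nlta=200):
    """Short-term-average / long-term-average ratio of squared amplitude;
    peaks in the ratio mark candidate arrivals."""
    energy = np.asarray(trace, dtype=float) ** 2
    sta = np.convolve(energy, np.ones(nsta) / nsta, mode="same")
    lta = np.convolve(energy, np.ones(nlta) / nlta, mode="same") + 1e-12
    return sta / lta

rng = np.random.default_rng(0)
trace = rng.normal(0, 1, 2000)
trace[1000:1100] += rng.normal(0, 8, 100)   # inject an event
print(sta_lta(trace).argmax())              # index near the event onset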
Automated detection of diabetic retinopathy on digital fundus images.
Sinthanayothin, C; Boyce, J F; Williamson, T H; Cook, H L; Mensah, E; Lal, S; Usher, D
2002-02-01
The aim was to develop an automated screening system to analyse digital colour retinal images for important features of non-proliferative diabetic retinopathy (NPDR). High performance pre-processing of the colour images was performed. Previously described automated image analysis systems were used to detect major landmarks of the retinal image (optic disc, blood vessels and fovea). Recursive region growing segmentation algorithms combined with the use of a new technique, termed a 'Moat Operator', were used to automatically detect features of NPDR. These features included haemorrhages and microaneurysms (HMA), which were treated as one group, and hard exudates as another group. Sensitivity and specificity data were calculated by comparison with an experienced fundoscopist. The algorithm for exudate recognition was applied to 30 retinal images of which 21 contained exudates and nine were without pathology. The sensitivity and specificity for exudate detection were 88.5% and 99.7%, respectively, when compared with the ophthalmologist. HMA were present in 14 retinal images. The algorithm achieved a sensitivity of 77.5% and specificity of 88.7% for detection of HMA. Fully automated computer algorithms were able to detect hard exudates and HMA. This paper presents encouraging results in automatic identification of important features of NPDR.
NASA Astrophysics Data System (ADS)
Avery, Patrick; Zurek, Eva
2017-04-01
A new algorithm, RANDSPG, that can be used to generate trial crystal structures with specific space groups and compositions is described. The program has been designed for systems where the atoms are independent of one another, and it is therefore primarily suited to inorganic systems. The structures that are generated adhere to user-defined constraints such as the lattice shape and size, the stoichiometry, the set of space groups to be generated, and factors that influence the minimum interatomic separations. In addition, the user can optionally specify whether the most general Wyckoff position is to be occupied, or constrain selected atoms to specific Wyckoff positions. Extensive testing indicates that the algorithm is efficient and reliable. The library is lightweight, portable, dependency-free and is published under a license recognized by the Open Source Initiative. A web interface for the algorithm is publicly accessible at http://xtalopt.openmolecules.net/randSpg/randSpg.html. RANDSPG has also been interfaced with the XTALOPT evolutionary algorithm for crystal structure prediction, and it is illustrated that the use of symmetric lattices in the first generation of randomly created individuals decreases the number of structures that need to be optimized to find the global energy minimum.
Annotating image ROIs with text descriptions for multimodal biomedical document retrieval
NASA Astrophysics Data System (ADS)
You, Daekeun; Simpson, Matthew; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.
2013-01-01
Regions of interest (ROIs) that are pointed to by overlaid markers (arrows, asterisks, etc.) in biomedical images are expected to contain more important and relevant information than other regions for biomedical article indexing and retrieval. We have developed several algorithms that localize and extract ROIs by recognizing markers on images. Cropped ROIs then need to be annotated with the content that describes them best. In most cases accurate textual descriptions of the ROIs can be found in figure captions, and these need to be combined with image ROIs for annotation. The annotated ROIs can then be used to, for example, train classifiers that separate ROIs into known categories (medical concepts), or to build visual ontologies, for indexing and retrieval of biomedical articles. We propose an algorithm that pairs visual and textual ROIs extracted from images and figure captions, respectively. This algorithm, based on dynamic time warping (DTW), clusters recognized pointers into groups, each of which contains pointers with identical visual properties (shape, size, color, etc.). A rule-based matching algorithm then finds the best matching group for each textual ROI mention. Our method yields a precision of 96% and a recall of 79% when ground-truth textual ROI data are used.
NASA Astrophysics Data System (ADS)
Meng, Luming; Sheong, Fu Kit; Zeng, Xiangze; Zhu, Lizhe; Huang, Xuhui
2017-07-01
Constructing Markov state models from large-scale molecular dynamics simulation trajectories is a promising approach to dissect the kinetic mechanisms of complex chemical and biological processes. Combined with transition path theory, Markov state models can be applied to identify all pathways connecting any conformational states of interest. However, the identified pathways can be too complex to comprehend, especially for multi-body processes where numerous parallel pathways with comparable flux probability often coexist. Here, we have developed a path lumping method to group these parallel pathways into metastable path channels for analysis. We define the similarity between two pathways as the intercrossing flux between them and then apply the spectral clustering algorithm to lump these pathways into groups. We demonstrate the power of our method by applying it to two systems: a 2D-potential consisting of four metastable energy channels and the hydrophobic collapse process of two hydrophobic molecules. In both cases, our algorithm successfully reveals the metastable path channels. We expect this path lumping algorithm to be a promising tool for revealing unprecedented insights into the kinetic mechanisms of complex multi-body processes.
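A hedged sketch of the lumping step: given a symmetric matrix of pairwise pathway similarities (intercrossing fluxes), spectral clustering groups the pathways into channels. The matrix values and the channel count below are illustrative:

import numpy as np
from sklearn.cluster import SpectralClustering

# similarity[i, j]: intercrossing flux between pathways i and j (symmetric).
similarity = np.array([[1.0, 0.9, 0.1, 0.0],
                       [0.9, 1.0, 0.2, 0.1],
                       [0.1, 0.2, 1.0, 0.8],
                       [0.0, 0.1, 0.8, 1.0]])

channels = SpectralClustering(n_clusters=2,
                              affinity="precomputed",
                              random_state=0).fit_predict(similarity)
print(channels)  # e.g. [0 0 1 1]: two metastable path channels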
A genetic-based algorithm for personalized resistance training
Kiely, J; Suraci, B; Collins, DJ; de Lorenzo, D; Pickering, C; Grimaldi, KA
2016-01-01
Association studies have identified dozens of genetic variants linked to training responses and sport-related traits. However, no intervention studies utilizing the idea of personalised training based on an athlete's genetic profile have been conducted. Here we propose an algorithm that allows greater gains in response to high- or low-intensity resistance training programs by predicting an athlete's potential for the development of power and endurance qualities with a panel of 15 performance-associated gene polymorphisms. To develop and validate such an algorithm we performed two studies in independent cohorts of male athletes (study 1: athletes from different sports (n = 28); study 2: soccer players (n = 39)). In both studies athletes completed an eight-week high- or low-intensity resistance training program, which either matched or mismatched their individual genotype. Two variables of explosive power and aerobic fitness, as measured by the countermovement jump (CMJ) and aerobic 3-min cycle test (Aero3), were assessed before and after the 8 weeks of resistance training. In study 1, the athletes from the matched groups (i.e. high-intensity trained with power genotype or low-intensity trained with endurance genotype) significantly improved their results in the CMJ (P = 0.0005) and Aero3 (P = 0.0004). In contrast, athletes from the mismatched group (i.e. high-intensity trained with endurance genotype or low-intensity trained with power genotype) demonstrated non-significant improvements in the CMJ (P = 0.175) and less prominent results in Aero3 (P = 0.0134). In study 2, soccer players from the matched group also demonstrated significantly greater (P < 0.0001) performance changes in both tests compared to the mismatched group. Among non- or low responders of both studies, 82% of athletes (both for CMJ and Aero3) were from the mismatched group (P < 0.0001). Our results indicate that matching the individual's genotype with the appropriate training modality leads to more effective resistance training. The developed algorithm may be used to guide individualised resistance-training interventions. PMID:27274104
Recognizing Age-Separated Face Images: Humans and Machines
Yadav, Daksha; Singh, Richa; Vatsa, Mayank; Noore, Afzel
2014-01-01
Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging and incorporating the findings can help in designing effective algorithms. Such a study has two components - facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. On the other hand, age-separated face recognition consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues are utilized by humans for estimating the age of people belonging to various age groups along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how various facial regions such as binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) face as a global feature, is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system in the young image as probe scenario. PMID:25474200
NASA Technical Reports Server (NTRS)
Brumfield, J. O.; Bloemer, H. H. L.; Campbell, W. J.
1981-01-01
Two unsupervised classification procedures for analyzing Landsat data used to monitor land reclamation in a surface mining area in east central Ohio are compared for agreement with data collected from the corresponding locations on the ground. One procedure is based on a traditional unsupervised-clustering/maximum-likelihood algorithm sequence that assumes spectral groupings in the Landsat data in n-dimensional space; the other is based on a nontraditional unsupervised-clustering/canonical-transformation/clustering algorithm sequence that not only assumes spectral groupings in n-dimensional space but also includes an additional feature-extraction technique. It is found that the nontraditional procedure provides an appreciable improvement in spectral groupings and apparently increases the level of accuracy in the classification of land cover categories.
Berman, Daniel S; Abidov, Aiden; Kang, Xingping; Hayes, Sean W; Friedman, John D; Sciammarella, Maria G; Cohen, Ishac; Gerlach, James; Waechter, Parker B; Germano, Guido; Hachamovitch, Rory
2004-01-01
Recently, a 17-segment model of the left ventricle has been recommended as an optimally weighted approach for interpreting myocardial perfusion single photon emission computed tomography (SPECT). Methods to convert databases from previous 20- to new 17-segment data and criteria for abnormality for the 17-segment scores are needed. Initially, for derivation of the conversion algorithm, 65 patients were studied (algorithm population) (pilot group, n = 28; validation group, n = 37). Three conversion algorithms were derived: algorithm 1, which used mid, distal, and apical scores; algorithm 2, which used distal and apical scores alone; and algorithm 3, which used maximal scores of the distal septal, lateral, and apical segments in the 20-segment model for 3 corresponding segments of the 17-segment model. The prognosis population comprised 16,020 consecutive patients (mean age, 65 +/- 12 years; 41% women) who had exercise or vasodilator stress technetium 99m sestamibi myocardial perfusion SPECT and were followed up for 2.1 +/- 0.8 years. In this population, 17-segment scores were derived from 20-segment scores by use of algorithm 2, which demonstrated the best agreement with expert 17-segment reading in the algorithm population. The prognostic value of the 20- and 17-segment scores was compared by converting the respective summed scores into percent myocardium abnormal. Conversion algorithm 2 was found to be highly concordant with expert visual analysis by the 17-segment model (r = 0.982; kappa = 0.866) in the algorithm population. In the prognosis population, 456 cardiac deaths occurred during follow-up. When the conversion algorithm was applied, extent and severity of perfusion defects were nearly identical by 20- and derived 17-segment scores. The receiver operating characteristic curve areas by 20- and 17-segment perfusion scores were identical for predicting cardiac death (both 0.77 +/- 0.02, P = not significant). The optimal prognostic cutoff value for either 20- or derived 17-segment models was confirmed to be 5% myocardium abnormal, corresponding to a summed stress score greater than 3. Of note, the 17-segment model demonstrated a trend toward fewer mildly abnormal scans and more normal and severely abnormal scans. An algorithm for conversion of 20-segment perfusion scores to 17-segment scores has been developed that is highly concordant with expert visual analysis by the 17-segment model and provides nearly identical prognostic information. This conversion model may provide a mechanism for comparison of studies analyzed by the 17-segment system with previous studies analyzed by the 20-segment approach.
Quality control algorithms for rainfall measurements
NASA Astrophysics Data System (ADS)
Golz, Claudia; Einfalt, Thomas; Gabella, Marco; Germann, Urs
2005-09-01
One of the basic requirements for the scientific use of rain data from raingauges and ground and space radars is data quality control. Rain data could be used more intensively in many fields of activity (meteorology, hydrology, etc.) if the achievable data quality could be improved. This depends on the data quality delivered by the measuring devices and on the data quality enhancement procedures. To get an overview of the existing algorithms, a literature review was carried out and a literature pool compiled. The diverse algorithms were evaluated against the VOLTAIRE objectives and sorted into different groups. To test the chosen algorithms, an algorithm pool was established in which the software is collected. A large part of the work presented here was implemented within the scope of the EU project VOLTAIRE (Validation of multisensor precipitation fields and numerical modeling in Mediterranean test sites).
FAST-PT: a novel algorithm to calculate convolution integrals in cosmological perturbation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.
2016-09-01
We present a novel algorithm, FAST-PT, for performing convolution or mode-coupling integrals that appear in nonlinear cosmological perturbation theory. The algorithm uses several properties of gravitational structure formation, namely the locality of the dark matter equations and the scale invariance of the problem, as well as Fast Fourier Transforms to describe the input power spectrum as a superposition of power laws. This yields extremely fast performance, enabling mode-coupling integral computations fast enough to embed in Markov Chain Monte Carlo parameter estimation. We describe the algorithm and demonstrate its application to calculating nonlinear corrections to the matter power spectrum, including one-loop standard perturbation theory and the renormalization group approach. We also describe our public code (in Python) that implements this algorithm. The code, along with a user manual and example implementations, is available at https://github.com/JoeMcEwen/FAST-PT.
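A hedged numerical sketch of the central idea, expanding a log-uniformly sampled power spectrum into complex power laws with an FFT; the mode-coupling integrals, windowing, and bias choices of the actual code are omitted, and all symbols are illustrative:

import numpy as np

# Sample P(k) on a log-uniform grid: k_n = k0 * exp(n * dlnk).
N, k0, dlnk, nu = 64, 1e-3, 0.2, -1.5
k = k0 * np.exp(dlnk * np.arange(N))
P = k / (1.0 + (k / 0.1) ** 2) ** 2          # toy input spectrum

# Coefficients of P(k) = (k/k0)^nu * sum_m c_m (k/k0)^(i*eta_m),
# with eta_m = 2*pi*m / (N*dlnk); nu is a tunable bias exponent.
c = np.fft.fft(P * (k / k0) ** (-nu)) / N
eta = 2.0 * np.pi * np.fft.fftfreq(N, d=dlnk)

# Reconstruct P(k) from its power-law expansion and check it round-trips.
expansion = (c[:, None] * (k / k0) ** (1j * eta[:, None])).sum(axis=0)
recon = (k / k0) ** nu * expansion.real
print(np.allclose(recon, P))  # True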
Elements of an algorithm for optimizing a parameter-structural neural network
NASA Astrophysics Data System (ADS)
Mrówczyńska, Maria
2016-06-01
The processing of information provided by measurement results is one of the most important components of geodetic technologies. The dynamic development of this field improves on classic numerical algorithms in settings where analytical solutions are difficult to achieve. Algorithms based on artificial intelligence in the form of artificial neural networks, including the topology of connections between neurons, have become an important instrument for processing and modelling data. This concept results from the integration of neural networks and parameter optimization methods, and makes it possible to avoid having to define the structure of a network arbitrarily. This kind of extension of the training process is exemplified by the Group Method of Data Handling (GMDH) algorithm, which belongs to the class of evolutionary algorithms. The article presents a GMDH-type network used for modelling deformations of the geometrical axis of a steel chimney during its operation.
Optimization of wireless sensor networks based on chicken swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Wang, Qingxi; Zhu, Lihua
2017-05-01
In order to reduce the energy consumption of wireless sensor networks and improve network lifetime, a clustering routing protocol for wireless sensor networks based on the chicken swarm optimization algorithm is proposed. Building on the LEACH protocol, cluster formation and cluster-head selection are improved using the chicken swarm optimization algorithm; chickens that fall into a local optimum have their positions updated by Levy flight, which enhances population diversity and preserves the global search capability of the algorithm. By making balanced use of the network nodes, the new protocol avoids the premature death of intensively used nodes and improves the survival time of the wireless sensor network. Simulation experiments show that the protocol outperforms both the LEACH protocol and a clustering routing protocol based on particle swarm optimization in terms of energy consumption.
A quantum causal discovery algorithm
NASA Astrophysics Data System (ADS)
Giarmatzi, Christina; Costa, Fabio
2018-03-01
Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.
Schmidt, Barbara; Roberts, Robin S; Whyte, Robin K; Asztalos, Elizabeth V; Poets, Christian; Rabi, Yacov; Solimano, Alfonso; Nelson, Harvey
2014-10-01
Our objective was to compare oxygen saturations as displayed to caregivers on offset pulse oximeters in the 2 groups of the Canadian Oxygen Trial. In 5 double-blind randomized trials of oxygen saturation targeting, displayed saturations between 88% and 92% were offset by 3% above or below the true values but returned to true values below 84% and above 96%. During the transition, displayed values remained static at 96% in the lower and at 84% in the higher target group during a 3% change in true saturations. In contrast, displayed values changed rapidly from 88% to 84% in the lower and from 92% to 96% in the higher target group during a 1% change in true saturations. We plotted the distributions of median displayed saturations on days with >12 hours of supplemental oxygen in 1075 Canadian Oxygen Trial participants to reconstruct what caregivers observed at the bedside. The oximeter masking algorithm was associated with an increase in both the stability and the instability of displayed saturations during the transition between offset and true displayed values at opposite ends of the 2 target ranges. Caregivers maintained saturations at lower displayed values in the higher than in the lower target group. This differential management reduced the separation between the median true saturations in the 2 groups by approximately 3.5%. The design of the oximeter masking algorithm may have contributed to the smaller-than-expected separation between true saturations in the 2 study groups of recent saturation-targeting trials in extremely preterm infants.
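One plausible reading of the masking algorithm, reconstructed from the description above as a piecewise-linear map from true to displayed saturation for each target group; the breakpoints are inferred, so this is an illustration rather than the trial's specification:

import numpy as np

def displayed_saturation(true_sat, group):
    """Piecewise-linear reading of the masking scheme: a 3% offset inside
    the 88-92% displayed band, true values shown below 84% and above 96%,
    a static plateau on one side of the band, and a rapid 4% jump over a
    1% true change on the other (breakpoints inferred, not official)."""
    if group == "lower":   # lower target group: displayed = true + 3 mid-range
        xp, fp = [0, 84, 85, 93, 96, 100], [0, 84, 88, 96, 96, 100]
    else:                  # higher target group: displayed = true - 3 mid-range
        xp, fp = [0, 84, 87, 95, 96, 100], [0, 84, 84, 92, 96, 100]
    return np.interp(true_sat, xp, fp)

print(displayed_saturation([84, 85, 90, 94, 97], "lower"))
# [84. 88. 93. 96. 97.]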
Effect of patient selection method on provider group performance estimates.
Thorpe, Carolyn T; Flood, Grace E; Kraft, Sally A; Everett, Christine M; Smith, Maureen A
2011-08-01
Performance measurement at the provider group level is increasingly advocated, but different methods for selecting patients when calculating provider group performance have received little evaluation. We compared 2 currently used methods according to the characteristics of the patients selected and the impact on performance estimates. We analyzed Medicare claims data for fee-for-service beneficiaries with diabetes ever seen at an academic multispecialty physician group in 2003 to 2004. We examined sample size, sociodemographics, clinical characteristics, and receipt of recommended diabetes monitoring in 2004 for the groups of patients selected using 2 methods implemented in large-scale performance initiatives: the Plurality Provider Algorithm and the Diabetes Care Home method. We examined differences among discordantly assigned patients to determine evidence for differential selection regarding these measures. Fewer patients were selected under the Diabetes Care Home method (n=3558) than under the Plurality Provider Algorithm (n=4859). Compared with the Plurality Provider Algorithm, the Diabetes Care Home method preferentially selected patients who were female, not entitled because of disability, older, more likely to have hypertension, less likely to have kidney disease and peripheral vascular disease, and with lower levels of predicted utilization. Diabetes performance was higher under the Diabetes Care Home method, with 67% versus 58% receiving ≥1 A1c test, 70% versus 65% receiving ≥1 low-density lipoprotein (LDL) test, and 38% versus 37% receiving an eye examination. The method used to select patients when calculating provider group performance may affect patient case mix and estimated performance levels, and warrants careful consideration when comparing performance estimates.
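For concreteness, the core step of the Plurality Provider Algorithm, as its name suggests, can be sketched as follows (data structures and tie handling are illustrative):

from collections import Counter

def assign_by_plurality(visits):
    """visits: list of provider-group IDs, one per claim/visit.
    Returns the group with the most visits (ties broken by first
    occurrence)."""
    return Counter(visits).most_common(1)[0][0]

print(assign_by_plurality(["A", "B", "A", "C", "A"]))  # 'A'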
Traffic model for the satellite component of UMTS
NASA Technical Reports Server (NTRS)
Hu, Y. F.; Sheriff, R. E.
1995-01-01
An algorithm for traffic volume estimation for satellite mobile communications systems has been developed. This algorithm makes use of worldwide databases for demographic and economic data. In order to provide for such an estimation, the effects of competing services have been considered so that likely market demand can be forecasted. Different user groups of the predicted market have been identified according to expectations in the quality of services and mobility requirement. The number of users for different user groups are calculated taking into account the gross potential market, the penetration rate of the identified services and the profitability to provide such services via satellite.
Architectural mutation and leaf form, for the palmate series.
White, D A
2005-07-21
Palmate leaf form occurs in both the ferns and the angiosperms. The palmate leaf form, and its variants, is present in distantly separated clades within both ferns and angiosperms. There tend not to be intermediate forms linking these palmate leaves to other leaf forms within the taxonomic groups in question. The recurrence of homoplasious leaf forms in separate taxonomic groups could be a consequence of the algorithm-like mode of leaf growth. Leaves develop through the reiteration of modular units. It is probable that the homoplasious leaf forms in different taxa are derived independently through re-combinations of the parameters in the basic leaf form development algorithm.
[Prevention of gastrointestinal bleeding in patients with advanced burns].
Vagner, D O; Krylov, K M; Verbitsky, V G; Shlyk, I V
2018-01-01
To reduce the incidence of gastrointestinal bleeding in patients with advanced burns by developing a prophylactic algorithm. The study consisted of a retrospective group of 488 patients with thermal burns of grade II-III over 20% of the body surface area and a prospective group of 135 patients with similar thermal trauma. Standard clinical and laboratory examination was applied. Instrumental survey included fibrogastroduodenoscopy, endoscopic pH-metry and invasive volumetric monitoring (PiCCO plus). Statistical processing was carried out with Microsoft Office Excel 2007 and IBM SPSS 20.0. The new algorithm significantly decreased the incidence of gastrointestinal bleeding (p<0.001) and the mortality rate (p=0.006) in patients with advanced burns.
Caraco, Y; Blotnick, S; Muszkat, M
2008-03-01
Warfarin anticoagulation effect is characterized by marked variability, some of which has been attributed to CYP2C9 polymorphisms. This study prospectively examines whether a priori knowledge of CYP2C9 genotype may improve warfarin therapy. Patients were randomly assigned to receive warfarin by a validated algorithm ("control", 96 patients) or CYP2C9 genotype-adjusted algorithms ("study", 95 patients). The first therapeutic international normalized ratio and stable anticoagulation were reached 2.73 and 18.1 days earlier in the study group, respectively (P<0.001). The faster rate of initial anticoagulation was driven by a 28% higher daily dose in the study group (P<0.001). Study group patients spent more time within the therapeutic range (80.4 vs 63.4%, respectively, P<0.001) and experienced less minor bleeding (3.2 vs 12.5%, P<0.02, respectively). In conclusion, CYP2C9 genotype-guided warfarin therapy is more efficient and safer than the "average-dose" protocol. Future research should focus on construction of algorithms that incorporate other polymorphisms (VKORC1), host factors, and environmental influences.
Analysing the Costs of Integrated Care: A Case on Model Selection for Chronic Care Purposes
Sánchez-Pérez, Inma; Ibern, Pere; Coderch, Jordi; Inoriza, José María
2016-01-01
Background: The objective of this study is to investigate whether the algorithm proposed by Manning and Mullahy, a consolidated health economics procedure, can also be used to estimate individual costs for different groups of healthcare services in the context of integrated care. Methods: A cross-sectional study focused on the population of the Baix Empordà (Catalonia-Spain) for the year 2012 (N = 92,498 individuals). A set of individual cost models as a function of sex, age and morbidity burden were adjusted and individual healthcare costs were calculated using a retrospective full-costing system. The individual morbidity burden was inferred using the Clinical Risk Groups (CRG) patient classification system. Results: Depending on the characteristics of the data, and according to the algorithm criteria, the choice of model was a linear model on the log of costs or a generalized linear model with a log link. We checked for goodness of fit, accuracy, linear structure and heteroscedasticity for the models obtained. Conclusion: The proposed algorithm identified a set of suitable cost models for the distinct groups of services integrated care entails. The individual morbidity burden was found to be indispensable when allocating appropriate resources to targeted individuals. PMID:28316542
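A hedged sketch of the two candidate forms the Manning and Mullahy procedure chooses between: a linear model on log costs versus a generalized linear model with a log link (a Gamma family is shown). The data and variable names are placeholders, a recent statsmodels API is assumed, and the procedure's residual-based diagnostic tests are not reproduced:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
age = rng.uniform(20, 90, 500)
X = sm.add_constant(age)
cost = np.exp(1.0 + 0.02 * age) * rng.gamma(2.0, 0.5, 500)  # skewed costs

# Candidate 1: linear model on the log of costs.
log_ols = sm.OLS(np.log(cost), X).fit()

# Candidate 2: GLM with a log link (Gamma family shown); the algorithm's
# diagnostics on the residuals guide which candidate is retained.
glm_log = sm.GLM(cost, X,
                 family=sm.families.Gamma(link=sm.families.links.Log())).fit()

print(log_ols.params, glm_log.params)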
Stochastic Local Search for Core Membership Checking in Hedonic Games
NASA Astrophysics Data System (ADS)
Keinänen, Helena
Hedonic games have emerged as an important tool in economics and show promise as a useful formalism to model multi-agent coalition formation in AI as well as group formation in social networks. We consider a coNP-complete problem of core membership checking in hedonic coalition formation games. No previous algorithms to tackle the problem have been presented. In this work, we overcome this by developing two stochastic local search algorithms for core membership checking in hedonic games. We demonstrate the usefulness of the algorithms by showing experimentally that they find solutions efficiently, particularly for large agent societies.
2012-01-01
Background Hemorrhagic events are frequent in patients on treatment with antivitamin-K oral anticoagulants due to their narrow therapeutic margin. Studies performed with acenocoumarol have shown the relationship between demographic, clinical and genotypic variants and the response to these drugs. Once the influence of these genetic and clinical factors on the dose of acenocoumarol needed to maintain a stable international normalized ratio (INR) has been demonstrated, new strategies need to be developed to predict the appropriate doses of this drug. Several pharmacogenetic algorithms have been developed for warfarin, but only three have been developed for acenocoumarol. After the development of a pharmacogenetic algorithm, the obvious next step is to demonstrate its effectiveness and utility by means of a randomized controlled trial. The aim of this study is to evaluate the effectiveness and efficiency of an acenocoumarol dosing algorithm developed by our group which includes demographic, clinical and pharmacogenetic variables (VKORC1, CYP2C9, CYP4F2 and ApoE) in patients with venous thromboembolism (VTE). Methods and design This is a multicenter, single blind, randomized controlled clinical trial. The protocol has been approved by La Paz University Hospital Research Ethics Committee and by the Spanish Drug Agency. Two hundred and forty patients with VTE in which oral anticoagulant therapy is indicated will be included. Randomization (case/control 1:1) will be stratified by center. Acenocoumarol dose in the control group will be scheduled and adjusted following common clinical practice; in the experimental arm dosing will be following an individualized algorithm developed and validated by our group. Patients will be followed for three months. The main endpoints are: 1) Percentage of patients with INR within the therapeutic range on day seven after initiation of oral anticoagulant therapy; 2) Time from the start of oral anticoagulant treatment to achievement of a stable INR within the therapeutic range; 3) Number of INR determinations within the therapeutic range in the first six weeks of treatment. Discussion To date, there are no clinical trials comparing pharmacogenetic acenocoumarol dosing algorithm versus routine clinical practice in VTE. Implementation of this pharmacogenetic algorithm in the clinical practice routine could reduce side effects and improve patient safety. Trial registration Eudra CT. Identifier: 2009-016643-18. PMID:23237631
Morrison, C S; Sekadde-Kigondu, C; Miller, W C; Weiner, D H; Sinei, S K
1999-02-01
Sexually transmitted diseases (STD) are an important contraindication for intrauterine device (IUD) insertion. Nevertheless, laboratory testing for STD is not possible in many settings. The objective of this study is to evaluate the use of risk assessment algorithms to predict STD and subsequent IUD-related complications among IUD candidates. Among 615 IUD users in Kenya, the following algorithms were evaluated: 1) an STD algorithm based on US Agency for International Development (USAID) Technical Working Group guidelines: 2) a Centers for Disease Control and Prevention (CDC) algorithm for management of chlamydia; and 3) a data-derived algorithm modeled from study data. Algorithms were evaluated for prediction of chlamydial and gonococcal infection at 1 month and complications (pelvic inflammatory disease [PID], IUD removals, and IUD expulsions) over 4 months. Women with STD were more likely to develop complications than women without STD (19% vs 6%; risk ratio = 2.9; 95% CI 1.3-6.5). For STD prediction, the USAID algorithm was 75% sensitive and 48% specific, with a positive likelihood ratio (LR+) of 1.4. The CDC algorithm was 44% sensitive and 72% specific, LR+ = 1.6. The data-derived algorithm was 91% sensitive and 56% specific, with LR+ = 2.0 and LR- = 0.2. Category-specific LR for this algorithm identified women with very low (< 1%) and very high (29%) infection probabilities. The data-derived algorithm was also the best predictor of IUD-related complications. These results suggest that use of STD algorithms may improve selection of IUD users. Women at high risk for STD could be counseled to avoid IUD, whereas women at moderate risk should be monitored closely and counseled to use condoms.
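For concreteness, the screening metrics quoted above follow from a 2x2 table as below (the counts are illustrative, chosen to roughly match the data-derived algorithm's figures):

def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and likelihood ratios from 2x2 counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {"sensitivity": sens,
            "specificity": spec,
            "LR+": sens / (1 - spec),
            "LR-": (1 - sens) / spec}

# e.g. an algorithm flagging 91% of infections at 56% specificity:
print(screening_metrics(tp=91, fp=44, fn=9, tn=56))  # LR+ ~ 2.0, LR- ~ 0.2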
Greenberg, Jacob K; Ladner, Travis R; Olsen, Margaret A; Shannon, Chevis N; Liu, Jingxia; Yarbrough, Chester K; Piccirillo, Jay F; Wellons, John C; Smyth, Matthew D; Park, Tae Sung; Limbrick, David D
2015-08-01
The use of administrative billing data may enable large-scale assessments of treatment outcomes for Chiari Malformation type I (CM-1). However, to utilize such data sets, validated International Classification of Diseases, Ninth Revision (ICD-9-CM) code algorithms for identifying CM-1 surgery are needed. To validate 2 ICD-9-CM code algorithms identifying patients undergoing CM-1 decompression surgery. We retrospectively analyzed the validity of 2 ICD-9-CM code algorithms for identifying adult CM-1 decompression surgery performed at 2 academic medical centers between 2001 and 2013. Algorithm 1 included any discharge diagnosis code of 348.4 (CM-1), as well as a procedure code of 01.24 (cranial decompression) or 03.09 (spinal decompression, or laminectomy). Algorithm 2 restricted this group to patients with a primary diagnosis of 348.4. The positive predictive value (PPV) and sensitivity of each algorithm were calculated. Among 340 first-time admissions identified by Algorithm 1, the overall PPV for CM-1 decompression was 65%. Among the 214 admissions identified by Algorithm 2, the overall PPV was 99.5%. The PPV for Algorithm 1 was lower in the Vanderbilt (59%) cohort, males (40%), and patients treated between 2009 and 2013 (57%), whereas the PPV of Algorithm 2 remained high (≥99%) across subgroups. The sensitivity of Algorithms 1 (86%) and 2 (83%) were above 75% in all subgroups. ICD-9-CM code Algorithm 2 has excellent PPV and good sensitivity to identify adult CM-1 decompression surgery. These results lay the foundation for studying CM-1 treatment outcomes by using large administrative databases.
Machida, Haruhiko; Lin, Xiao-Zhu; Fukui, Rika; Shen, Yun; Suzuki, Shigeru; Tanaka, Isao; Ishikawa, Takuya; Tate, Etsuko; Ueno, Eiko
2015-02-01
We retrospectively investigated the effect of the motion correction algorithm (MCA) on image quality and interpretability by heart rate (HR) in coronary CT angiography (CCTA). For 105 patients (6 HR groups) undergoing CCTA, 2 readers independently graded the image quality of the 4 major coronary arteries reconstructed with and without MCA, at diastole for HR ≤64 bpm and at both systole and diastole for HR ≥65 bpm, using a 5-point scale. For each HR group and cardiac phase, we compared per-vessel and per-segment image quality using the Wilcoxon signed-rank test, and compared the percentage of interpretable image quality (scores 3-5) without MCA at diastole with HR ≤64 bpm, as a reference, against that with MCA at diastole with HR ≤69 bpm and at systole with HR 70-79 bpm using the chi-square test. MCA reconstruction provided similar or better image quality and interpretability in all groups, with 96-100% per-vessel (P = 0.008 for the right coronary artery; otherwise, P > 0.05) and 99% per-segment interpretable image quality (P = 0.0002) at diastole with HR ≤69 bpm and at systole with HR 70-79 bpm, compared to the reference (88-100% and 97%, respectively). MCA reconstruction preserved the image quality and interpretability of CCTA with HR ≤79 bpm.
Scoring clustering solutions by their biological relevance.
Gat-Viks, I; Sharan, R; Shamir, R
2003-12-12
A central step in the analysis of gene expression data is the identification of groups of genes that exhibit similar expression patterns. Clustering gene expression data into homogeneous groups was shown to be instrumental in functional annotation, tissue classification, regulatory motif identification, and other applications. Although there is a rich literature on clustering algorithms for gene expression analysis, very few works addressed the systematic comparison and evaluation of clustering results. Typically, different clustering algorithms yield different clustering solutions on the same data, and there is no agreed upon guideline for choosing among them. We developed a novel statistically based method for assessing a clustering solution according to prior biological knowledge. Our method can be used to compare different clustering solutions or to optimize the parameters of a clustering algorithm. The method is based on projecting vectors of biological attributes of the clustered elements onto the real line, such that the ratio of between-groups and within-group variance estimators is maximized. The projected data are then scored using a non-parametric analysis of variance test, and the score's confidence is evaluated. We validate our approach using simulated data and show that our scoring method outperforms several extant methods, including the separation to homogeneity ratio and the silhouette measure. We apply our method to evaluate results of several clustering methods on yeast cell-cycle gene expression data. The software is available from the authors upon request.
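A hedged sketch of the scoring idea: project attribute vectors onto the direction that maximizes the ratio of between-group to within-group variance (a Fisher-style criterion solved as a generalized eigenproblem), then score the projection with a non-parametric test. The Kruskal-Wallis test stands in for the paper's exact procedure, and the data are synthetic:

import numpy as np
from scipy.linalg import eigh
from scipy.stats import kruskal

def score_clustering(attributes, labels):
    """attributes: (n_elements, n_attributes); labels: cluster per element."""
    mu = attributes.mean(axis=0)
    Sb = np.zeros((attributes.shape[1],) * 2)   # between-group scatter
    Sw = np.zeros_like(Sb)                      # within-group scatter
    for g in np.unique(labels):
        Xg = attributes[labels == g]
        d = (Xg.mean(axis=0) - mu)[:, None]
        Sb += len(Xg) * (d @ d.T)
        Sw += np.cov(Xg, rowvar=False) * (len(Xg) - 1)
    # Direction maximizing between/within variance (small ridge for stability).
    _, vecs = eigh(Sb, Sw + 1e-9 * np.eye(len(Sw)))
    projected = attributes @ vecs[:, -1]
    groups = [projected[labels == g] for g in np.unique(labels)]
    return kruskal(*groups)  # low p-value -> biologically coherent clusters

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(1, 1, (30, 4))])
y = np.repeat([0, 1], 30)
print(score_clustering(X, y))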
Non-Algorithmic Issues in Automated Computational Mechanics
1991-04-30
Tworzydlo, Senior Research Engineer and Manager of the Advanced Projects Group. Professor J. T. Oden, President and Senior Scientist of COMCO, was project...practical applications of the systems reported so far is due to the extremely arduous and complex development and management of a realistic knowledge base...software, designed to effectively implement deep, algorithmic knowledge, and "intelligent" software, designed to manage shallow, heuristic
Systolic Algorithms for Imaging from Space
1989-07-31
on a keystone or trapezoidal grid [Arikan & Munson, 1987]. The image reconstruction algorithm then simply applies an inverse 2-D FFT to the stored...rithm composed of groups of point targets, and we determined the effects of windowing and incorporation of a Jacobian weighting factor [Arikan...the impulse response of the desired filter [Arikan & Munson, 1989]. The necessary filtering is then accomplished through the physical mechanism of the
CNES-NASA Studies of the Mars Sample Return Orbiter Aerocapture Phase
NASA Technical Reports Server (NTRS)
Fraysse, H.; Powell, R.; Rousseau, S.; Striepe, S.
2000-01-01
A Mars Sample Return (MSR) mission has been proposed as a joint CNES (Centre National d'Etudes Spatiales) and NASA effort in the ongoing Mars Exploration Program. The MSR mission is designed to return the first samples of Martian soil to Earth. The primary elements of the mission are a lander, rover, ascent vehicle, orbiter, and an Earth entry vehicle. The Orbiter has been allocated only 2700 kg on the launch phase to perform its part of the mission. This mass restriction has led to the decision to use an aerocapture maneuver at Mars for the orbiter. Aerocapture replaces the initial propulsive capture maneuver with a single atmospheric pass. This atmospheric pass will result in the proper apoapsis, but a periapsis raise maneuver is required at the first apoapsis. The use of aerocapture reduces the total mass requirement by approx. 45% for the same payload. This mission will be the first to use the aerocapture technique. Because the spacecraft is flying through the atmosphere, guidance algorithms must be developed that will autonomously provide the proper commands to reach the desired orbit while not violating any of the design parameters (e.g. maximum deceleration, maximum heating rate, etc.). The guidance algorithm must be robust enough to account for uncertainties in delivery states, atmospheric conditions, mass properties, control system performance, and aerodynamics. To study this very critical phase of the mission, a joint CNES-NASA technical working group has been formed. This group is composed of atmospheric trajectory specialists from CNES, NASA Langley Research Center and NASA Johnson Space Center. This working group is tasked with developing and testing guidance algorithms, as well as cross-validating CNES and NASA flight simulators for the Mars atmospheric entry phase of this mission. The final result will be a recommendation to CNES on the algorithm to use, and an evaluation of the flight risks associated with the algorithm. This paper will describe the aerocapture phase of the MSR mission, the main principles of the guidance algorithms that are under development, the atmospheric entry simulators developed for the evaluations, the process for the evaluations, and preliminary results from the evaluations.
SWIM: A Semi-Analytical Ocean Color Inversion Algorithm for Optically Shallow Waters
NASA Technical Reports Server (NTRS)
McKinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C. S.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.
2014-01-01
Ocean color remote sensing provides synoptic-scale, near-daily observations of marine inherent optical properties (IOPs). Whilst contemporary ocean color algorithms are known to perform well in deep oceanic waters, they have difficulty operating in optically clear, shallow marine environments where light reflected from the seafloor contributes to the water-leaving radiance. The effect of benthic reflectance in optically shallow waters is known to adversely affect algorithms developed for optically deep waters [1, 2]. Whilst adapted versions of optically deep ocean color algorithms have been applied to optically shallow regions with reasonable success [3], there is presently no approach that directly corrects for bottom reflectance using existing knowledge of bathymetry and benthic albedo. To address the issue of optically shallow waters, we have developed a semi-analytical ocean color inversion algorithm: the Shallow Water Inversion Model (SWIM). SWIM uses existing bathymetry and a derived benthic albedo map to correct for bottom reflectance using the semi-analytical model of Lee et al [4]. The algorithm was incorporated into the NASA Ocean Biology Processing Group's L2GEN program and tested in the optically shallow waters of the Great Barrier Reef, Australia. In lieu of readily available in situ matchup data, we present a comparison between SWIM and two contemporary ocean color algorithms, the Generalized Inherent Optical Property Algorithm (GIOP) and the Quasi-Analytical Algorithm (QAA).
Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang
2014-06-01
We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distribution to deal with the co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect the peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models.
A MULTICORE BASED PARALLEL IMAGE REGISTRATION METHOD
Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.
2012-01-01
Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs. The algorithm is shown to be effective and robust under conditions of large deformations. In landmark-based registration, the most important step is establishing the correspondence among the selected landmark points. This usually requires an extensive search which is often computationally expensive. We introduce a nonregular data partition algorithm that uses K-means clustering to group the landmarks based on the number of available processing cores. This step optimizes memory usage and data transfer. We have tested our method using the IBM Cell Broadband Engine (Cell/B.E.) platform. PMID:19964921
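A minimal sketch of the partitioning step under stated assumptions: landmarks are grouped by K-means into as many spatially compact chunks as there are cores, so each core receives nearby points. The Cell/B.E.-specific data transfer is not shown:

import numpy as np
from sklearn.cluster import KMeans

def partition_landmarks(landmarks, n_cores):
    """Group 2-D landmark points into n_cores spatially compact chunks."""
    assignment = KMeans(n_clusters=n_cores, n_init=10,
                        random_state=0).fit_predict(landmarks)
    return [landmarks[assignment == c] for c in range(n_cores)]

points = np.random.default_rng(0).uniform(0, 512, size=(1000, 2))
chunks = partition_landmarks(points, n_cores=8)
print([len(c) for c in chunks])   # roughly balanced, spatially local groups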
NASA Astrophysics Data System (ADS)
Niazmardi, S.; Safari, A.; Homayouni, S.
2017-09-01
Crop mapping through classification of Satellite Image Time-Series (SITS) data can provide very valuable information for several agricultural applications, such as crop monitoring, yield estimation, and crop inventory. However, SITS data classification is not straightforward, because different images in a SITS data set carry different levels of information relevant to the classification problem. Moreover, SITS data are four-dimensional and cannot be classified using conventional classification algorithms. To address these issues, in this paper we present a classification strategy based on Multiple Kernel Learning (MKL) algorithms for SITS data classification. In this strategy, different kernels are first constructed from the different images of the SITS data and are then combined into a composite kernel using an MKL algorithm. The composite kernel, once constructed, can be used for classification of the data using kernel-based classification algorithms. We compared the computational time and classification performance of the proposed strategy using different MKL algorithms for the purpose of crop mapping. The MKL algorithms considered are the MKL-Sum, SimpleMKL, LPMKL and Group-Lasso MKL algorithms. Experimental tests of the proposed strategy on two SITS data sets, acquired by SPOT satellite sensors, showed that the strategy provides better performance than a standard classification algorithm. The results also showed that the optimization method of the MKL algorithm used affects both the computational time and the classification accuracy of the strategy.
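A hedged sketch of the strategy's final stage: one RBF kernel per image of the series, combined into a composite kernel and fed to a precomputed-kernel SVM. Uniform weights stand in for the weights the cited MKL algorithms learn from data, and the data below are synthetic:

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def composite_kernel(series, weights, gamma=0.1):
    """series: (n_images, n_pixels, n_bands) SITS stack. Build one RBF
    kernel per image and combine them as a weighted sum."""
    return sum(w * rbf_kernel(img, img, gamma=gamma)
               for w, img in zip(weights, series))

rng = np.random.default_rng(0)
n_images, n_pixels, n_bands = 5, 200, 4
series = rng.normal(size=(n_images, n_pixels, n_bands))
labels = rng.integers(0, 3, n_pixels)     # toy crop classes

K = composite_kernel(series, weights=np.ones(n_images) / n_images)
clf = SVC(kernel="precomputed").fit(K, labels)
print(clf.score(K, labels))   # training accuracy on the toy data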
Fang, Chen; Li, Chunfei; Cabrerizo, Mercedes; Barreto, Armando; Andrian, Jean; Rishe, Naphtali; Loewenstein, David; Duara, Ranjan; Adjouadi, Malek
2018-04-12
Over the past few years, several approaches have been proposed to assist in the early diagnosis of Alzheimer's disease (AD) and its prodromal stage of mild cognitive impairment (MCI). Using multimodal biomarkers for this high-dimensional classification problem, the widely used algorithms include Support Vector Machines (SVM), Sparse Representation-based classification (SRC), Deep Belief Networks (DBN) and Random Forest (RF). These widely used algorithms continue to yield unsatisfactory performance for delineating the MCI participants from the cognitively normal control (CN) group. A novel Gaussian discriminant analysis-based algorithm is thus introduced to achieve a more effective and accurate classification performance than the aforementioned state-of-the-art algorithms. This study makes use of magnetic resonance imaging (MRI) data uniquely as input to two separate high-dimensional decision spaces that reflect the structural measures of the two brain hemispheres. The data used include 190 CN, 305 MCI and 133 AD subjects as part of the AD Big Data DREAM Challenge #1. Using 80% data for a 10-fold cross-validation, the proposed algorithm achieved an average F1 score of 95.89% and an accuracy of 96.54% for discriminating AD from CN; and more importantly, an average F1 score of 92.08% and an accuracy of 90.26% for discriminating MCI from CN. Then, a true test was implemented on the remaining 20% held-out test data. For discriminating MCI from CN, an accuracy of 80.61%, a sensitivity of 81.97% and a specificity of 78.38% were obtained. These results show significant improvement over existing algorithms for discriminating the subtle differences between MCI participants and the CN group.
Crystal Symmetry Algorithms in a High-Throughput Framework for Materials
NASA Astrophysics Data System (ADS)
Taylor, Richard
The high-throughput framework AFLOW, developed and used successfully over the last decade, is improved to include fully integrated software for crystallographic symmetry characterization. The standards used in the symmetry algorithms conform to the conventions and prescriptions given in the International Tables for Crystallography (ITC). A standard cell choice with standard origin is selected, and the space group, point group, Bravais lattice, crystal system, lattice system, and representative symmetry operations are determined. Following the conventions of the ITC, the Wyckoff sites are also determined and their labels and site symmetry are provided. The symmetry code makes no assumptions about the input cell orientation, origin, or reduction, and has been integrated into the AFLOW high-throughput framework for materials discovery by adding to the existing code base and making use of existing classes and functions. The software is written in object-oriented C++ for flexibility and reuse. A performance analysis and an examination of the algorithms' scaling with cell size and symmetry are also reported.
Text grouping in patent analysis using adaptive K-means clustering algorithm
NASA Astrophysics Data System (ADS)
Shanie, Tiara; Suprijadi, Jadi; Zulhanif
2017-03-01
Patents are a form of intellectual property. Patent analysis is a prerequisite for understanding the development of technology in each country and worldwide. This study uses patent documents about green tea retrieved from the Espacenet server. Patent documents related to tea technology are numerous, which makes information retrieval (IR) difficult for users. It is therefore necessary to categorize the documents into groups according to the related terms they contain. This study applies statistical text-mining methods to green-tea patent titles in two phases: a data preparation stage, which uses text-mining methods, and a data analysis stage, which is carried out statistically. The statistical analysis uses a cluster analysis algorithm, the adaptive K-means clustering algorithm. The results show that, based on the maximum silhouette value, the analysis produced 87 clusters associated with fifteen terms that can be utilized for information retrieval.
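A hedged sketch of the model-selection step described above: choose the number of clusters by maximizing the silhouette value. The random matrix stands in for the TF-IDF vectors of patent titles, which the abstract does not specify in detail.

```python
# Pick k by maximizing the average silhouette over a candidate range.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = np.random.default_rng(1).normal(size=(300, 20))  # stand-in for title TF-IDF vectors
best_k, best_s = None, -1.0
for k in range(2, 15):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    s = silhouette_score(X, labels)
    if s > best_s:
        best_k, best_s = k, s
print(best_k, round(best_s, 3))
```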
Generalized SMO algorithm for SVM-based multitask learning.
Cai, Feng; Cherkassky, Vladimir
2012-06-01
Exploiting additional information to improve traditional inductive learning is an active research area in machine learning. In many supervised-learning applications, training data can be naturally separated into several groups, and incorporating this group information into learning may improve generalization. Recently, Vapnik proposed a general approach to formalizing such problems, known as "learning with structured data", and its support vector machine (SVM) based optimization formulation called SVM+. Liang and Cherkassky showed the connection between SVM+ and multitask learning (MTL) approaches in machine learning, and proposed an SVM-based formulation for MTL called SVM+MTL for classification. Training the SVM+MTL classifier requires the solution of a large quadratic programming optimization problem which scales as O(n³) with sample size n, so there is a need to develop computationally efficient algorithms for implementing SVM+MTL. This brief generalizes Platt's sequential minimal optimization (SMO) algorithm to the SVM+MTL setting. Empirical results show that, for typical SVM+MTL problems, the proposed generalized SMO achieves over 100 times speed-up in comparison with general-purpose optimization routines.
Sinn, Chi-Ling Joanna; Jones, Aaron; McMullan, Janet Legge; Ackerman, Nancy; Curtin-Telegdi, Nancy; Eckel, Leslie; Hirdes, John P
2017-11-25
Personal support services enable many individuals to stay in their homes, but there are no standard ways to classify need for functional support in home and community care settings. The goal of this project was to develop an evidence-based clinical tool to inform service planning while allowing for flexibility in care coordinator judgment in response to patient and family circumstances. The sample included 128,169 Ontario home care patients assessed in 2013 and 25,800 Ontario community support clients assessed between 2014 and 2016. Independent variables were drawn from the Resident Assessment Instrument-Home Care and the interRAI Community Health Assessment, which are standardised, comprehensive, and fully compatible clinical assessments. Clinical expertise and regression analyses identified candidate variables that were entered into decision tree models. The primary dependent variable was the weekly hours of personal support, calculated from the record of billed services. The Personal Support Algorithm classified need for personal support into six groups, with a 32-fold difference in average billed hours of personal support services between the highest and lowest groups. The algorithm explained 30.8% of the variability in billed personal support services. Care coordinators and managers reported that the guidelines based on the algorithm classification were consistent with their clinical judgment and current practice. The Personal Support Algorithm provides a structured yet flexible decision-support framework that may facilitate a more transparent and equitable approach to the allocation of personal support services.
Mearelli, Filippo; Fiotti, Nicola; Giansante, Carlo; Casarsa, Chiara; Orso, Daniele; De Helmersen, Marco; Altamura, Nicola; Ruscio, Maurizio; Castello, Luigi Mario; Colonetti, Efrem; Marino, Rossella; Barbati, Giulia; Bregnocchi, Andrea; Ronco, Claudio; Lupia, Enrico; Montrucchio, Giuseppe; Muiesan, Maria Lorenza; Di Somma, Salvatore; Avanzi, Gian Carlo; Biolo, Gianni
2018-05-07
To derive and validate a predictive algorithm integrating a nomogram-based prediction of the pretest probability of infection with a panel of serum biomarkers, which could robustly differentiate sepsis/septic shock from noninfectious systemic inflammatory response syndrome. Multicenter prospective study at emergency department admission in five university hospitals. Nine hundred forty-seven adults in the inception cohort and 185 adults in the validation cohort. None. A nomogram including age, Sequential Organ Failure Assessment score, recent antimicrobial therapy, hyperthermia, leukocytosis, and high C-reactive protein values was built on data from 716 infected patients and 120 patients with noninfectious systemic inflammatory response syndrome to predict the pretest probability of infection. The best combination of procalcitonin, soluble phospholipase A2 group IIA, presepsin, soluble interleukin-2 receptor α, and soluble triggering receptor expressed on myeloid cells-1 was then applied to categorize patients as "likely" or "unlikely" to be infected. The predictive algorithm required only procalcitonin, backed up with soluble phospholipase A2 group IIA in 29% of the patients, to rule out sepsis/septic shock with a negative predictive value of 93%. In a validation cohort of 158 patients, the predictive algorithm reached a negative predictive value of 100% while requiring biomarker measurements in only 18% of the population. We have developed and validated a high-performing, reproducible, and parsimonious algorithm to assist emergency department physicians in distinguishing sepsis/septic shock from noninfectious systemic inflammatory response syndrome.
Algorithm for covert convoy of a moving target using a group of autonomous robots
NASA Astrophysics Data System (ADS)
Polyakov, Igor; Shvets, Evgeny
2018-04-01
An important application of autonomous robot systems is to substitute for human personnel in dangerous environments, reducing their involvement and the consequent risk to human lives. In this paper we solve the problem of covertly convoying a civilian through a dangerous area with a group of unmanned ground vehicles (UGVs) using social potential fields. The novelty of our work lies in the use of UGVs, as compared with the unmanned aerial vehicles typically employed for this task in the approaches described in the literature. Additionally, we assume that the group of UGVs must simultaneously solve the problem of patrolling the area to detect intruders. We develop a simulation system to test our algorithms, provide numerical results, and give recommendations on how to tune the potentials governing the robots' behaviour to prioritize between the patrolling and convoying tasks.
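A toy illustration of the social-potential-field mechanism the paper builds on: each robot feels an attraction toward the convoyed target and an inverse-power repulsion from teammates. The force law, constants, and step size below are illustrative assumptions, not the authors' tuned potentials.

```python
import numpy as np

def potential_force(p, others, target, a=1.0, b=2.0, c=0.5):
    """One UGV's social-potential force: attraction to the target plus
    pairwise repulsion from nearby teammates (keeps the formation spread)."""
    f = c * (target - p)                    # attraction to the convoyed target
    for q in others:
        d = p - q
        r = np.linalg.norm(d) + 1e-9
        f += (a / r**b) * (d / r)           # inverse-power repulsion from teammate
    return f

p = np.array([0.0, 0.0])
others = [np.array([0.5, 0.0]), np.array([0.0, 1.0])]
target = np.array([5.0, 5.0])
p = p + 0.1 * potential_force(p, others, target)   # one integration step
print(p)
```

Tuning the relative magnitudes of the attraction and repulsion terms is exactly the kind of prioritization between convoying and patrolling that the paper's recommendations address.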
Seizures in the elderly: development and validation of a diagnostic algorithm.
Dupont, Sophie; Verny, Marc; Harston, Sandrine; Cartz-Piver, Leslie; Schück, Stéphane; Martin, Jennifer; Puisieux, François; Alecu, Cosmin; Vespignani, Hervé; Marchal, Cécile; Derambure, Philippe
2010-05-01
Seizures are frequent in the elderly, but their diagnosis can be challenging. The objective of this work was to develop and validate an expert-based algorithm for the diagnosis of seizures in elderly people. A multidisciplinary group of neurologists and geriatricians developed a diagnostic algorithm using a combination of selected clinical, electroencephalographic, and radiological criteria. The algorithm was validated by multicentre retrospective analysis of data from patients referred for specific symptoms and classified by the experts as epileptic or not. The algorithm was applied to all the patients, and the diagnosis provided by the algorithm was compared with the clinical diagnosis of the experts. Twenty-nine clinical, electroencephalographic, and radiological criteria were selected for the algorithm. According to the combination of criteria, seizures were classified into four levels of diagnosis: certain, highly probable, possible, or improbable. To validate the algorithm, the medical records of 269 elderly patients were analyzed (138 with epileptic seizures, 131 with non-epileptic manifestations). Patients were mainly referred for a transient focal deficit (40%), confusion (38%), or unconsciousness (27%). The algorithm best classified certain and probable seizures versus possible and improbable seizures, with 86.2% sensitivity and 67.2% specificity. Using logistic regression, two simplified models were developed, the first with 13 criteria (Se 85.5%, Sp 90.1%) and the second with only 7 criteria (Se 84.8%, Sp 88.6%). In conclusion, the present study validated the use of a revised diagnostic algorithm to help diagnose epileptic seizures in the elderly. A prospective study is planned to further validate this algorithm.
Hatt, Mathieu; Lee, John A.; Schmidtlein, Charles R.; Naqa, Issam El; Caldwell, Curtis; De Bernardi, Elisabetta; Lu, Wei; Das, Shiva; Geets, Xavier; Gregoire, Vincent; Jeraj, Robert; MacManus, Michael P.; Mawlawi, Osama R.; Nestle, Ursula; Pugachev, Andrei B.; Schöder, Heiko; Shepherd, Tony; Spezi, Emiliano; Visvikis, Dimitris; Zaidi, Habib; Kirov, Assen S.
2017-01-01
Purpose The purpose of this educational report is to provide an overview of the present state-of-the-art PET auto-segmentation (PET-AS) algorithms and their respective validation, with an emphasis on providing the user with help in understanding the challenges and pitfalls associated with selecting and implementing a PET-AS algorithm for a particular application. Approach A brief description of the different types of PET-AS algorithms is provided using a classification based on method complexity and type. The advantages and the limitations of the current PET-AS algorithms are highlighted based on current publications and existing comparison studies. A review of the available image datasets and contour evaluation metrics in terms of their applicability for establishing a standardized evaluation of PET-AS algorithms is provided. The performance requirements for the algorithms and their dependence on the application, the radiotracer used and the evaluation criteria are described and discussed. Finally, a procedure for algorithm acceptance and implementation, as well as the complementary role of manual and auto-segmentation, is addressed. Findings A large number of PET-AS algorithms have been developed within the last 20 years. Many of the proposed algorithms are based on either fixed or adaptively selected thresholds. More recently, numerous papers have proposed the use of more advanced image analysis paradigms to perform semi-automated delineation of the PET images. However, the level of algorithm validation is variable and for most published algorithms is either insufficient or inconsistent, which prevents recommending a single algorithm. This is compounded by the fact that realistic image configurations with low signal-to-noise ratios (SNR) and heterogeneous tracer distributions have rarely been used. Large variations in the evaluation methods used in the literature point to the need for a standardized evaluation protocol. Conclusions Available comparison studies suggest that PET-AS algorithms relying on advanced image analysis paradigms provide generally more accurate segmentation than approaches based on PET activity thresholds, particularly for realistic configurations. However, this may not be the case for simple-shape lesions in situations with a narrower range of parameters, where simpler methods may also perform well. Recent algorithms which employ some type of consensus or automatic selection between several PET-AS methods have the potential to overcome the limitations of the individual methods when appropriately trained. In either case, accuracy evaluation is required for each different PET scanner and each scanning and image reconstruction protocol. For the simpler, less robust approaches, adaptation to scanning conditions, tumor type, and tumor location by optimization of parameters is necessary. The results from the method evaluation stage can be used to estimate the contouring uncertainty. All PET-AS contours should be critically verified by a physician. A standard test, i.e., a benchmark dedicated to evaluating both existing and future PET-AS algorithms, needs to be designed to aid clinicians in evaluating and selecting PET-AS algorithms and to establish performance limits for their acceptance for clinical use. The initial steps toward designing and building such a standard have been undertaken by the task group members.
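For concreteness, here is a sketch of the simplest PET-AS family mentioned above, fixed-fraction thresholding with background correction. The 41% default is a commonly quoted value in the thresholding literature, not a recommendation of this report, and the synthetic image is a placeholder.

```python
import numpy as np

def threshold_contour(img, frac=0.41, bg=0.0):
    """Segment at a fixed fraction of the maximum uptake above background,
    the baseline the more advanced PET-AS paradigms are compared against."""
    t = bg + frac * (img.max() - bg)
    return img >= t

img = np.random.default_rng(2).gamma(2.0, 1.0, size=(32, 32))  # toy uptake map
mask = threshold_contour(img)
print(mask.sum(), "voxels above threshold")
```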
Prince, Martin J; de Rodriguez, Juan Llibre; Noriega, L; Lopez, A; Acosta, Daisy; Albanese, Emiliano; Arizaga, Raul; Copeland, John RM; Dewey, Michael; Ferri, Cleusa P; Guerra, Mariella; Huang, Yueqin; Jacob, KS; Krishnamoorthy, ES; McKeigue, Paul; Sousa, Renata; Stewart, Robert J; Salas, Aquiles; Sosa, Ana Luisa; Uwakwa, Richard
2008-01-01
Background The criterion for dementia implicit in DSM-IV is widely used in research but not fully operationalised. The 10/66 Dementia Research Group sought to do this using assessments from their one-phase dementia diagnostic research interview, and to validate the resulting algorithm in a population-based study in Cuba. Methods The criterion was operationalised as a computerised algorithm, applying clinical principles, based upon the 10/66 cognitive tests, clinical interview and informant reports; the Community Screening Instrument for Dementia, the CERAD 10-word list learning and animal naming tests, the Geriatric Mental State, and the History and Aetiology Schedule – Dementia Diagnosis and Subtype. This was validated in Cuba against a local clinician DSM-IV diagnosis and the 10/66 dementia diagnosis (originally calibrated probabilistically against clinician DSM-IV diagnoses in the 10/66 pilot study). Results The DSM-IV sub-criteria were plausibly distributed among clinically diagnosed dementia cases and controls. The clinician diagnoses agreed better with the 10/66 dementia diagnosis than with the more conservative computerised DSM-IV algorithm. The DSM-IV algorithm was particularly likely to miss less severe dementia cases. Those with a 10/66 dementia diagnosis who did not meet the DSM-IV criterion were less cognitively and functionally impaired than the DSM-IV confirmed cases, but still grossly impaired compared with those free of dementia. Conclusion The DSM-IV criterion, strictly applied, defines a narrow category of unambiguous dementia characterized by marked impairment. It may be specific but incompletely sensitive to clinically relevant cases. The 10/66 dementia diagnosis defines a broader category that may be more sensitive, identifying genuine cases beyond those defined by our DSM-IV algorithm, with relevance to the estimation of the population burden of this disorder.
Text Extraction from Scene Images by Character Appearance and Structure Modeling
Yi, Chucai; Tian, Yingli
2012-01-01
In this paper, we propose a novel algorithm to detect text information from natural scene images. Scene text classification and detection are still open research topics. Our proposed algorithm is able to model both character appearance and structure to generate representative and discriminative text descriptors. The contributions of this paper include three aspects: 1) a new character appearance model by a structure correlation algorithm which extracts discriminative appearance features from detected interest points of character samples; 2) a new text descriptor based on structons and correlatons, which model character structure by structure differences among character samples and structure component co-occurrence; and 3) a new text region localization method by combining color decomposition, character contour refinement, and string line alignment to localize character candidates and refine detected text regions. We perform three groups of experiments to evaluate the effectiveness of our proposed algorithm, including text classification, text detection, and character identification. The evaluation results on benchmark datasets demonstrate that our algorithm achieves the state-of-the-art performance on scene text classification and detection, and significantly outperforms the existing algorithms for character identification.
A Parallel Point Matching Algorithm for Landmark Based Image Registration Using Multicore Platform
Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.
2013-01-01
Point matching is crucial for many computer vision applications. Establishing the correspondence between a large number of data points is a computationally intensive process. Some point matching related applications, such as medical image registration, require real-time or near real-time performance when applied to critical clinical applications like image-assisted surgery. In this paper, we report a new multicore platform based parallel algorithm for fast point matching in the context of landmark-based medical image registration. We introduce a non-regular data partition algorithm which utilizes K-means clustering to group the landmarks based on the number of available processing cores, optimizing memory usage and data transfer. We tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform, and the results demonstrated a significant speed-up over the sequential implementation. The proposed data partition and parallelization algorithm, though tested only on one multicore platform, is generic by design, so the parallel algorithm can be extended to other computing platforms as well as to other point matching related applications.
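A sketch of the non-regular partition idea under stated assumptions: landmarks are clustered with K-means into as many groups as worker processes, and each group is matched in parallel. The nearest-neighbour matcher is a stand-in for the paper's landmark matching kernel, and process-based parallelism stands in for the Cell/B.E. cores.

```python
import numpy as np
from multiprocessing import Pool
from sklearn.cluster import KMeans
from scipy.spatial import cKDTree

def match_group(args):
    pts, ref = args
    return cKDTree(ref).query(pts)[1]   # indices of nearest reference points

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    landmarks = rng.uniform(size=(10000, 3))
    reference = rng.uniform(size=(10000, 3))
    n_cores = 4
    # non-regular partition: cluster landmarks into one group per core
    labels = KMeans(n_clusters=n_cores, n_init=4).fit_predict(landmarks)
    with Pool(n_cores) as pool:
        parts = pool.map(match_group,
                         [(landmarks[labels == g], reference) for g in range(n_cores)])
    print(sum(len(p) for p in parts), "landmarks matched")
```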
Adaptive algorithm of magnetic heading detection
NASA Astrophysics Data System (ADS)
Liu, Gong-Xu; Shi, Ling-Feng
2017-11-01
Magnetic data obtained from a magnetic sensor usually fluctuate within a certain range, which makes it difficult to estimate the magnetic heading accurately. In fact, magnetic heading information is usually submerged in noise because of electromagnetic interference of all kinds and the diversity of the pedestrian's motion states. To solve this problem, a new adaptive algorithm that exploits the typically right-angled corridors of a building is put forward to process the heading information. First, a 3D indoor localization platform is set up based on the MPU9250. Then, several groups of data are measured under varying experimental environments and pedestrian paces. The raw data from the attached inertial measurement unit are calibrated, arranged into a time-stamped array, and written to a data file. The data file is then imported into MATLAB for processing and analysis with the proposed adaptive algorithm. Finally, the algorithm is verified by comparison with an existing algorithm. The experimental results show that the algorithm has strong robustness and good fault tolerance, and can detect the heading information accurately and in real time.
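To make the corridor idea concrete, here is a minimal sketch, assuming (as an illustration, not the paper's actual rule) that the raw magnetometer heading is snapped to the nearest multiple of 90° whenever it falls within a tolerance band, reflecting the right-angled corridor constraint. The snap and tolerance values are invented for the example.

```python
import numpy as np

def snapped_heading(mx, my, snap_deg=90.0, tol_deg=15.0):
    """Raw heading from magnetometer x/y, snapped to the nearest corridor
    direction when close enough; otherwise the raw value is kept."""
    heading = np.degrees(np.arctan2(-my, mx)) % 360.0
    nearest = round(heading / snap_deg) * snap_deg % 360.0
    if abs((heading - nearest + 180) % 360 - 180) < tol_deg:
        return nearest
    return heading

print(snapped_heading(0.98, -0.08))   # noisy reading near 0 deg -> snapped to 0.0
```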
An imperialist competitive algorithm for virtual machine placement in cloud computing
NASA Astrophysics Data System (ADS)
Jamali, Shahram; Malektaji, Sepideh; Analoui, Morteza
2017-05-01
Cloud computing, the recently emerged revolution in the IT industry, is empowered by virtualisation technology. In this paradigm, the user's applications run on virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement, and it plays an important role in the resource utilisation and power efficiency of a cloud computing environment. In this paper, we propose an imperialist competitive-based algorithm for the virtual machine placement problem, called ICA-VMPLC. ICA is chosen as the base optimisation algorithm because of its ease of neighbourhood movement, good convergence rate, and suitable terminology. The proposed algorithm investigates the search space in a unique manner to efficiently obtain an optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution is compared with several existing methods, such as grouping genetic and ant colony-based algorithms, as well as a bin-packing heuristic. The simulation results show that the proposed method is superior to the other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency, and memory usage efficiency.
Concave 1-norm group selection
Jiang, Dingfeng; Huang, Jian
2015-01-01
Grouping structures arise naturally in many high-dimensional problems. Incorporating such information can improve model fitting and variable selection. Existing group selection methods, such as the group Lasso, require correctly specified group membership. In practice, however, it can be difficult to specify the group membership of all variables correctly. It is therefore important to develop group selection methods that are robust against group mis-specification. It is also desirable in many applications to select groups as well as individual variables. We propose a class of concave 1-norm group penalties that is robust to grouping structure and can perform bi-level selection. A coordinate descent algorithm is developed to compute solutions of the proposed group selection method, and its theoretical convergence is proved under certain regularity conditions. Comparison with other methods suggests the proposed method is the most robust approach under membership mis-specification. Simulation studies and a real data application indicate that the concave 1-norm group selection approach achieves better control of false discovery rates. An R package grppenalty implementing the proposed method is available at CRAN.
Lee, Byung Moo
2017-12-29
Massive multiple-input multiple-output (MIMO) systems can support numerous internet of things (IoT) devices using their large number of transmitter (TX) antennas. However, one of the main obstacles to the realization of massive MIMO is the overhead of the reference signal (RS), because the number of RSs is proportional to the number of TX antennas and/or the associated user equipments (UEs). It has already been reported that antenna group-based RS overhead reduction can be very effective for the efficient operation of massive MIMO, but how to decide the number of antennas needed in each group remains an open question. In this paper, we propose a simplified scheme for determining the number of antennas needed in each group for RS overhead reduced massive MIMO supporting many IoT devices; supporting many distributed IoT devices in this way provides a framework for configuring wireless sensor networks. Our contribution is twofold. First, we derive simple closed-form approximations of the achievable spectral efficiency (SE) under zero-forcing (ZF) and matched filtering (MF) precoding for RS overhead reduced massive MIMO systems with channel estimation error; the closed-form approximations include a channel error factor that can be adjusted according to the channel estimation method. Second, based on the closed-form approximations, we present an efficient algorithm for determining the number of antennas needed in each group for the group-based RS overhead reduction scheme. The algorithm depends on the exact inverse functions of the derived closed-form approximations of the SE. Theoretical analysis and simulation verify that the proposed algorithm works well and can thus serve as an important tool for massive MIMO systems supporting many distributed IoT devices.
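To illustrate the invert-the-closed-form idea, here is a sketch using a textbook-style ZF spectral-efficiency approximation, not the paper's exact expression; the multiplicative factor `err` stands in for the paper's channel error term, and all numbers are illustrative.

```python
import numpy as np

def se_zf(M, K, snr, err=1.0):
    """Generic closed-form ZF SE approximation for K users and M antennas,
    with a channel-quality factor err in (0, 1]."""
    return K * np.log2(1.0 + err * snr * (M - K) / K)

def antennas_needed(se_target, K, snr, err=1.0):
    """Exact inverse of se_zf, mirroring the paper's approach of inverting
    the closed form to size each antenna group."""
    return int(np.ceil(K + K * (2.0 ** (se_target / K) - 1.0) / (err * snr)))

print(antennas_needed(se_target=40.0, K=10, snr=1.0))  # antennas for one group
```

The usefulness of the inverse form is that the group size can be read off directly from a target SE instead of being searched for numerically.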
Counting in Lattices: Combinatorial Problems from Statistical Mechanics.
NASA Astrophysics Data System (ADS)
Randall, Dana Jill
In this thesis we consider two classical combinatorial problems arising in statistical mechanics: counting matchings and self-avoiding walks in lattice graphs. The first problem arises in the study of the thermodynamical properties of monomers and dimers (diatomic molecules) in crystals. Fisher, Kasteleyn and Temperley discovered an elegant technique to exactly count the number of perfect matchings in two-dimensional lattices, but it is not applicable for matchings of arbitrary size, or in higher-dimensional lattices. We present the first efficient approximation algorithm for computing the number of matchings of any size in any periodic lattice in arbitrary dimension. The algorithm is based on Monte Carlo simulation of a suitable Markov chain and has rigorously derived performance guarantees that do not rely on any assumptions. In addition, we show that these results generalize to counting matchings in any graph which is the Cayley graph of a finite group. The second problem is counting self-avoiding walks in lattices. This problem arises in the study of the thermodynamics of long polymer chains in dilute solution. While there are a number of Monte Carlo algorithms used to count self-avoiding walks in practice, these are heuristic and their correctness relies on unproven conjectures. In contrast, we present an efficient algorithm which relies on a single, widely-believed conjecture that is simpler than preceding assumptions and, more importantly, is one which the algorithm itself can test. Thus our algorithm is reliable, in the sense that it either outputs answers that are guaranteed, with high probability, to be correct, or finds a counterexample to the conjecture. In either case we know we can trust our results and the algorithm is guaranteed to run in polynomial time. This is the first algorithm for counting self-avoiding walks in which the error bounds are rigorously controlled. This work was supported in part by an AT&T graduate fellowship, a University of California dissertation year fellowship and Esprit working group "RAND". Part of this work was done while visiting ICSI and the University of Edinburgh.
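A toy version of the Markov-chain mechanism behind the matching counter: a symmetric add/remove walk on the matchings of a graph, whose stationary distribution is uniform over matchings. This sketch illustrates the moves only; the thesis supplies the rigorous mixing-time and approximation guarantees that the toy chain lacks.

```python
import random

def mcmc_matchings(edges, steps=100000, seed=0):
    """Random walk on matchings: pick an edge; remove it if present,
    add it if both endpoints are currently uncovered."""
    rng = random.Random(seed)
    matching, covered, sizes = set(), set(), []
    for _ in range(steps):
        e = rng.choice(edges)
        u, v = e
        if e in matching:                              # remove move
            matching.remove(e); covered -= {u, v}
        elif u not in covered and v not in covered:    # add move
            matching.add(e); covered |= {u, v}
        sizes.append(len(matching))
    return sizes

# edges of a 4x4 grid graph
grid = [((i, j), (i, j + 1)) for i in range(4) for j in range(3)] + \
       [((i, j), (i + 1, j)) for i in range(3) for j in range(4)]
sizes = mcmc_matchings(grid)
print(sum(sizes) / len(sizes))   # average matching size under the chain
```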
H-PoP and H-PoPG: heuristic partitioning algorithms for single individual haplotyping of polyploids.
Xie, Minzhu; Wu, Qiong; Wang, Jianxin; Jiang, Tao
2016-12-15
Some economically important plants, including wheat and cotton, have more than two copies of each chromosome. With the decreasing cost and increasing read length of next-generation sequencing technologies, reconstructing the multiple haplotypes of a polyploid genome from its sequence reads becomes practical. However, the computational challenge in polyploid haplotyping is much greater than that in diploid haplotyping, and there are few related methods. This article models the polyploid haplotyping problem as an optimal poly-partition problem on the reads, called the Polyploid Balanced Optimal Partition model. For the reads sequenced from a k-ploid genome, the model tries to divide the reads into k groups such that the difference between reads in the same group is minimized while the difference between reads in different groups is maximized. When genotype information is available, the model is extended to the Polyploid Balanced Optimal Partition with Genotype constraint problem. These models are all NP-hard. We propose two heuristic algorithms, H-PoP and H-PoPG, based on dynamic programming and a strategy of limiting the number of intermediate solutions at each iteration, to solve the two models, respectively. Extensive experimental results on simulated and real data show that our algorithms solve the models effectively and are much faster and more accurate than recent state-of-the-art polyploid haplotyping algorithms. The experiments also show that our algorithms handle long reads and deep read coverage effectively and accurately. Furthermore, H-PoP might be applied to help determine the ploidy of an organism. Availability: https://github.com/MinzhuXie/H-PoPG. Contact: xieminzhu@hotmail.com. Supplementary data are available at Bioinformatics online.
Hierarchical group testing for multiple infections.
Hou, Peijie; Tebbs, Joshua M; Bilder, Christopher R; McMahan, Christopher S
2017-06-01
Group testing, where individuals are tested initially in pools, is widely used to screen a large number of individuals for rare diseases. Triggered by the recent development of assays that detect multiple infections at once, screening programs now involve testing individuals in pools for multiple infections simultaneously. Tebbs, McMahan, and Bilder (2013, Biometrics) recently evaluated the performance of a two-stage hierarchical algorithm used to screen for chlamydia and gonorrhea as part of the Infertility Prevention Project in the United States. In this article, we generalize this work to accommodate a larger number of stages. To derive the operating characteristics of higher-stage hierarchical algorithms with more than one infection, we view the pool decoding process as a time-inhomogeneous, finite-state Markov chain. Taking this conceptualization enables us to derive closed-form expressions for the expected number of tests and classification accuracy rates in terms of transition probability matrices. When applied to chlamydia and gonorrhea testing data from four states (Region X of the United States Department of Health and Human Services), higher-stage hierarchical algorithms provide, on average, an estimated 11% reduction in the number of tests when compared to two-stage algorithms. For applications with rarer infections, we show theoretically that this percentage reduction can be much larger.
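For intuition, the simplest special case of the hierarchy above is two-stage (Dorfman) testing with one infection and a perfect assay, for which the expected number of tests per individual has a well-known closed form; the sketch below evaluates it for a few pool sizes.

```python
def dorfman_expected_tests(p, n):
    """Expected tests per individual for two-stage pooled testing with
    pool size n and prevalence p, assuming independent individuals and a
    perfect assay: one pooled test per n people, plus n retests whenever
    the pool is positive (probability 1 - (1-p)^n)."""
    return 1.0 / n + (1.0 - (1.0 - p) ** n)

for n in (4, 8, 16):
    print(n, round(dorfman_expected_tests(0.02, n), 3))
```

The paper's Markov-chain formulation generalizes exactly this kind of expectation to more stages and to several infections decoded simultaneously.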
Retinopathy of Prematurity-assist: Novel Software for Detecting Plus Disease
Pour, Elias Khalili; Pourreza, Hamidreza; Zamani, Kambiz Ameli; Mahmoudi, Alireza; Sadeghi, Arash Mir Mohammad; Shadravan, Mahla; Karkhaneh, Reza; Pour, Ramak Rouhi
2017-01-01
Purpose To design software with a novel algorithm that analyzes tortuosity and vascular dilatation in fundal images of retinopathy of prematurity (ROP) patients with acceptable accuracy for detecting plus disease. Methods Eighty-seven well-focused fundal images taken with RetCam were classified into three groups (plus, non-plus, and pre-plus) by agreement among three ROP experts. The automated algorithms in this study were based on two methods: a curvature measure for the assessment of tortuosity and a distance transform for vascular dilatation, the two major parameters of plus disease detection. Results Thirty-eight plus, 12 pre-plus, and 37 non-plus images classified by the three experts were tested by the automated algorithm, and the software's grouping was evaluated against the expert consensus with three different classifiers: k-nearest neighbor, support vector machine, and multilayer perceptron network. The plus, pre-plus, and non-plus images were analyzed with 72.3%, 83.7%, and 84.4% accuracy, respectively. Conclusions The new automated algorithm used in this pilot scheme for the diagnosis and screening of patients with plus ROP has acceptable accuracy. With further improvement, it may become particularly useful in centers without a person skilled in the ROP field.
Ahn, Hye Shin; Kim, Sun Mi; Jang, Mijung; Yun, Bo La; Kim, Bohyoung; Ko, Eun Sook; Han, Boo-Kyung; Chang, Jung Min; Yi, Ann; Cho, Nariya; Moon, Woo Kyung; Choi, Hye Young
2014-01-01
To compare new full-field digital mammography (FFDM) with and without an advanced post-processing algorithm in terms of image quality, lesion detection, diagnostic performance, and priority rank. During a 22-month period, we prospectively enrolled 100 cases of specimen FFDM (Brestige®), performed alone or in combination with a post-processing algorithm developed by the manufacturer: group A (SMA), specimen mammography without application of "Mammogram enhancement ver. 2.0"; group B (SMB), specimen mammography with application of "Mammogram enhancement ver. 2.0". The two sets of specimen mammograms were randomly reviewed by five experienced radiologists. Image quality, lesion detection, diagnostic performance, and priority rank with regard to image preference were evaluated. Three aspects of image quality (overall quality, contrast, and noise) were significantly better for SMB than for SMA (p < 0.05). SMB was also significantly superior to SMA for visualizing calcifications (p < 0.05). Diagnostic performance, as evaluated by cancer score, was similar between SMA and SMB. SMB was preferred to SMA by four of the five reviewers. The post-processing algorithm may thus improve image quality and image preference in FFDM.
Side-information-dependent correlation channel estimation in hash-based distributed video coding.
Deligiannis, Nikos; Barbarien, Joeri; Jacobs, Marc; Munteanu, Adrian; Skodras, Athanassios; Schelkens, Peter
2012-04-01
In the context of low-cost video encoding, distributed video coding (DVC) has recently emerged as a potential candidate for uplink-oriented applications. This paper builds on a concept of correlation channel (CC) modeling, which expresses the correlation noise as being statistically dependent on the side information (SI). Compared with classical side-information-independent (SII) noise modeling adopted in current DVC solutions, it is theoretically proven that side-information-dependent (SID) modeling improves the Wyner-Ziv coding performance. Anchored in this finding, this paper proposes a novel algorithm for online estimation of the SID CC parameters based on already decoded information. The proposed algorithm enables bit-plane-by-bit-plane successive refinement of the channel estimation leading to progressively improved accuracy. Additionally, the proposed algorithm is included in a novel DVC architecture that employs a competitive hash-based motion estimation technique to generate high-quality SI at the decoder. Experimental results corroborate our theoretical gains and validate the accuracy of the channel estimation algorithm. The performance assessment of the proposed architecture shows remarkable and consistent coding gains over a germane group of state-of-the-art distributed and standard video codecs, even under strenuous conditions, i.e., large groups of pictures and highly irregular motion content.
Lehr, M E; Plisky, P J; Butler, R J; Fink, M L; Kiesel, K B; Underwood, F B
2013-08-01
In athletics, efficient screening tools are sought to curb the rising number of noncontact injuries and the associated health care costs. The authors hypothesized that an injury prediction algorithm incorporating movement screening performance, demographic information, and injury history can accurately categorize the risk of noncontact lower extremity (LE) injury. One hundred eighty-three collegiate athletes were screened during the preseason. The test scores and demographic information were entered into an injury prediction algorithm that weighted the evidence-based risk factors. Athletes were then followed prospectively for noncontact LE injury. Subsequent analysis collapsed the groupings into two risk categories: Low (normal and slight) and High (moderate and substantial). Using these groups and the noncontact LE injuries, relative risk (RR), sensitivity, specificity, and likelihood ratios were calculated. Forty-two subjects sustained a noncontact LE injury over the course of the study. Athletes identified as High Risk (n = 63) were at greater risk of noncontact LE injury (27/63) during the season (RR 3.4, 95% confidence interval 2.0 to 6.0). These results suggest that an injury prediction algorithm composed of performance on efficient, low-cost, field-ready tests can help identify individuals at elevated risk of noncontact LE injury.
NASA Astrophysics Data System (ADS)
Jafari Tadi, Mojtaba; Koivisto, Tero; Pänkäälä, Mikko; Paasio, Ari; Knuutila, Timo; Teräs, Mika; Hänninen, Pekka
2015-03-01
Systolic time intervals (STIs) have significant diagnostic value for the clinical assessment of the left ventricle in adults. This study explored the feasibility of using seismocardiography (SCG) to measure the systolic timings of the cardiac cycle accurately. An algorithm was developed for the automatic localization of cardiac events (e.g. the opening and closing moments of the aortic and mitral valves). Synchronously acquired SCG and electrocardiography (ECG) enabled accurate beat-to-beat estimation of the electromechanical systole (QS2), the pre-ejection period (PEP) index, and the left ventricular ejection time (LVET) index. The performance of the algorithm was evaluated on a healthy test group with no evidence of cardiovascular disease (CVD). STI values were corrected with Weissler's regression method in order to assess the correlation between heart rate and the STIs; the results show that the STIs correlate poorly with heart rate (HR) in this test group. An algorithm was also developed to visualize the quiescent phases of the cardiac cycle: a color map displaying the magnitude of the SCG accelerations over multiple heartbeats visualizes the average cardiac motion and thereby helps identify the quiescent phases. A high correlation between heart rate and the duration of the cardiac quiescent phases was observed.
NASA Astrophysics Data System (ADS)
Uchida, Y.; Takada, E.; Fujisaki, A.; Kikuchi, T.; Ogawa, K.; Isobe, M.
2017-08-01
A method to stochastically discriminate neutron and γ-ray signals measured with a stilbene organic scintillator is proposed. Each pulse signal was stochastically categorized into two groups: neutron and γ-ray. In previous work, the Expectation Maximization (EM) algorithm was used with the assumption that the measured data followed a Gaussian mixture distribution. It was shown that probabilistic discrimination between these groups is possible. Moreover, by setting the initial parameters for the Gaussian mixture distribution with a k-means algorithm, the possibility of automatic discrimination was demonstrated. In this study, the Student's t-mixture distribution was used as a probabilistic distribution with the EM algorithm to improve the robustness against the effect of outliers caused by pileup of the signals. To validate the proposed method, the figures of merit (FOMs) were compared for the EM algorithm assuming a t-mixture distribution and a Gaussian mixture distribution. The t-mixture distribution resulted in an improvement of the FOMs compared with the Gaussian mixture distribution. The proposed data processing technique is a promising tool not only for neutron and γ-ray discrimination in fusion experiments but also in other fields, for example, homeland security, cancer therapy with high energy particles, nuclear reactor decommissioning, pattern recognition, and so on.
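A sketch of the baseline the paper improves on: a two-component mixture fitted by EM to a pulse-shape feature, with the figure of merit computed from the fitted components. scikit-learn offers no Student's t-mixture, so the robust t-mixture step is not reproduced here; the Gaussian mixture below corresponds to the comparison case, and the synthetic feature values are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# toy pulse-shape feature (e.g. tail-to-total charge ratio)
x = np.concatenate([rng.normal(0.15, 0.02, 5000),    # gamma-like pulses
                    rng.normal(0.30, 0.03, 1000)])   # neutron-like pulses

gm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))
mu = np.sort(gm.means_.ravel())
fwhm = 2.355 * np.sqrt(gm.covariances_.ravel())      # FWHM of each component
fom = (mu[1] - mu[0]) / fwhm.sum()                   # figure of merit
print(round(fom, 3))
```

Replacing the Gaussian components with heavier-tailed t components, as the paper does, reduces the influence of pile-up outliers on the fitted means and widths and hence on the FOM.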
Anomaly-Related Pathologic Atlantoaxial Displacement in Pediatric Patients.
Pavlova, Olga M; Ryabykh, Sergey O; Burcev, Alexander V; Gubin, Alexander V
2018-06-01
To analyze the clinical and radiologic features of pathologic atlantoaxial displacement (PAAD) in pediatric patients and to compose a treatment algorithm for anomaly-related PAAD. Criteria for different types of PAAD and treatment algorithms have been widely reported in the literature but are difficult to apply to patients with odontoid abnormalities, C2-C3 block, spina bifida C1, and children. We evaluated the results of treatment of 29 pediatric patients with PAAD caused by congenital anomalies of the craniovertebral junction (CVJ), treated at the Ilizarov Center in 2009-2017, including 20 patients with atlantoaxial displacement (AAD) and 9 patients with atlantoaxial rotatory fixation. There were 14 males (48.3%) and 15 females (51.7%). We identified 3 groups of patients: nonsyndromic (6 patients, 20.7%), Klippel-Feil syndrome (13 patients, 44.8%), and syndromic (10 patients, 34.5%). Odontoid abnormalities and C1 dysplasia were widely represented in the syndromic group. Local symptoms predominated in the nonsyndromic and Klippel-Feil groups. In the syndromic group, all patients had AAD and myelopathy, with a pronounced decrease of the space available for the cord at C1 and an increase of the anterior atlantodental interval compared with the other groups. We present a unified treatment algorithm for pediatric anomaly-related PAAD. Syndromic AAD is often accompanied by anterior and central dislocation, myelopathy, and atlantooccipital dissociation; these patients require early aggressive surgical treatment. Nonsyndromic and Klippel-Feil syndrome AAD, atlantoaxial subluxation, and atlantoaxial fixation often manifest with local symptoms and require elimination of CVJ instability. Existing classifications of symptomatic atlantoaxial displacement are not always suitable for patients with CVJ abnormalities.
Aslanides, Ioannis M; Toliou, Georgia; Padroni, Sara; Arba Mosquera, Samuel; Kolli, Sai
2011-06-01
To compare the refractive and visual outcomes achieved with the Schwind Amaris excimer laser in patients with high astigmatism (>1 D) with and without the static cyclotorsion compensation (SCC) algorithm available on this new laser platform. Seventy consecutive eyes with ≥1 D of astigmatism were randomized to treatment with compensation of static cyclotorsion (SCC group, 35 eyes) or without it (control group, 35 eyes). A previously validated optimized aspheric ablation profile was used in every case, and all patients underwent LASIK with a microkeratome-cut flap. The SCC and control groups did not differ preoperatively in refractive error or in the magnitude, cardinal component, or oblique component of astigmatism. Following treatment, the average deviation from the target SEq was +0.16 D (SD ±0.52 D, range -0.98 D to +1.71 D) in the SCC group compared with +0.46 D (SD ±0.61 D, range -0.25 D to +2.35 D) in the control group, a statistically significant difference (p<0.05). The average postoperative astigmatism was 0.24 D (SD ±0.28 D, range -1.01 D to 0.00 D) in the SCC group compared with 0.46 D (SD ±0.42 D, range -1.80 D to 0.00 D) in the control group, a highly statistically significant difference (p<0.005). There was no statistical difference in postoperative uncorrected vision with the aspheric algorithm, although there was a trend toward a greater number of lines gained in the SCC group. This study shows that static cyclotorsion is accurately compensated for by the Schwind Amaris laser platform, and that compensating static cyclotorsion in patients with moderate astigmatism produces a significant improvement in refractive and astigmatic outcomes compared with uncompensated treatment.
[Algorithms of artificial neural networks--practical application in medical science].
Stefaniak, Bogusław; Cholewiński, Witold; Tarkowska, Anna
2005-12-01
Artificial Neural Networks (ANNs) can be an alternative and a complement to typical statistical analysis. However, despite the many ready-to-use computer implementations of various ANN algorithms, artificial intelligence is still relatively rarely applied to data processing. This paper presents practical aspects of the scientific application of ANNs in medicine using widely available algorithms. The main steps of an ANN analysis are discussed, from material selection and its division into groups to the final quality assessment of the results. The most frequent and typical sources of error are also described, and the ANN approach is compared with modeling by regression analysis.
Analysis of High-Throughput ELISA Microarray Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Amanda M.; Daly, Don S.; Zangar, Richard C.
Our research group develops analytical methods and software for the high-throughput analysis of quantitative enzyme-linked immunosorbent assay (ELISA) microarrays. ELISA microarrays differ from DNA microarrays in several fundamental aspects and most algorithms for analysis of DNA microarray data are not applicable to ELISA microarrays. In this review, we provide an overview of the steps involved in ELISA microarray data analysis and how the statistically sound algorithms we have developed provide an integrated software suite to address the needs of each data-processing step. The algorithms discussed are available in a set of open-source software tools (http://www.pnl.gov/statistics/ProMAT).
Arteaga-Sierra, F R; Milián, C; Torres-Gómez, I; Torres-Cisneros, M; Moltó, G; Ferrando, A
2014-09-22
We present a numerical strategy to design fiber-based dual-pulse light sources exhibiting two predefined spectral peaks in the anomalous group velocity dispersion regime. The frequency conversion is based on the soliton fission and soliton self-frequency shift occurring during supercontinuum generation. The optimization process is carried out by a genetic algorithm that provides the optimum input pulse parameters: wavelength, temporal width, and peak power. This algorithm is implemented on a Grid platform in order to take advantage of distributed computing. These results are useful for optical coherence tomography applications where bell-shaped pulses located in the second near-infrared window are needed.
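A toy genetic-algorithm loop over the same three input-pulse parameters (wavelength, temporal width, peak power). The fitness function is a placeholder; in the paper it scores the two supercontinuum spectral peaks produced by a full pulse-propagation simulation, and all bounds and GA settings below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
lo = np.array([1000e-9, 50e-15, 1e3])    # wavelength [m], width [s], power [W]
hi = np.array([1600e-9, 500e-15, 20e3])

def fitness(p):
    # placeholder objective; a real run would simulate pulse propagation
    # and score the two target spectral peaks
    return -np.sum(((p - (lo + hi) / 2) / (hi - lo)) ** 2)

pop = rng.uniform(lo, hi, size=(40, 3))          # initial population
for _ in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]      # selection: keep best half
    kids = parents[rng.integers(0, 20, 20)].copy()
    kids += rng.normal(0, 0.02, kids.shape) * (hi - lo)   # Gaussian mutation
    pop = np.vstack([parents, np.clip(kids, lo, hi)])
print(pop[np.argmax([fitness(p) for p in pop])])  # best pulse parameters found
```

Because each fitness evaluation is an independent simulation, the loop parallelizes naturally, which is what makes the Grid deployment described above attractive.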
Crammer, Koby; Singer, Yoram
2005-01-01
We discuss the problem of ranking instances. In our framework, each instance is associated with a rank or a rating, which is an integer from 1 to k. Our goal is to find a rank-prediction rule that assigns each instance a rank as close as possible to the instance's true rank. We describe a group of closely related online algorithms, analyze their performance in the mistake-bound model, and prove their correctness. We report two sets of experiments, one with synthetic data and one with the EachMovie dataset for collaborative filtering. In the experiments we performed, our algorithms outperform online algorithms for regression and classification applied to ranking.
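A compact rendition of the PRank-style online update this line of work is known for: a single weight vector plus k-1 ordered thresholds define the rank rule, and both are corrected whenever a threshold lies on the wrong side of the score w·x. The synthetic rating generator is invented for the demo.

```python
import numpy as np

class PRank:
    def __init__(self, dim, k):
        self.w = np.zeros(dim)
        self.b = np.zeros(k - 1)          # thresholds b_1 <= ... <= b_{k-1}

    def predict(self, x):
        s = self.w @ x
        # rank = smallest r with s < b_r (k if s clears every threshold)
        return int(np.searchsorted(self.b, s, side="right")) + 1

    def update(self, x, y):               # y in 1..k
        s = self.w @ x
        yr = np.where(np.arange(1, len(self.b) + 1) < y, 1.0, -1.0)
        tau = np.where((s - self.b) * yr <= 0, yr, 0.0)   # violated thresholds
        self.w += tau.sum() * x
        self.b -= tau

model = PRank(dim=5, k=4)
rng = np.random.default_rng(6)
for _ in range(1000):
    x = rng.normal(size=5)
    y = int(np.clip(round(x.sum()) + 2, 1, 4))   # synthetic integer rating
    model.update(x, y)
print(model.predict(rng.normal(size=5)))
```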
sp3-hybridized framework structure of group-14 elements discovered by genetic algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Manh Cuong; Zhao, Xin; Wang, Cai-Zhuang
2014-05-01
Group-14 elements, including C, Si, Ge, and Sn, can form various stable and metastable structures. Finding new metastable structures of group-14 elements with desirable physical properties for new technological applications has attracted a lot of interest. Using a genetic algorithm, we discovered a new low-energy metastable distorted sp3-hybridized framework structure of the group-14 elements. It has P42/mnm symmetry with 12 atoms per unit cell. The void volume of this structure is as large as 139.7 Å³ for Si P42/mnm, and it can be used for gas or metal-atom encapsulation. Band-structure calculations show that the P42/mnm structures of Si and Ge are semiconducting, with energy band gaps close to the optimal values for optoelectronic or photovoltaic applications. With metal-atom encapsulation, the P42/mnm structure would also be a candidate for rattling-mediated superconductivity or for use as a thermoelectric material.
Secure Multicast Tree Structure Generation Method for Directed Diffusion Using A* Algorithms
NASA Astrophysics Data System (ADS)
Kim, Jin Myoung; Lee, Hae Young; Cho, Tae Ho
The application of wireless sensor networks to areas such as combat field surveillance, terrorist tracking, and highway traffic monitoring requires secure communication among the sensor nodes within the networks. Logical key hierarchy (LKH) is a tree-based key management model which provides secure group communication. When a sensor node is added to or evicted from the communication group, LKH updates the group key in order to ensure the security of the communications. In order to update the group key efficiently in directed diffusion, we propose a method for secure multicast tree structure generation, an extension to LKH that reduces the number of re-keying messages by considering the addition and eviction ratios in the history data. The proposed key tree structure is generated with the A* algorithm, in which the branching factor at each level can take on a different value. The experimental results demonstrate the efficiency of the proposed key tree structure against existing key tree structures with fixed branching factors.
Stallings-Welden, Lois M; Doerner, Mary; Ketchem, Elizabeth Libby; Benkert, Laura; Alka, Susan; Stallings, Jonathan D
2018-04-01
To determine the effectiveness of aromatherapy (AT) compared with standard care (SC) for postoperative and postdischarge nausea and vomiting (PONV/PDNV) in ambulatory surgical patients. Prospective randomized study. Patients (n = 254) received either SC or AT for PONV and were interviewed about effectiveness for PDNV. Machine learning methods (eight algorithms) were used for evaluation. Of the patients who experienced PONV (64 of 221), 52% were in the AT group and 48% in the SC group. The majority were satisfied with treatment (timeliness, P = .60; effectiveness, P = .86). Of the patients who experienced PDNV, treatment was 100% effective in the AT group and 67% effective in the SC group. The cforest algorithm was used to develop a model for predicting PONV from literature-based risk factors (0.69 area under the curve). AT is an effective way to manage PONV/PDNV. Gender and age were the most important predictors of PONV.
Christodoulou, Asterios; Mikrogeorgis, Georgios; Vouzara, Triantafillia; Papachristou, Konstantinos; Angelopoulos, Christos; Nikolaidis, Nikolaos; Pitas, Ioannis; Lyroudia, Kleoniki
2018-02-15
In this study, the three-dimensional (3D) modification of root canal curvature after application of the Reciproc instrumentation technique was measured using cone beam computed tomography (CBCT) imaging and a special algorithm developed for the 3D measurement of root canal curvature. Thirty extracted upper molars were selected, and digital radiographs of each tooth were taken. Root curvature was measured using the Schneider method, and the roots were divided into three groups of 10 according to their curvature: Group 1 (0°-20°), Group 2 (21°-40°), and Group 3 (41°-60°). CBCT imaging was applied to each tooth before and after instrumentation, and the data were examined using a specially developed CBCT image analysis algorithm. Instrumentation with Reciproc decreased the curvature by 30.23% on average in all groups. The proposed methodology proved able to measure the curvature of the root canal and its 3D modification after instrumentation.
Design optimization of steel frames using an enhanced firefly algorithm
NASA Astrophysics Data System (ADS)
Carbas, Serdar
2016-12-01
Mathematical modelling of real-world-sized steel frames under the Load and Resistance Factor Design-American Institute of Steel Construction (LRFD-AISC) steel design code provisions, where the steel profiles for the members are selected from a table of steel sections, turns out to be a discrete nonlinear programming problem. Finding the optimum design of such problems using classical optimization techniques is difficult, and metaheuristic algorithms provide an alternative way of solving them. The firefly algorithm (FFA) belongs to the swarm intelligence group of metaheuristics. The standard FFA has the drawback of becoming trapped in local optima in large-sized steel frame design problems. This study attempts to enhance the performance of the FFA by suggesting two new expressions for the attractiveness and randomness parameters of the algorithm. Two real-world-sized design examples are designed by the enhanced FFA, and its performance is compared with the standard FFA as well as with particle swarm and cuckoo search algorithms.
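For reference, a sketch of the standard firefly move that the paper's enhanced attractiveness and randomness expressions modify: each firefly drifts toward every brighter one with attractiveness beta0·exp(-gamma·r²) plus a random walk scaled by alpha. The enhanced expressions themselves are not reproduced here, and the quadratic test objective is a stand-in for the discrete steel-frame design problem.

```python
import numpy as np

def firefly_step(X, f, beta0=1.0, gamma=1.0, alpha=0.1, rng=None):
    """One iteration of the standard FFA (minimization)."""
    rng = rng or np.random.default_rng()
    Xn = X.copy()
    for i in range(len(X)):
        for j in range(len(X)):
            if f[j] < f[i]:                          # firefly j is brighter
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)   # distance-decayed attraction
                Xn[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=X.shape[1])
    return Xn

rng = np.random.default_rng(7)
X = rng.uniform(-5, 5, size=(15, 2))
for _ in range(50):
    X = firefly_step(X, [np.sum(x ** 2) for x in X], rng=rng)
print(X[np.argmin([np.sum(x ** 2) for x in X])])     # best firefly found
```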
Lu, Huijuan; Wei, Shasha; Zhou, Zili; Miao, Yanzi; Lu, Yi
2015-01-01
The main purpose of traditional classification algorithms in bioinformatics applications is to achieve better classification accuracy. However, these algorithms cannot meet the requirement of minimising the average misclassification cost. In this paper, a new cost-sensitive regularised extreme learning machine (CS-RELM) algorithm is proposed that uses probability estimation and misclassification costs to reconstruct the classification results. By improving the classification accuracy on small-sample groups with higher misclassification costs, the new CS-RELM can minimise the overall classification cost. A 'rejection cost' was integrated into the CS-RELM algorithm to further reduce the average misclassification cost. Using the Colon Tumour and SRBCT (Small Round Blue Cell Tumour) datasets, CS-RELM was compared with other cost-sensitive algorithms such as the extreme learning machine (ELM), cost-sensitive extreme learning machine, regularised extreme learning machine, and cost-sensitive support vector machine (SVM). The experimental results show that CS-RELM with an embedded rejection cost reduces the average misclassification cost and makes more credible classification decisions than the alternatives.
Private algorithms for the protected in social network search
Kearns, Michael; Roth, Aaron; Wu, Zhiwei Steven; Yaroslavtsev, Grigory
2016-01-01
Motivated by tensions between data privacy for individual citizens and societal priorities such as counterterrorism and the containment of infectious disease, we introduce a computational model that distinguishes between parties for whom privacy is explicitly protected, and those for whom it is not (the targeted subpopulation). The goal is the development of algorithms that can effectively identify and take action upon members of the targeted subpopulation in a way that minimally compromises the privacy of the protected, while simultaneously limiting the expense of distinguishing members of the two groups via costly mechanisms such as surveillance, background checks, or medical testing. Within this framework, we provide provably privacy-preserving algorithms for targeted search in social networks. These algorithms are natural variants of common graph search methods, and ensure privacy for the protected by the careful injection of noise in the prioritization of potential targets. We validate the utility of our algorithms with extensive computational experiments on two large-scale social network datasets. PMID:26755606
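The core mechanism, careful noise injection into the prioritization of a graph search, can be sketched compactly. Below is a toy greedy search that orders frontier nodes by a noisy count of already-discovered targeted neighbors, with Laplace noise sampled as the difference of two exponentials; the graph, scoring rule, noise scale, and function names are illustrative stand-ins, not the paper's construction.

```python
import heapq, random

def noisy_targeted_search(adj, start, targeted, eps=1.0, budget=10, seed=0):
    """Greedy graph search that examines up to `budget` nodes, prioritizing
    nodes by a noisy count of already-found targeted neighbors. Laplace noise
    on every priority masks any single protected node's contribution."""
    rng = random.Random(seed)
    found, visited = set(), set()
    frontier = [(0.0, start)]
    while frontier and len(visited) < budget:
        _, node = heapq.heappop(frontier)
        if node in visited:
            continue
        visited.add(node)  # "examine" the node (the costly mechanism)
        if node in targeted:
            found.add(node)
        for nb in adj[node]:
            if nb in visited:
                continue
            score = sum(1 for x in adj[nb] if x in found)
            # Laplace(0, 1/eps) sample = difference of two Exp(eps) draws
            noise = rng.expovariate(eps) - rng.expovariate(eps)
            heapq.heappush(frontier, (-(score + noise), nb))
    return found, visited

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
print(noisy_targeted_search(adj, 0, targeted={3}, eps=0.5))
```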
Uni10: an open-source library for tensor network algorithms
NASA Astrophysics Data System (ADS)
Kao, Ying-Jer; Hsieh, Yun-Da; Chen, Pochung
2015-09-01
We present Uni10, an object-oriented open-source C++ library for developing tensor network algorithms. With Uni10, users can build a symmetric tensor from a collection of bonds, where the bonds are constructed from a list of quantum numbers associated with different quantum states. It is easy to label and permute the indices of the tensors and to access a block associated with a particular quantum number. Furthermore, a network class is used to describe arbitrary tensor network structures and to perform network contractions efficiently. We give an overview of the basic structure of the library and the hierarchy of its classes. We present examples of the construction of a spin-1 Heisenberg Hamiltonian and the implementation of the tensor renormalization group algorithm to illustrate the basic usage of the library. The library described here is particularly well suited for exploring and rapidly prototyping novel tensor network algorithms and for implementing highly efficient codes for existing algorithms.
Identifying online user reputation of user-object bipartite networks
NASA Astrophysics Data System (ADS)
Liu, Xiao-Lu; Liu, Jian-Guo; Yang, Kai; Guo, Qiang; Han, Jing-Ti
2017-02-01
Identifying online user reputation from the rating information of user-object bipartite networks is important for understanding online collective behaviors. Based on Bayesian analysis, we present a parameter-free algorithm for ranking online user reputation, in which a user's reputation is calculated from the probability that his or her ratings are consistent with the main body of all user opinions. The experimental results show that the AUC values of the presented algorithm reach 0.8929 and 0.8483 for the MovieLens and Netflix data sets, respectively, which is better than the results generated by the CR and IARR methods. Furthermore, the experimental results for different user groups indicate that the presented algorithm outperforms iterative ranking methods in both ranking accuracy and computational complexity. Moreover, the results for synthetic networks show that the computational complexity of the presented algorithm is a linear function of the network size, which suggests that the presented algorithm is effective and efficient for large-scale dynamic online systems.
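A minimal sketch of the central idea, under the simplifying assumption that "consistency with the main part of all user opinions" is measured by agreement with each object's modal rating; the paper's Bayesian weighting is not reproduced, and all names are illustrative.

```python
import numpy as np

def user_reputation(ratings):
    """ratings: 2-D int array, ratings[u, o] in {1..5}, 0 = not rated.
    Reputation = fraction of a user's ratings that match the modal
    (majority) rating of the object being rated."""
    n_users, n_objects = ratings.shape
    majority = np.zeros(n_objects, dtype=int)
    for o in range(n_objects):
        given = ratings[:, o][ratings[:, o] > 0]
        if given.size:
            majority[o] = np.bincount(given).argmax()
    rep = np.zeros(n_users)
    for u in range(n_users):
        rated = ratings[u] > 0
        if rated.any():
            rep[u] = np.mean(ratings[u, rated] == majority[rated])
    return rep

R = np.array([[5, 4, 0],
              [5, 4, 3],
              [1, 4, 3],
              [5, 1, 3]])
print(user_reputation(R))  # users agreeing with the majority score highest
```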
Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.
2012-06-15
In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits direct calculation of the conditional probability from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimate, with lower order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. The algorithm can be extended to higher order marginal probability constraints, as used in multiple-point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
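The marginal-constrained fitting is easiest to see in two dimensions: a joint table is alternately rescaled so that its row and column sums match imposed marginals, which is exactly the iterative modification described above. The sketch below shows this classic iterative proportional fitting loop; the multivariate, sparse-matrix setting of the article extends the same idea, and the numbers are illustrative.

```python
import numpy as np

def ipf(joint, row_marginal, col_marginal, iters=100, tol=1e-10):
    """Fit a 2-D joint probability table to fixed row/column marginals."""
    p = joint.copy()
    for _ in range(iters):
        p *= (row_marginal / p.sum(axis=1))[:, None]   # rescale to match row sums
        p *= (col_marginal / p.sum(axis=0))[None, :]   # rescale to match column sums
        if (np.abs(p.sum(axis=1) - row_marginal).max() < tol and
                np.abs(p.sum(axis=0) - col_marginal).max() < tol):
            break
    return p

init = np.full((3, 3), 1 / 9)            # uninformative starting joint
rows = np.array([0.5, 0.3, 0.2])          # imposed marginal proportions
cols = np.array([0.6, 0.3, 0.1])
print(ipf(init, rows, cols).round(4))
```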
The New CCSDS Image Compression Recommendation
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph
2005-01-01
The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.
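To make the two stages concrete, here is a toy single-level 2-D Haar transform followed by a crude bit-plane scan of one subband. The recommendation itself specifies a different wavelet and a far more elaborate progressive coder, so this is only a structural sketch with illustrative names.

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar transform, returning LL, LH, HL, HH subbands."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2      # horizontal average
    hi = (a[:, 0::2] - a[:, 1::2]) / 2      # horizontal detail
    ll, lh = (lo[0::2] + lo[1::2]) / 2, (lo[0::2] - lo[1::2]) / 2
    hl, hh = (hi[0::2] + hi[1::2]) / 2, (hi[0::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

def bit_planes(coeffs, n_planes=4):
    """Emit coefficient-magnitude bits from most to least significant plane,
    so truncating the stream yields a progressively coarser reconstruction."""
    mag = np.abs(np.rint(coeffs)).astype(int)
    return [(mag >> p) & 1 for p in range(n_planes - 1, -1, -1)]

img = np.arange(64).reshape(8, 8)
ll, lh, hl, hh = haar2d(img)
planes = bit_planes(ll)
print(ll.shape, len(planes))
```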
Spencer, Kevin; Nicolaides, Kypros H
2002-10-01
This study examines 45 cases of trisomy 13 and 59 cases of trisomy 18 and reports an algorithm to identify pregnancies with a fetus affected by trisomy 13 or 18 from a combination of maternal age, fetal nuchal translucency (NT) thickness, and maternal serum free beta-hCG and PAPP-A at 11-14 weeks of gestation. In this mixed trisomy group the median NT MoM was increased at 2.819, whilst the median MoMs for free beta-hCG and PAPP-A were reduced at 0.375 and 0.201, respectively. We predict that use of the combined trisomy 13 and 18 algorithm with a risk cut-off of 1 in 150 will, at a 0.3% false-positive rate, allow 95% of these chromosomal defects to be identified at 11-14 weeks. Such algorithms will enhance existing first trimester screening algorithms for trisomy 21. Copyright 2002 John Wiley & Sons, Ltd.
Pavón, Margarita Valencia; Cucina, Andrea; Tiesler, Vera
2010-03-01
This study develops new histomorphological age-estimation algorithms for human ribs from Maya populations and tests the applicability of published algorithms. Thin sections from the fourth rib of 36 individuals of known age were analyzed under polarized light microscopy. Osteon population density (OPD, the concentration of intact and fragmented osteons per mm²), cortical area (CA), and osteon size (OS) were recorded. Seven algorithms were calculated, using all combinations of variables, and compared with the performance of published formulas. The OPD-based formulas deviate from the known age by 8.7 years on average, while those based on OS and CA deviate by between 10.7 and 12.8 years. In comparison, our OPD-based algorithms perform better than the one by Stout and Paine and much better than that of Cho et al. In conclusion, algorithms should be developed using OPD for different ethnic groups, although Stout and Paine's can be used for Maya and perhaps other Mesoamerican individuals.
A Greedy Algorithm for Brain MRI's Registration.
Chesseboeuf, Clément
2016-12-01
This document presents a non-rigid registration algorithm for the comparison of brain magnetic resonance (MR) images. More precisely, we want to compare pre-operative and post-operative MR images in order to assess the deformation due to a surgical removal. The proposed algorithm was studied in Chesseboeuf et al. (Non-rigid registration of magnetic resonance imaging of brain. IEEE, 385-390. doi: 10.1109/IPTA.2015.7367172, 2015), following ideas of Trouvé (An infinite dimensional group approach for physics based models in patterns recognition. Technical Report DMI Ecole Normale Supérieure, Cachan, 1995), in which the author introduces the algorithm within a very general framework. Here we recall this theory from a practical point of view, with emphasis on illustrations and a description of the numerical procedure. Our version of the algorithm is associated with a particular matching criterion, and a section is devoted to the description of this criterion. In the last section we focus on the construction of a statistical method of evaluation.
Ebtehaj, Isa; Bonakdari, Hossein
2014-01-01
The existence of sediments in wastewater greatly affects the performance of sewer and wastewater transmission systems. Increased sedimentation in wastewater collection systems causes problems such as reduced transmission capacity and early combined sewer overflow. This article evaluates the performance of the genetic algorithm (GA) and the imperialist competitive algorithm (ICA) in minimizing the objective function (the mean square error between observed and predicted Froude numbers). To study the impact of bed-load transport parameters, six different models based on four non-dimensional groups are presented. The roulette wheel selection method is used to select the parents. For the selected model, the ICA (root mean square error (RMSE) = 0.007, mean absolute percentage error (MAPE) = 3.5%) shows better results than the GA (RMSE = 0.007, MAPE = 5.6%), and the ICA returns better results than the GA for all six models. The results of these two algorithms were also compared with a multi-layer perceptron and with existing equations.
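Since the abstract singles out roulette wheel selection for choosing parents, a minimal sketch of that operator follows. The inverse-error fitness mapping is an assumption made for this minimization setting, not a detail given in the abstract, and the candidate names and error values are illustrative.

```python
import random

def roulette_select(population, errors, rng=random):
    """Pick one parent with probability proportional to fitness = 1/(1 + error)."""
    fitness = [1.0 / (1.0 + e) for e in errors]     # lower error -> larger wheel slice
    total = sum(fitness)
    pick = rng.uniform(0, total)                    # spin the wheel
    acc = 0.0
    for individual, f in zip(population, fitness):
        acc += f
        if acc >= pick:
            return individual
    return population[-1]                           # guard against float rounding

pop = ["model_a", "model_b", "model_c"]
errs = [0.007, 0.02, 0.15]    # e.g. RMSE of each candidate
print([roulette_select(pop, errs) for _ in range(5)])
```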
An efficient parallel-processing method for transposing large matrices in place.
Portnoff, M R
1999-01-01
We have developed an efficient algorithm for transposing large matrices in place. The algorithm is efficient because data are accessed either sequentially in blocks or randomly within blocks small enough to fit in cache, and because the same indexing calculations are shared among identical procedures operating on independent subsets of the data. This inherent parallelism makes the method well suited for a multiprocessor computing environment. The algorithm is easy to implement because the same two procedures are applied to the data in various groupings to carry out the complete transpose operation. Using only a single processor, we have demonstrated nearly an order of magnitude increase in speed over the previously published algorithm by Gate and Twigg for transposing a large rectangular matrix in place. With multiple processors operating in parallel, the processing speed increases almost linearly with the number of processors. A simplified version of the algorithm for square matrices is presented as well as an extension for matrices large enough to require virtual memory.
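The simplified square-matrix version mentioned at the end lends itself to a short sketch: the matrix is walked in tiles, and mirror-image tiles are swapped in place so memory is touched in contiguous blocks rather than strided single elements. The tile size and NumPy formulation below are illustrative, not the author's tuned implementation.

```python
import numpy as np

def transpose_inplace(a, block=64):
    """In-place transpose of a square matrix using block swaps."""
    n = a.shape[0]
    assert a.shape == (n, n)
    for i in range(0, n, block):
        for j in range(i, n, block):
            bi, bj = min(i + block, n), min(j + block, n)
            if i == j:
                a[i:bi, j:bj] = a[i:bi, j:bj].T.copy()   # transpose a diagonal tile
            else:
                upper = a[i:bi, j:bj].copy()             # swap mirror-image tiles
                a[i:bi, j:bj] = a[j:bj, i:bi].T
                a[j:bj, i:bi] = upper.T
    return a

m = np.arange(9, dtype=float).reshape(3, 3)
print(transpose_inplace(m.copy(), block=2))
```

The off-diagonal tile pairs are independent of one another, which is the property that lets multiple processors work on disjoint tile pairs in parallel.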
A fingerprint classification algorithm based on combination of local and global information
NASA Astrophysics Data System (ADS)
Liu, Chongjin; Fu, Xiang; Bian, Junjie; Feng, Jufu
2011-12-01
Fingerprint recognition is one of the most important technologies in biometric identification and has been widely applied in commercial and forensic areas. Fingerprint classification, as the fundamental procedure in fingerprint recognition, can sharply decrease the number of candidates for fingerprint matching and improve the efficiency of fingerprint recognition. Most fingerprint classification algorithms are based on the number and position of singular points. Because singular-point detection methods commonly consider only local information, such classification algorithms are sensitive to noise. In this paper, we propose a novel fingerprint classification algorithm combining the local and global information of the fingerprint. First, we use local information to detect singular points and measure their quality, considering the orientation structure and image texture in adjacent areas. Then the global orientation model is adopted to measure the reliability of the group of singular points. Finally, the local quality and global reliability are weighted to classify the fingerprint. Experiments demonstrate the accuracy and effectiveness of our algorithm, especially for poor-quality fingerprint images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hui, Cheukkai; Suh, Yelin; Robertson, Daniel
Purpose: The purpose of this study was to develop a novel algorithm to create a robust internal respiratory signal (IRS) for retrospective sorting of four-dimensional (4D) computed tomography (CT) images. Methods: The proposed algorithm combines information from the Fourier transform of the CT images and from internal anatomical features to form the IRS. The algorithm first extracts potential respiratory signals from low-frequency components in the Fourier space and from selected anatomical features in the image space. A clustering algorithm then constructs groups of potential respiratory signals with similar temporal oscillation patterns. The clustered group with the largest number of similar signals is chosen to form the final IRS. To evaluate the performance of the proposed algorithm, the IRS was computed and compared with the external respiratory signal from the real-time position management (RPM) system on 80 patients. Results: In 72 (90%) of the 4D CT data sets tested, the IRS computed by the authors' proposed algorithm matched the RPM signal based on their normalized cross correlation. For these data sets with matching respiratory signals, the average difference between the end-inspiration times (Δt_ins) in the IRS and the RPM signal was 0.11 s, and only 2.1% of the Δt_ins values were more than 0.5 s apart. In the eight (10%) 4D CT data sets in which the IRS and the RPM signal did not match, the average Δt_ins was 0.73 s at the nonmatching couch positions, and 35.4% of the Δt_ins values were greater than 0.5 s. At couch positions at which the IRS did not match the RPM signal, a correlation-based metric indicated poorer matching of neighboring couch positions in the RPM-sorted images. This implies that, when the IRS did not match the RPM signal, the images sorted using the IRS showed fewer artifacts than the clinical images sorted using the RPM signal. Conclusions: The authors' proposed algorithm can generate robust IRSs that can be used for retrospective sorting of 4D CT data. The algorithm is completely automatic and requires very little processing time. The algorithm is cost efficient and can be easily adopted for everyday clinical use.
Chen, Hongming; Carlsson, Lars; Eriksson, Mats; Varkonyi, Peter; Norinder, Ulf; Nilsson, Ingemar
2013-06-24
A novel methodology was developed to build Free-Wilson-like local QSAR models by combining R-group signatures and the SVM algorithm. Unlike Free-Wilson analysis, this method is able to make predictions for compounds with R-groups not present in the training set. Eleven public data sets were chosen as test cases for comparing the performance of the new method with several traditional modeling strategies, including Free-Wilson analysis. Our results show that the R-group signature SVM models generally achieve better prediction accuracy than Free-Wilson analysis. Moreover, the predictions of the R-group signature models are comparable to those of models using ECFP6 fingerprints and signatures for the whole compound. Most importantly, R-group contributions to the SVM model can be obtained by calculating the gradient with respect to the R-group signatures. For most of the studied data sets, these contributions correlate significantly with those of a corresponding Free-Wilson analysis. These results suggest that the R-group contributions can be used to interpret bioactivity data, and they highlight that the R-group signature based SVM modeling method is as interpretable as Free-Wilson analysis. Hence the signature SVM model can be a useful modeling tool for any drug discovery project.
A data distributed parallel algorithm for ray-traced volume rendering
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Painter, James S.; Hansen, Charles D.; Krogh, Michael F.
1993-01-01
This paper presents a divide-and-conquer ray-traced volume rendering algorithm and a parallel image compositing method, along with their implementation and performance on the Connection Machine CM-5 and on networked workstations. The algorithm distributes both the data and the computations to individual processing units to achieve fast, high-quality rendering of high-resolution data. The volume data, once distributed, are left intact. The processing nodes perform local ray tracing of their subvolumes concurrently; no communication between processing units is needed during this local ray-tracing process. A subimage is generated by each processing unit, and the final image is obtained by compositing the subimages in the proper order, which can be determined a priori. Test results on both the CM-5 and a group of networked workstations demonstrate the practicality of our rendering algorithm and compositing method.
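The compositing step can be sketched independently of the ray tracer: each node contributes an RGBA subimage of its subvolume, and the final image results from "over" compositing in the view-dependent depth order. The premultiplied-alpha convention assumed below is a common choice, not necessarily the paper's, and the data are synthetic.

```python
import numpy as np

def composite_over(subimages):
    """Back-to-front 'over' compositing of RGBA subimages.
    Assumes premultiplied alpha and that `subimages` is ordered
    back to front, as fixed a priori by the view direction."""
    h, w, _ = subimages[0].shape
    out = np.zeros((h, w, 4))
    for img in subimages:
        alpha = img[..., 3:4]
        out = img + (1.0 - alpha) * out   # C = C_front + (1 - a_front) * C_behind
    return out

rng = np.random.default_rng(0)
far, near = rng.random((4, 4, 4)), rng.random((4, 4, 4))   # two nodes' subimages
print(composite_over([far, near])[..., 3])                  # combined alpha channel
```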
A Theoretical Analysis of Why Hybrid Ensembles Work.
Hsu, Kuo-Wei
2017-01-01
Inspired by the group decision-making process, ensembles or combinations of classifiers have been found favorable in a wide variety of application domains. Some researchers propose to mix two different types of classification algorithms to create a hybrid ensemble. Why does such an ensemble work? The question remains open. Following the concept of diversity, one of the fundamental elements of the success of ensembles, we conduct a theoretical analysis of why hybrid ensembles work, connecting the use of different algorithms to accuracy gain. We also conduct experiments on the classification performance of hybrid ensembles of classifiers created by the decision tree and naïve Bayes classification algorithms, each of which is a top data mining algorithm often used to create non-hybrid ensembles. Through this paper, we thus provide a complement to the theoretical foundation of creating and using hybrid ensembles.
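A minimal scikit-learn sketch of the kind of hybrid ensemble studied, a decision tree plus naive Bayes combined by soft voting, is given below; the paper's own ensemble construction and data sets may differ, and the dataset and hyperparameters here are illustrative.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Hybrid ensemble: two *different* algorithm families, combined by soft voting.
hybrid = VotingClassifier(
    estimators=[("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
                ("nb", GaussianNB())],
    voting="soft",
)

for name, clf in [("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
                  ("nb", GaussianNB()),
                  ("hybrid", hybrid)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))
```

The diversity argument predicts that the hybrid tends to match or beat its base learners when their errors are weakly correlated, which this comparison makes easy to check empirically.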
OGUPSA sensor scheduling architecture and algorithm
NASA Astrophysics Data System (ADS)
Zhang, Zhixiong; Hintz, Kenneth J.
1996-06-01
This paper introduces a new architecture for a sensor measurement scheduler as well as a dynamic sensor scheduling algorithm called the on-line, greedy, urgency-driven, preemptive scheduling algorithm (OGUPSA). OGUPSA incorporates a preemptive mechanism which uses three policies: (1) most-urgent-first (MUF), (2) earliest-completed-first (ECF), and (3) least-versatile-first (LVF). The three policies are applied successively to dynamically allocate, schedule, and distribute a set of arriving tasks among a set of sensors. OGUPSA can also detect the failure of a task to meet a deadline and can generate an optimal schedule, in the sense of minimum makespan, for a group of tasks with equal priorities. A side benefit is OGUPSA's ability to improve dynamic load balancing among all sensors while remaining a polynomial-time algorithm. Results of a simulation are presented for a simple sensor system.
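The three-policy cascade reduces to successive filtering and tie-breaking, as in the sketch below; preemption and deadline checks are omitted, and the task and sensor attributes ('urgency', 'duration', 'free_at', 'versatility') are hypothetical field names chosen for illustration.

```python
def cascade_pick(tasks, sensors, now):
    """Pick a (task, sensor) pair using MUF -> ECF -> LVF tie-breaking.
    tasks:   list of dicts with 'urgency' and 'duration'.
    sensors: list of dicts with 'free_at' (next idle time) and
             'versatility' (how many task types the sensor can serve)."""
    # (1) MUF: consider only the most urgent tasks
    top = max(t["urgency"] for t in tasks)
    urgent = [t for t in tasks if t["urgency"] == top]
    # (2) ECF: among candidate assignments, keep the earliest completions
    pairs = [(max(s["free_at"], now) + t["duration"], s["versatility"], t, s)
             for t in urgent for s in sensors]
    best_finish = min(p[0] for p in pairs)
    finished = [p for p in pairs if p[0] == best_finish]
    # (3) LVF: spend the least versatile sensor, saving flexible ones
    _, _, task, sensor = min(finished, key=lambda p: p[1])
    return task, sensor

tasks = [{"urgency": 2, "duration": 3.0}, {"urgency": 1, "duration": 1.0}]
sensors = [{"free_at": 0.0, "versatility": 3}, {"free_at": 1.0, "versatility": 1}]
print(cascade_pick(tasks, sensors, now=0.0))
```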
On improving linear solver performance: a block variant of GMRES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, A H; Dennis, J M; Jessup, E R
2004-05-10
The increasing gap between processor performance and memory access time warrants the re-examination of data movement in iterative linear solver algorithms. For this reason, we explore and establish the feasibility of modifying a standard iterative linear solver algorithm in a manner that reduces the movement of data through memory. In particular, we present an alternative to the restarted GMRES algorithm for solving a single right-hand side linear system Ax = b based on solving the block linear system AX = B. Algorithm performance, i.e. time to solution, is improved by using the matrix A in operations on groups of vectors. Experimental results demonstrate the importance of implementation choices on data movement as well as the effectiveness of the new method on a variety of problems from different application areas.
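The data-movement argument is easy to demonstrate in isolation: applying A to a block of vectors streams A through memory once rather than once per vector. The toy timing comparison below illustrates only this effect, not the block GMRES method itself; the sizes are arbitrary.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
n, k = 2000, 8
A = rng.standard_normal((n, n))
X = rng.standard_normal((n, k))

t0 = time.perf_counter()
Y1 = np.column_stack([A @ X[:, j] for j in range(k)])  # k separate passes over A
t1 = time.perf_counter()
Y2 = A @ X                                             # one blocked pass over A
t2 = time.perf_counter()

assert np.allclose(Y1, Y2)   # identical results, different memory traffic
print(f"one vector at a time: {t1 - t0:.3f}s, blocked: {t2 - t1:.3f}s")
```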
Merging Sounder and Imager Data for Improved Cloud Depiction on SNPP and JPSS.
NASA Astrophysics Data System (ADS)
Heidinger, A. K.; Holz, R.; Li, Y.; Platnick, S. E.; Wanzong, S.
2017-12-01
Under the NOAA GOES-R Algorithm Working Group (AWG) Program, NOAA supports the development of an Infrared (IR) Optimal Estimation (OE) Cloud Height Algorithm (ACHA). ACHA is an enterprise solution that supports many geostationary and polar-orbiting imager sensors. ACHA is operational at NOAA on SNPP VIIRS and has been adopted as the cloud height algorithm for the NASA NPP Atmospheric Suite of products. Being an OE algorithm, ACHA is flexible and capable of using additional observations and constraints. We have modified ACHA to use sounder (CrIS) observations to improve cloud detection, typing, and height estimation. Specifically, these improvements include retrievals in multi-layer scenarios and improved performance in polar regions. This presentation will describe the process for merging VIIRS and CrIS data and demonstrate the improvements.
NASA Astrophysics Data System (ADS)
Sorokin, V. A.; Volkov, Yu V.; Sherstneva, A. I.; Botygin, I. A.
2016-11-01
This paper overviews a method of generating climate regions based on analytic signal theory. Applied to atmospheric surface-layer temperature data sets, the method forms climatic structures from the corresponding temperature changes, making it possible to draw conclusions about the uniformity of climate in an area and to trace climate changes over time by analyzing shifts between type groups. The algorithm rests on the fact that the frequency spectrum of the thermal oscillation process is narrow-band and has only one mode for most weather stations. This permits the use of analytic signal theory and causality conditions and the introduction of an oscillation phase. The annual component of the phase, being a linear function, was removed by the least squares method. The remaining phase fluctuations can then be studied consistently for coordinated behavior and timing, using the Pearson correlation coefficient to evaluate dependence. This study includes program experiments to evaluate the computational efficiency of the phase-grouping task, and the paper also overviews single-threaded and multi-threaded computing models. It is shown that the phase-grouping algorithm for meteorological data can be parallelized and that a multi-threaded implementation leads to a 25-30% increase in performance.
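A compact sketch of the described pipeline, using synthetic station records in place of real data: the analytic signal is built with the Hilbert transform, its unwrapped phase is detrended by least squares to remove the annual (linear-in-time) component, and the residual phases of two stations are compared with the Pearson correlation. All names and parameters are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def phase_residual(series):
    """Unwrapped analytic-signal phase with its linear trend removed."""
    phase = np.unwrap(np.angle(hilbert(series - series.mean())))
    t = np.arange(series.size)
    slope, intercept = np.polyfit(t, phase, 1)   # least-squares annual component
    return phase - (slope * t + intercept)

t = np.arange(3650)   # ten years of daily temperatures (synthetic)
station_a = 10 * np.sin(2 * np.pi * t / 365.25) + np.random.default_rng(1).normal(0, 1, t.size)
station_b = 10 * np.sin(2 * np.pi * t / 365.25 + 0.1) + np.random.default_rng(2).normal(0, 1, t.size)

r = np.corrcoef(phase_residual(station_a), phase_residual(station_b))[0, 1]
print(f"Pearson correlation of phase fluctuations: {r:.3f}")
```

Stations whose phase fluctuations correlate strongly would be grouped into the same climate region; the per-pair independence of the correlations is what makes the grouping task straightforward to parallelize.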
Analysis of delay reducing and fuel saving sequencing and spacing algorithms for arrival traffic
NASA Technical Reports Server (NTRS)
Neuman, Frank; Erzberger, Heinz
1991-01-01
The air traffic control subsystem that performs sequencing and spacing is discussed. The function of the sequencing and spacing algorithms is to automatically plan the most efficient landing order and to assign optimally spaced landing times to all arrivals. Several algorithms are described and their statistical performance is examined. Sequencing brings order to an arrival sequence of aircraft. First-come-first-served (FCFS) sequencing establishes a fair order, based on estimated times of arrival, and determines proper separations. Because of the randomness of the arriving traffic, gaps will remain in the sequence of aircraft. Delays are reduced by time-advancing the leading aircraft of each group while still preserving the FCFS order. Tightly spaced groups of aircraft remain, with a mix of heavy and large aircraft. Spacing requirements differ for different types of aircraft trailing each other, so traffic is reordered slightly to take advantage of this spacing criterion, thus shortening the groups and reducing average delays. For heavy traffic, delays for different traffic samples vary widely, even when the same set of statistical parameters is used to produce each sample. This report supersedes NASA TM-102795 on the same subject; it includes a new method of time-advance as well as an efficient method of sequencing and spacing for two dependent runways.
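A minimal sketch of FCFS sequencing with type-dependent spacing: aircraft are ordered by estimated time of arrival, and each landing time is delayed until the required separation behind its leader is met. The separation values and weight classes below are illustrative placeholders, not the report's figures.

```python
# Required seconds behind each (leader, follower) pair -- illustrative values only.
SEPARATION = {("heavy", "heavy"): 96, ("heavy", "large"): 157,
              ("large", "heavy"): 60, ("large", "large"): 69}

def fcfs_schedule(arrivals):
    """arrivals: list of (eta_seconds, weight_class). Returns landing times
    in first-come-first-served order with pairwise separation enforced."""
    order = sorted(arrivals)                       # FCFS on estimated arrival
    schedule = []
    for eta, wclass in order:
        if schedule:
            prev_time, prev_class = schedule[-1]
            gap = SEPARATION[(prev_class, wclass)]
            landing = max(eta, prev_time + gap)    # delay until separation is met
        else:
            landing = eta
        schedule.append((landing, wclass))
    return schedule

print(fcfs_schedule([(0, "heavy"), (30, "large"), (45, "large")]))
```

The asymmetric separation table is what the slight reordering described above exploits: placing a large aircraft behind another large one costs far less time than placing it behind a heavy.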
Super-pixel extraction based on multi-channel pulse coupled neural network
NASA Astrophysics Data System (ADS)
Xu, GuangZhu; Hu, Song; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun
2018-04-01
Super-pixel extraction techniques group pixels into over-segmented image blocks according to the similarity among pixels. Compared with traditional pixel-based methods, super-pixel-based image description requires less computation and is easier to perceive, and it has been widely used in image processing and computer vision applications. The pulse coupled neural network (PCNN) is a biologically inspired model that stems from the phenomenon of synchronous pulse release in the visual cortex of cats. Each PCNN neuron corresponds to a pixel of an input image, and the dynamic firing pattern of each neuron contains both the pixel's feature information and its spatial context. In this paper, a new color super-pixel extraction algorithm based on a multi-channel pulse coupled neural network (MPCNN) is proposed. The algorithm adopts the block-division idea of the SLIC algorithm: the image is first divided into blocks of equal size; then, within each block, the pixels adjacent to each seed with similar color are classified into a group, called a super-pixel; finally, post-processing is applied to the pixels or pixel blocks that have not been grouped. Experiments show that the proposed method can adjust the number of super-pixels and the segmentation precision by setting parameters, and it has good potential for super-pixel extraction.
Collaboration space division in collaborative product development based on a genetic algorithm
NASA Astrophysics Data System (ADS)
Qian, Xueming; Ma, Yanqiao; Feng, Huan
2018-02-01
Advances in the global environment, rapidly changing markets, and information technology have created a new stage for design. In such an environment, one strategy for success is Collaborative Product Development (CPD). Organizing people effectively is the goal of CPD, and it addresses this problem with a degree of foreseeability. Development group activities are influenced not only by the methods and decisions available, but also by the correlations among personnel. Grouping personnel according to their correlation intensity is defined as collaboration space division (CSD). Upon establishment of a correlation matrix (CM) of personnel and an analysis of the collaboration space, the genetic algorithm (GA) and the minimum description length (MDL) principle may be used as tools for optimizing the collaboration space: the MDL principle is used to set up the objective function, and the GA is used as the methodology. The algorithm encodes the spatial information as a binary chromosome. After repeated crossover, mutation, selection, and reproduction, a robust chromosome is found, which can be decoded into an optimal collaboration space. This new method can calculate the members of the sub-spaces and the individual groupings within the staff. Furthermore, the intersection of sub-spaces and the public persons belonging to all sub-spaces can be determined simultaneously.
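A bare-bones sketch of the encoding described, one bit per person assigning membership to one of two sub-spaces, with a simple intra-group correlation sum standing in for the MDL-based objective, which the abstract does not fully specify. Population size, operators, and the correlation matrix are illustrative assumptions.

```python
import random

def fitness(bits, corr):
    """Sum of pairwise correlations between members of the same sub-space
    (a simple stand-in for the MDL-based objective)."""
    n = len(bits)
    return sum(corr[i][j] for i in range(n) for j in range(i + 1, n)
               if bits[i] == bits[j])

def ga_split(corr, pop=30, gens=100, pm=0.05, rng=random):
    n = len(corr)
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda b: -fitness(b, corr))
        parents = population[: pop // 2]                 # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)                    # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - g if rng.random() < pm else g for g in child])
        population = parents + children
    return max(population, key=lambda b: fitness(b, corr))

corr = [[0, 9, 1, 1], [9, 0, 1, 1], [1, 1, 0, 8], [1, 1, 8, 0]]
print(ga_split(corr))   # expect persons {0,1} and {2,3} grouped together
```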
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir; Poshtyar, Azin; Chan, Alex; Hu, Shuowen
2016-05-01
In many military and homeland security persistent surveillance applications, accurate detection of different skin colors under varying observability and illumination conditions is a valuable capability for video analytics. One such application is In-Vehicle Group Activity (IVGA) recognition, in which significant changes in observability and illumination may occur during the course of a specific human group activity of interest. Most existing skin color detection algorithms, however, are unable to perform satisfactorily in confined operational spaces with partial observability and occlusion, as well as under diverse and changing levels of illumination intensity, reflection, and diffraction. In this paper, we investigate the salient features of ten popular color spaces for skin subspace color modeling. More specifically, we examine the advantages and disadvantages of each of these color spaces, as well as the stability and suitability of their features for differentiating skin colors under various illumination conditions. The salient features of the different color subspaces are methodically discussed and graphically presented. Furthermore, we present robust and adaptive algorithms for skin color detection based on this analysis. Through examples, we demonstrate the efficiency and effectiveness of these new skin color detection algorithms and discuss their applicability for skin detection in IVGA recognition applications.
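One widely cited color-subspace rule makes the idea concrete: converting RGB to YCbCr decouples chroma from illumination intensity, and skin pixels fall inside a fixed box in the (Cb, Cr) plane. The threshold box below is a common literature heuristic, not the authors' adaptive algorithm, and the sample pixels are synthetic.

```python
import numpy as np

def skin_mask_ycbcr(rgb):
    """Classify pixels as skin by a fixed box in the (Cb, Cr) chroma plane.
    rgb: float array in [0, 255], shape (H, W, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b   # ITU-R BT.601 chroma
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # Commonly used skin box: Cb in [77, 127], Cr in [133, 173]
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

patch = np.array([[[220.0, 170.0, 140.0],    # skin-toned pixel
                   [40.0, 90.0, 200.0]]])    # blue pixel
print(skin_mask_ycbcr(patch))                # [[ True False]]
```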
Chang, Chih-Hua
2015-03-09
This paper proposes new inversion algorithms for the estimation of chlorophyll-a concentration (Chla) and the ocean's inherent optical properties (IOPs) from measurements of remote sensing reflectance (Rrs). With in situ data from the NASA bio-optical marine algorithm data set (NOMAD), inversion algorithms were developed by the novel gene expression programming (GEP) approach, which creates, manipulates, and selects the most appropriate tree-structured functions based on evolutionary computing. The limitations and validity of the proposed algorithms are evaluated with simulated Rrs spectra with respect to NOMAD and a closure test for IOPs obtained at a single reference wavelength. The application of the GEP-derived algorithms is validated against in situ, synthetic, and satellite match-up data sets compiled by NASA and the International Ocean Colour Coordinating Group (IOCCG). The new algorithms provide Chla and IOP retrievals comparable to those derived by other state-of-the-art regression approaches and to those obtained with the semi-analytical and quasi-analytical algorithms, respectively. In practice, there are no significant differences between the GEP, support vector regression, and multilayer perceptron models in terms of overall performance. The GEP-derived algorithms are successfully applied to processing images taken by the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), generating Chla and IOP maps that show better detail of developing algal blooms and give more information on the distribution of water constituents among different water bodies.
A global optimization algorithm inspired in the behavior of selfish herds.
Fausto, Fernando; Cuevas, Erik; Valdivia, Arturo; González, Adrián
2017-10-01
In this paper, a novel swarm optimization algorithm called the Selfish Herd Optimizer (SHO) is proposed for solving global optimization problems. SHO is based on the simulation of the widely observed selfish herd behavior manifested by individuals within a herd of animals subjected to some form of predation risk. In SHO, individuals emulate the predatory interactions between groups of prey and predators through two types of search agents: the members of a selfish herd (the prey) and a pack of hungry predators. Depending on its classification as either prey or predator, each individual is conducted by a set of unique evolutionary operators inspired by this prey-predator relationship. These unique traits allow SHO to improve the balance between exploration and exploitation without altering the population size. To illustrate the proficiency and robustness of the proposed method, it is compared with other well-known evolutionary optimization approaches: Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), the Firefly Algorithm (FA), Differential Evolution (DE), Genetic Algorithms (GA), the Crow Search Algorithm (CSA), the Dragonfly Algorithm (DA), the Moth-flame Optimization Algorithm (MOA), and the Sine Cosine Algorithm (SCA). The comparison examines several standard benchmark functions commonly considered in the literature on evolutionary algorithms. The experimental results show the remarkable performance of our proposed approach against that of the other methods, and as such SHO is proven to be an excellent alternative for solving global optimization problems. Copyright © 2017 Elsevier B.V. All rights reserved.