Science.gov

Sample records for Deutsch-Jozsa algorithm implemented

  1. Implementing the Deutsch-Jozsa algorithm with macroscopic ensembles

    NASA Astrophysics Data System (ADS)

    Semenenko, Henry; Byrnes, Tim

    2016-05-01

Quantum computing implementations under consideration today typically deal with systems with microscopic degrees of freedom such as photons, ions, cold atoms, and superconducting circuits. The quantum information is typically stored in low-dimensional Hilbert spaces such as qubits, as quantum effects are strongest in such systems. It has, however, been demonstrated that quantum effects can be observed in mesoscopic and macroscopic systems, such as nanomechanical systems and gas ensembles. While few-qubit quantum information demonstrations have been performed with such macroscopic systems, a quantum algorithm showing exponential speedup over classical algorithms has yet to be demonstrated. Here, we show that the Deutsch-Jozsa algorithm can be implemented with macroscopic ensembles. The encoding that we use avoids the detrimental effects of decoherence that normally plague macroscopic implementations. We discuss two mapping procedures which can be chosen depending upon the constraints of the oracle and the experiment. Both methods have an exponential speedup over the classical case, and only require control of the ensembles at the level of the total spin. It is shown that both approaches reproduce the qubit Deutsch-Jozsa algorithm, and are robust under decoherence.
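    To make the qubit algorithm that both mappings reproduce concrete, here is a minimal numpy simulation of the textbook n-qubit Deutsch-Jozsa circuit with a phase oracle. It is a sketch of the standard algorithm only; the ensemble encoding of the paper is not modelled, and the function names are illustrative.

```python
import numpy as np
from itertools import product

def deutsch_jozsa(f, n):
    """Simulate the textbook n-qubit Deutsch-Jozsa algorithm.

    f maps an n-bit tuple to 0 or 1 and is promised to be either
    constant or balanced; one oracle query decides which.
    """
    N = 2 ** n
    # State after the first Hadamard layer: uniform superposition
    # (the ancilla qubit is absorbed into a phase oracle).
    amp = np.full(N, 1.0 / np.sqrt(N))
    # Phase oracle: |x> -> (-1)^f(x) |x>
    for i, x in enumerate(product([0, 1], repeat=n)):
        amp[i] *= (-1) ** f(x)
    # Final Hadamard layer, built as an explicit 2^n x 2^n matrix.
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)
    amp = Hn @ amp
    # Measuring |0...0> has probability 1 (constant) or 0 (balanced).
    return 'constant' if abs(amp[0]) ** 2 > 0.5 else 'balanced'

print(deutsch_jozsa(lambda x: x[0] ^ x[1] ^ x[2], 3))  # balanced (parity)
print(deutsch_jozsa(lambda x: 0, 3))                   # constant
```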

  2. Graphene-based room-temperature implementation of a modified Deutsch-Jozsa quantum algorithm.

    PubMed

    Dragoman, Daniela; Dragoman, Mircea

    2015-12-04

    We present an implementation of a one-qubit and two-qubit modified Deutsch-Jozsa quantum algorithm based on graphene ballistic devices working at room temperature. The modified Deutsch-Jozsa algorithm decides whether a function, equivalent to the effect of an energy potential distribution on the wave function of ballistic charge carriers, is constant or not, without measuring the output wave function. The function need not be Boolean. Simulations confirm that the algorithm works properly, opening the way toward quantum computing at room temperature based on the same clean-room technologies as those used for fabrication of very-large-scale integrated circuits.

  3. Scheme for implementing the Deutsch-Jozsa algorithm in cavity QED

    SciTech Connect

    Zheng Shibiao

    2004-09-01

    We propose a scheme for realizing the Deutsch-Jozsa algorithm in cavity QED. The scheme is based on the resonant interaction of atoms with a cavity mode. The required experimental techniques are within the scope of what can be obtained in the microwave cavity QED setup. The experimental implementation of the scheme would be an important step toward more complex quantum computation in cavity QED.

  4. Braiding of Atomic Majorana Fermions in Wire Networks and Implementation of the Deutsch-Jozsa Algorithm

    NASA Astrophysics Data System (ADS)

    Kraus, Christina V.; Zoller, P.; Baranov, Mikhail A.

    2013-11-01

    We propose an efficient protocol for braiding Majorana fermions realized as edge states in atomic wire networks, and demonstrate its robustness against experimentally relevant errors. The braiding of two Majorana fermions located on one side of two adjacent wires requires only a few local operations on this side which can be implemented using local site addressing available in current experiments with cold atoms and molecules. Based on this protocol we provide an experimentally feasible implementation of the Deutsch-Jozsa algorithm for two qubits in a topologically protected way.

  5. Deterministic implementations of single-photon multi-qubit Deutsch-Jozsa algorithms with linear optics

    NASA Astrophysics Data System (ADS)

    Wei, Hai-Rui; Liu, Ji-Zhen

    2017-02-01

It is very important to seek efficient and robust quantum algorithms demanding fewer quantum resources. We propose one-photon three-qubit original and refined Deutsch-Jozsa algorithms with polarization and two linear-momentum degrees of freedom (DOFs). Our schemes are constructed solely from linear optics. Compared to the traditional ones with one DOF, our schemes are more economic and robust because the necessary photons are reduced from three to one. Our linear-optic schemes work in a deterministic way, and they are feasible with current experimental technology.

  6. Experimental implementation of the Deutsch-Jozsa algorithm for three-qubit functions using pure coherent molecular superpositions

    SciTech Connect

    Vala, Jiri; Kosloff, Ronnie; Amitay, Zohar; Zhang Bo; Leone, Stephen R.

    2002-12-01

The Deutsch-Jozsa algorithm is experimentally demonstrated for three-qubit functions using pure coherent superpositions of Li₂ rovibrational eigenstates. The function's character, either constant or balanced, is evaluated by first imprinting the function, using a phase-shaped femtosecond pulse, on a coherent superposition of the molecular states, and then projecting the superposition onto an ionic final state, using a second femtosecond pulse at a specific time delay.

  7. A different Deutsch-Jozsa

    NASA Astrophysics Data System (ADS)

    Bera, Debajyoti

    2015-06-01

One of the early achievements of quantum computing was demonstrated by Deutsch and Jozsa (Proc R Soc Lond A Math Phys Sci 439(1907):553, 1992) regarding classification of a particular type of Boolean functions. Their solution demonstrated an exponential speedup compared to classical approaches to the same problem; however, it has so far remained the only known quantum algorithm for that specific problem. This paper demonstrates another quantum algorithm for the same problem, with the same exponential advantage compared to classical algorithms. The novelty of this algorithm is the use of quantum amplitude amplification, a technique that is the key component of another celebrated quantum algorithm developed by Grover (Proceedings of the twenty-eighth annual ACM symposium on theory of computing, ACM Press, New York, 1996). A lower bound for randomized (classical) algorithms is also presented, which establishes a sound gap between the effectiveness of our quantum algorithm and that of any randomized algorithm with similar efficiency.

  8. Modular Universal Scalable Ion-trap Quantum Computer

    DTIC Science & Technology

    2016-06-02

errors (~2%). Using this system, we have implemented the Deutsch-Jozsa algorithm, the Bernstein-Vazirani algorithm, and the quantum Fourier transform algorithm on five physical qubits. Figure 5 shows the results for running both the Deutsch-Jozsa algorithm (for Figure 5a and b) and the … the Deutsch-Jozsa algorithm. For the four-qubit Bernstein-Vazirani algorithm, the single-shot detection for the correct estimated sequence is

  9. The Deutsch-Jozsa algorithm as a suitable framework for MapReduce in a quantum computer

    NASA Astrophysics Data System (ADS)

    Lipovaca, Samir

The essence of the MapReduce paradigm is a parallel, distributed algorithm across hundreds or thousands of machines. In a crude fashion, this parallelism reminds us of the method of computation by quantum parallelism, which is possible only with quantum computers. Deutsch and Jozsa showed that there is a class of problems which can be solved more efficiently by a quantum computer than by any classical or stochastic method. The method of computation by quantum parallelism solves the problem with certainty in exponentially less time than any classical computation. This leads to the question of whether it would be possible to implement the MapReduce paradigm in a quantum computer and harness this incredible speedup over the classical computation performed by current computers. Although present quantum computers are not robust enough for code writing and execution, it is worth exploring this question from a theoretical point of view. We will show that the Deutsch-Jozsa algorithm is a suitable framework to implement the MapReduce paradigm in a quantum computer.

  10. A strategy for quantum algorithm design assisted by machine learning

    NASA Astrophysics Data System (ADS)

    Bang, Jeongho; Ryu, Junghee; Yoo, Seokwon; Pawłowski, Marcin; Lee, Jinhyoung

    2014-07-01

    We propose a method for quantum algorithm design assisted by machine learning. The method uses a quantum-classical hybrid simulator, where a ‘quantum student’ is being taught by a ‘classical teacher’. In other words, in our method, the learning system is supposed to evolve into a quantum algorithm for a given problem, assisted by a classical main-feedback system. Our method is applicable for designing quantum oracle-based algorithms. We chose, as a case study, an oracle decision problem, called a Deutsch-Jozsa problem. We showed by using Monte Carlo simulations that our simulator can faithfully learn a quantum algorithm for solving the problem for a given oracle. Remarkably, the learning time is proportional to the square root of the total number of parameters, rather than showing the exponential dependence found in the classical machine learning-based method.

  11. Implementation of Parallel Algorithms

    DTIC Science & Technology

    1993-06-30

their social relations or to achieve some goals. For example, we define a pair-wise force law of repulsion and attraction for a group of identical... quantization based compression schemes. Photo-refractive crystals, which provide high density recording in real time, are used as our holographic media. The... of Parallel Algorithms (J. Reif, ed.). Kluwer Academic Publishers, 1993. (4) "A Dynamic Separator Algorithm", D. Armon and J. Reif. To appear in

  12. Interfacing external quantum devices to a universal quantum computer.

    PubMed

    Lagana, Antonio A; Lohe, Max A; von Smekal, Lorenz

    2011-01-01

    We present a scheme to use external quantum devices using the universal quantum computer previously constructed. We thereby show how the universal quantum computer can utilize networked quantum information resources to carry out local computations. Such information may come from specialized quantum devices or even from remote universal quantum computers. We show how to accomplish this by devising universal quantum computer programs that implement well known oracle based quantum algorithms, namely the Deutsch, Deutsch-Jozsa, and the Grover algorithms using external black-box quantum oracle devices. In the process, we demonstrate a method to map existing quantum algorithms onto the universal quantum computer.

  13. Linear Bregman algorithm implemented in parallel GPU

    NASA Astrophysics Data System (ADS)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

At present, most compressed sensing (CS) algorithms have poor converging speed and are thus difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm only involves vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with the traditional CPU-realized Bregman algorithm. In addition, we also compare the parallel Bregman algorithm with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. Compared with these two algorithms, the results of this paper show that the parallel Bregman algorithm needs less time, and is thus more convenient for real-time object reconstruction, which is important given the fast-growing demand for information technology.
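    As a point of reference for the structure described above (matrix-vector products plus one thresholding step), here is a minimal CPU sketch of the linearized Bregman iteration in numpy. The parameter choices and the synthetic recovery test are illustrative assumptions, not the paper's GPU kernels.

```python
import numpy as np

def soft_threshold(x, mu):
    """Componentwise shrinkage: the single nonlinear step of the method."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def linearized_bregman(A, b, mu=5.0, delta=None, iters=2000):
    """Approximately solve min ||u||_1 s.t. Au = b.

    Each sweep costs one product with A, one with A^T, and one
    thresholding, which is why the algorithm maps well onto a GPU.
    """
    if delta is None:
        delta = 1.0 / np.linalg.norm(A, 2) ** 2  # step size for convergence
    v = np.zeros(A.shape[1])
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        v += A.T @ (b - A @ u)
        u = delta * soft_threshold(v, mu)
    return u

# Tiny compressed-sensing demo: recover a sparse vector from few measurements.
rng = np.random.default_rng(0)
n, m, k = 200, 60, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
u = linearized_bregman(A, A @ x)
print('relative recovery error:', np.linalg.norm(u - x) / np.linalg.norm(x))
```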

  14. Deterministic quantum computation with one photonic qubit

    NASA Astrophysics Data System (ADS)

    Hor-Meyll, M.; Tasca, D. S.; Walborn, S. P.; Ribeiro, P. H. Souto; Santos, M. M.; Duzzioni, E. I.

    2015-07-01

We show that deterministic quantum computing with one qubit (DQC1) can be experimentally implemented with a spatial light modulator, using the polarization and the transverse spatial degrees of freedom of light. The scheme allows the computation of the trace of a high-dimension matrix, limited by the resolution of the modulator panel and by technical imperfections. In order to illustrate the method, we compute the normalized trace of unitary matrices and implement the Deutsch-Jozsa algorithm. The largest matrix that can be manipulated with our setup is 1080 × 1920, which is able to represent a system with approximately 21 qubits.
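    The trace computation rests on the standard DQC1 circuit identity: with the control qubit in |+><+| and the register maximally mixed, the control's sigma_x and sigma_y expectation values after a controlled-U read off the normalized trace. The following density-matrix sketch verifies this on a small random unitary; it is an illustrative check, not the optical implementation of the paper.

```python
import numpy as np

def dqc1_normalized_trace(U):
    """Return <sigma_x> + i <sigma_y> of the DQC1 control qubit,
    which equals Tr(U) / N for an N x N unitary U."""
    N = U.shape[0]
    plus = np.full((2, 2), 0.5)                 # |+><+| control state
    rho = np.kron(plus, np.eye(N) / N)          # register maximally mixed
    # Controlled-U = |0><0| (x) I  +  |1><1| (x) U
    CU = np.block([[np.eye(N), np.zeros((N, N))],
                   [np.zeros((N, N)), U]])
    rho = CU @ rho @ CU.conj().T
    sx = np.kron(np.array([[0, 1], [1, 0]]), np.eye(N))
    sy = np.kron(np.array([[0, -1j], [1j, 0]]), np.eye(N))
    return np.trace(sx @ rho).real + 1j * np.trace(sy @ rho).real

# Check against the direct trace for a random 3-qubit unitary.
rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
Q, _ = np.linalg.qr(M)
print(dqc1_normalized_trace(Q), np.trace(Q) / 8)
```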

  15. Implementation details of the coupled QMR algorithm

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Nachtigal, Noel M.

    1992-01-01

The original quasi-minimal residual method (QMR) relies on the three-term look-ahead Lanczos process to generate basis vectors for the underlying Krylov subspaces. However, empirical observations indicate that, in finite precision arithmetic, three-term vector recurrences are less robust than mathematically equivalent coupled two-term recurrences. Therefore, we recently proposed a new implementation of the QMR method based on a coupled two-term look-ahead Lanczos procedure. In this paper, we describe implementation details of this coupled QMR algorithm, and we present results of numerical experiments.

  16. Terascale spectral element algorithms and implementations.

    SciTech Connect

    Fischer, P. F.; Tufo, H. M.

    1999-08-17

    We describe the development and implementation of an efficient spectral element code for multimillion gridpoint simulations of incompressible flows in general two- and three-dimensional domains. We review basic and recently developed algorithmic underpinnings that have resulted in good parallel and vector performance on a broad range of architectures, including the terascale computing systems now coming online at the DOE labs. Sustained performance of 219 GFLOPS has been recently achieved on 2048 nodes of the Intel ASCI-Red machine at Sandia.

  17. Categorizing Variations of Student-Implemented Sorting Algorithms

    ERIC Educational Resources Information Center

    Taherkhani, Ahmad; Korhonen, Ari; Malmi, Lauri

    2012-01-01

In this study, we examined freshmen students' sorting algorithm implementations in a data structures and algorithms course in two phases: at the beginning of the course before the students received any instruction on sorting algorithms, and after taking a lecture on sorting algorithms. The analysis revealed that many students have insufficient…

  18. Implementation of a Wavefront-Sensing Algorithm

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey S.; Dean, Bruce; Aronstein, David

    2013-01-01

    A computer program has been written as a unique implementation of an image-based wavefront-sensing algorithm reported in "Iterative-Transform Phase Retrieval Using Adaptive Diversity" (GSC-14879-1), NASA Tech Briefs, Vol. 31, No. 4 (April 2007), page 32. This software was originally intended for application to the James Webb Space Telescope, but is also applicable to other segmented-mirror telescopes. The software is capable of determining optical-wavefront information using, as input, a variable number of irradiance measurements collected in defocus planes about the best focal position. The software also uses input of the geometrical definition of the telescope exit pupil (otherwise denoted the pupil mask) to identify the locations of the segments of the primary telescope mirror. From the irradiance data and mask information, the software calculates an estimate of the optical wavefront (a measure of performance) of the telescope generally and across each primary mirror segment specifically. The software is capable of generating irradiance data, wavefront estimates, and basis functions for the full telescope and for each primary-mirror segment. Optionally, each of these pieces of information can be measured or computed outside of the software and incorporated during execution of the software.

  19. Parallel optimization algorithms and their implementation in VLSI design

    NASA Technical Reports Server (NTRS)

    Lee, G.; Feeley, J. J.

    1991-01-01

    Two new parallel optimization algorithms based on the simplex method are described. They may be executed by a SIMD parallel processor architecture and be implemented in VLSI design. Several VLSI design implementations are introduced. An application example is reported to demonstrate that the algorithms are effective.

  20. Dual format algorithm implementation with gotcha data

    NASA Astrophysics Data System (ADS)

    Gorham, LeRoy A.; Rigling, Brian D.

    2012-05-01

The Dual Format Algorithm (DFA) is an alternative to the Polar Format Algorithm (PFA) in which the image is first formed on an arbitrary grid instead of a Cartesian grid. The arbitrary grid is specifically chosen to allow more efficient application of the defocus and distortion corrections that arise from range curvature. We provide a description of the arbitrary image grid and show that the quadratic phase errors are isolated along a single dimension of the image. We describe an application of the DFA to circular SAR data and analyze the image focus. For an example SAR dataset, the DFA doubles the focused image size of the PFA with post-imaging corrections.

  1. New algorithms for phase unwrapping: implementation and testing

    NASA Astrophysics Data System (ADS)

    Kotlicki, Krzysztof

    1998-11-01

In this paper it is shown how regularization theory was used for a new noise-immune algorithm for phase unwrapping. The algorithms were developed by M. Servin, J.L. Marroquin and F.J. Cuevas at the Centro de Investigaciones en Optica A.C. and the Centro de Investigacion en Matematicas A.C. in Mexico. The theory is presented. The objective of the work was to implement the algorithms in software able to perform off-line unwrapping of the fringe pattern. The algorithms are presented, as well as the results and the software developed for the implementation.
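    For readers unfamiliar with the problem, the classical Itoh method below shows the baseline that noise-immune regularized algorithms improve on: it integrates wrapped phase differences and fails as soon as noise or undersampling pushes a true difference past pi. This is a minimal numpy sketch of that baseline, not the regularized algorithm of Servin, Marroquin and Cuevas.

```python
import numpy as np

def unwrap_1d(phase):
    """Itoh unwrapping: wrap each successive difference into (-pi, pi],
    then re-integrate. Exact for noise-free, well-sampled data."""
    d = np.diff(phase)
    d_wrapped = (d + np.pi) % (2 * np.pi) - np.pi
    return phase[0] + np.concatenate(([0.0], np.cumsum(d_wrapped)))

# A smooth ramp wrapped into (-pi, pi] is recovered exactly.
true_phase = np.linspace(0.0, 12 * np.pi, 400)
wrapped = np.angle(np.exp(1j * true_phase))
print(np.allclose(unwrap_1d(wrapped), true_phase))  # True
```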

  2. Fixed Point Implementations of Fast Kalman Algorithms.

    DTIC Science & Technology

    1983-11-01

… fixed point multiply. We generate a mean zero, variance N random vector s(t) each time … A filter is said to be l2-scaled if … In this paper we study scaling rules and round… realized in a fast form that uses the so-called fast Kalman gain algorithm. The algorithm for the gain is fixed point. Scaling rules and expressions for

  3. An Agent Inspired Reconfigurable Computing Implementation of a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Weir, John M.; Wells, B. Earl

    2003-01-01

Many software systems have been successfully implemented using an agent paradigm which employs a number of independent entities that communicate with one another to achieve a common goal. The distributed nature of such a paradigm makes it an excellent candidate for use in high speed reconfigurable computing hardware environments such as those present in modern FPGAs. In this paper, a distributed genetic algorithm that can be applied to the agent based reconfigurable hardware model is introduced. The effectiveness of this new algorithm is evaluated by comparing the quality of the solutions found by the new algorithm with those found by traditional genetic algorithms. The performance of a reconfigurable hardware implementation of the new algorithm on an FPGA is compared to traditional single processor implementations.

  4. Multiple Lookup Table-Based AES Encryption Algorithm Implementation

    NASA Astrophysics Data System (ADS)

    Gong, Jin; Liu, Wenyi; Zhang, Huixin

A new AES (Advanced Encryption Standard) encryption algorithm implementation was proposed in this paper. It is based on five lookup tables, which are generated from the S-box (the substitution table in AES). The obvious advantages are reducing the code size, improving the implementation efficiency, and helping new learners to understand the AES encryption algorithm and the GF(2⁸) multiplication which is necessary to correctly implement AES [1]. This method can be applied on processors with word length 32 or above, on FPGAs, and on other platforms. Correspondingly, we can implement it in VHDL, Verilog, VB and other languages.
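    As background for the GF(2⁸) multiplication mentioned above, here is a minimal Python sketch using the bit-level shift-and-xor method; the paper's point is precisely that lookup tables can replace this loop, so the code shows what such tables precompute. The worked value is the multiplication example from the AES specification.

```python
def gf256_mul(a, b):
    """Multiply two bytes in GF(2^8) modulo the AES polynomial
    x^8 + x^4 + x^3 + x + 1 (0x11B), by repeated doubling ('xtime')."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a        # add (xor) the current multiple of a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF    # multiply a by x
        if carry:
            a ^= 0x1B          # reduce modulo the AES polynomial
    return result

assert gf256_mul(0x57, 0x83) == 0xC1   # worked example from FIPS-197
print(hex(gf256_mul(0x02, 0x87)))      # 0x15: one doubling with reduction
```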

  5. PURGATORIO—a new implementation of the INFERNO algorithm

    NASA Astrophysics Data System (ADS)

    Wilson, B.; Sonnad, V.; Sterne, P.; Isaacs, W.

    2006-05-01

An overview of PURGATORIO, a new implementation of the INFERNO [Liberman, Phys Rev B 1979;20:4981-9] equation of state model, is presented. The new algorithm emphasizes a novel subdivision scheme for automatically resolving the structure of the continuum density of states, circumventing limitations of the pseudo-R matrix algorithm previously utilized.

  6. Algorithm implementation on the Navier-Stokes computer

    NASA Technical Reports Server (NTRS)

    Krist, Steven E.; Zang, Thomas A.

    1987-01-01

    The Navier-Stokes Computer is a multi-purpose parallel-processing supercomputer which is currently under development at Princeton University. It consists of multiple local memory parallel processors, called Nodes, which are interconnected in a hypercube network. Details of the procedures involved in implementing an algorithm on the Navier-Stokes computer are presented. The particular finite difference algorithm considered in this analysis was developed for simulation of laminar-turbulent transition in wall bounded shear flows. Projected timing results for implementing this algorithm indicate that operation rates in excess of 42 GFLOPS are feasible on a 128 Node machine.

  7. Implementing Agglomerative Hierarchic Clustering Algorithms for Use in Document Retrieval.

    ERIC Educational Resources Information Center

    Voorhees, Ellen M.

    1986-01-01

    Describes a computerized information retrieval system that uses three agglomerative hierarchic clustering algorithms--single link, complete link, and group average link--and explains their implementations. It is noted that these implementations have been used to cluster a collection of 12,000 documents. (LRW)
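    A compact modern equivalent of those three implementations, using scipy on a toy term-frequency matrix; the vectors, metric and cluster count here are illustrative stand-ins for the cited 12,000-document collection.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy "documents" as term-frequency vectors.
docs = np.array([[2, 0, 1], [1, 0, 1], [0, 3, 0], [0, 2, 1], [2, 1, 1]])
dist = pdist(docs, metric='cosine')               # condensed distance matrix

for method in ('single', 'complete', 'average'):  # the three link types
    tree = linkage(dist, method=method)           # agglomerative merge tree
    labels = fcluster(tree, t=2, criterion='maxclust')
    print(method, labels)
```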

  8. Implementing a self-structuring data learning algorithm

    NASA Astrophysics Data System (ADS)

    Graham, James; Carson, Daniel; Ternovskiy, Igor

    2016-05-01

In this paper, we elaborate on what we did to implement our self-structuring data learning algorithm. To recap, we are working to develop a data learning algorithm that will eventually be capable of goal-driven pattern learning and extrapolation of more complex patterns from less complex ones. At this point we have developed a conceptual framework for the algorithm, but have yet to discuss our actual implementation and the considerations and shortcuts we needed to take to create said implementation. We will elaborate on our initial setup of the algorithm and the scenarios we used to test our early stage algorithm. While we want this to be a general algorithm, it is necessary to start with a simple scenario or two to provide a viable development and testing environment. To that end, our discussion will be geared toward what we included in our initial implementation and why, as well as what concerns we may have. In the future, we expect to be able to apply our algorithm to a more general approach, but to do so within a reasonable time, we needed to pick a place to start.

  9. Rapid algorithm prototyping and implementation for power quality measurement

    NASA Astrophysics Data System (ADS)

    Kołek, Krzysztof; Piątek, Krzysztof

    2015-12-01

This article presents a Model-Based Design (MBD) approach to rapidly implement power quality (PQ) metering algorithms. Power supply quality is a very important aspect of modern power systems and will become even more important in future smart grids. In this case, maintaining the PQ parameters at the desired level will require efficient implementation methods for the metering algorithms. Currently, the development of new, advanced PQ metering algorithms requires new hardware with adequate computational capability and time-intensive, cost-ineffective manual implementations. An alternative, considered here, is an MBD approach. The MBD approach focuses on the modelling and validation of the model by simulation, which is well supported by Computer-Aided Engineering (CAE) packages. This paper presents two algorithms utilized in modern PQ meters: a phase-locked loop based on an Enhanced Phase Locked Loop (EPLL), and the flicker measurement according to the IEC 61000-4-15 standard. The algorithms were chosen because of their complexity and non-trivial development. They were first modelled in the MATLAB/Simulink package, then tested and validated in a simulation environment. The models, in the form of Simulink diagrams, were next used to automatically generate C code. The code was compiled and executed in real-time on the Zynq Xilinx platform that combines a reconfigurable Field Programmable Gate Array (FPGA) with a dual-core processor. The MBD development of PQ algorithms, automatic code generation, and compilation form a rapid algorithm prototyping and implementation path for PQ measurements. The main advantage of this approach is the ability to focus on the design, validation, and testing stages while skipping over implementation issues. The code generation process renders production-ready code that can be easily used on the target hardware. This is especially important when standards for PQ measurement are in constant development, and the PQ issues in emerging smart

  10. Testing of hardware implementation of infrared image enhancing algorithm

    NASA Astrophysics Data System (ADS)

Dulski, R.; Sosnowski, T.; Piątkowski, T.; Trzaskawka, P.; Kastek, M.; Kucharz, J.

    2012-10-01

The interpretation of IR images depends on the radiative properties of the observed objects and the surrounding scenery. The skills and experience of the observer are also of great importance. A solution to improve the effectiveness of observation is the use of an image-enhancement algorithm capable of improving image quality and thus the effectiveness of object detection. The paper presents results of testing a hardware implementation of an IR image enhancement algorithm based on histogram processing. The main issue in hardware implementation of complex image enhancement procedures is their high computational cost. As a result, implementation of complex algorithms using general purpose processors and software usually does not bring satisfactory results. Because of the high efficiency requirements and the need for parallel operation, ALTERA's EP2C35F672 FPGA device was used. It provides sufficient processing speed combined with relatively low power consumption. A digital image processing and control module was designed and constructed around two main integrated circuits: an FPGA device and a microcontroller. The programmable FPGA device performs the image data processing operations which require considerable computing power. It also generates the control signals for array readout, performs NUC correction and bad pixel mapping, generates the control signals for the display module, and finally executes the complex image processing algorithms. The implemented adaptive algorithm is based on plateau histogram equalization. Tests were performed on real IR images of different types of objects registered in different spectral bands. The simulations and laboratory experiments proved the correct operation of the designed system in executing the sophisticated image enhancement.
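    A minimal numpy sketch of plateau histogram equalization, the adaptive method named above: the histogram is clipped at a plateau value before the cumulative mapping is built, so large uniform backgrounds cannot swamp the grey-level range. The plateau value and the synthetic low-contrast frame are illustrative assumptions, not the FPGA datapath.

```python
import numpy as np

def plateau_equalize(img, plateau):
    """Histogram equalization with the histogram clipped at `plateau`."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    clipped = np.minimum(hist, plateau)     # limit dominant bins
    cdf = np.cumsum(clipped).astype(float)
    cdf /= cdf[-1]
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]                         # apply the mapping per pixel

rng = np.random.default_rng(2)
frame = rng.integers(90, 120, (64, 64), dtype=np.uint8)  # low-contrast frame
out = plateau_equalize(frame, plateau=40)
print(frame.min(), frame.max(), '->', out.min(), out.max())
```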

  11. Outline of a fast hardware implementation of Winograd's DFT algorithm

    NASA Technical Reports Server (NTRS)

    Zohar, S.

    1980-01-01

The main characteristic of the discrete Fourier transform (DFT) algorithm considered by Winograd (1976) is a significant reduction in the number of multiplications. Its primary disadvantage is a higher structural complexity. It is, therefore, difficult to translate the reduced number of multiplications into faster execution of the DFT by means of a software implementation of the algorithm. For this reason, a hardware implementation is considered in the current study, based on the algorithm prescription discussed by Zohar (1979). The hardware implementation of a FORTRAN subroutine is proposed, giving attention to a pipelining scheme in which 5 consecutive data batches are operated on simultaneously, each batch undergoing one of 5 processing phases.

  12. Highly parallel consistent labeling algorithm suitable for optoelectronic implementation.

    PubMed

    Marsden, G C; Kiamilev, F; Esener, S; Lee, S H

    1991-01-10

Constraint satisfaction problems require a search through a large set of possibilities. Consistent labeling is a method by which search spaces can be drastically reduced. We present a highly parallel consistent labeling algorithm, which achieves strong k-consistency for any value k and which can include higher-order constraints. The algorithm uses vector outer product, matrix summation, and matrix intersection operations. These operations require local computation with global communication and, therefore, are well suited to an optoelectronic implementation.

  13. Study of hardware implementations of fast tracking algorithms

    NASA Astrophysics Data System (ADS)

    Song, Z.; De Lentdecker, G.; Dong, J.; Huang, G.; Léonard, A.; Robert, F.; Wang, D.; Yang, Y.

    2017-02-01

    Real-time track reconstruction at high event rates is a major challenge for future experiments in high energy physics. To perform pattern-recognition and track fitting, artificial retina or Hough transformation methods have been introduced in the field which have to be implemented in FPGA firmware. In this note we report on a case study of a possible FPGA hardware implementation approach of the retina algorithm based on a Floating-Point core. Detailed measurements with this algorithm are investigated. Retina performance and capabilities of the FPGA are discussed along with perspectives for further optimization and applications.

  14. Some Computer Algorithms to Implement a Reliability Shorthand.

    DTIC Science & Technology

    1982-10-01

AD-A123 781. Some Computer Algorithms to Implement a Reliability Shorthand. Thesis, Naval Postgraduate School, Monterey, California. Sadan Gursel, October 1982. Thesis Advisor: J. D. Esary.

  15. Efficient implementation of the adaptive scale pixel decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Bhatnagar, S.; Rau, U.; Zhang, M.

    2016-08-01

    Context. Most popular algorithms in use to remove the effects of a telescope's point spread function (PSF) in radio astronomy are variants of the CLEAN algorithm. Most of these algorithms model the sky brightness using the delta-function basis, which results in undesired artefacts when used to image extended emission. The adaptive scale pixel decomposition (Asp-Clean) algorithm models the sky brightness on a scale-sensitive basis and thus gives a significantly better imaging performance when imaging fields that contain both resolved and unresolved emission. Aims: However, the runtime cost of Asp-Clean is higher than that of scale-insensitive algorithms. In this paper, we identify the most expensive step in the original Asp-Clean algorithm and present an efficient implementation of it, which significantly reduces the computational cost while keeping the imaging performance comparable to the original algorithm. The PSF sidelobe levels of modern wide-band telescopes are significantly reduced, allowing us to make approximations to reduce the computational cost, which in turn allows for the deconvolution of larger images on reasonable timescales. Methods: As in the original algorithm, scales in the image are estimated through function fitting. Here we introduce an analytical method to model extended emission, and a modified method for estimating the initial values used for the fitting procedure, which ultimately leads to a lower computational cost. Results: The new implementation was tested with simulated EVLA data and the imaging performance compared well with the original Asp-Clean algorithm. Tests show that the current algorithm can recover features at different scales with lower computational cost.

  16. A novel pipeline based FPGA implementation of a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Thirer, Nonel

    2014-05-01

To solve problems for which an analytical solution is not available, more and more bio-inspired computation techniques have been applied in recent years. One efficient such algorithm is the Genetic Algorithm (GA), which imitates the biological evolution process, finding the solution by the mechanism of "natural selection", in which the strong have higher chances to survive. A genetic algorithm is an iterative procedure which operates on a population of individuals called "chromosomes" or "possible solutions" (usually represented by a binary code). The GA performs several processes on the population individuals to produce a new population, as in biological evolution. To provide a high-speed solution, pipeline-based FPGA hardware implementations are used, with an n-stage pipeline for an n-phase genetic algorithm. FPGA pipeline implementations are constrained by the different execution times of the stages and by the FPGA chip resources. To minimize these difficulties, we propose a bio-inspired technique that modifies the crossover step by using non-identical twins: the two chosen chromosomes (parents) build up two new chromosomes (children), not only one as in the classical GA. We analyze the contribution of this method to reducing the execution time in asynchronous and synchronous pipelines, and also the possibility of a cheaper FPGA implementation by using smaller populations. The full hardware architecture of an FPGA implementation for our target ALTERA development card is presented and analyzed.
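    In software terms, the proposed crossover variant reads as below; this is a minimal Python sketch of the algorithmic idea only, since the paper's contribution is its pipelined FPGA realization.

```python
import random

def twin_crossover(parent_a, parent_b):
    """One-point crossover returning both complementary offspring
    ('non-identical twins') instead of a single child."""
    point = random.randrange(1, len(parent_a))   # cut inside the chromosome
    child1 = parent_a[:point] + parent_b[point:]
    child2 = parent_b[:point] + parent_a[point:]
    return child1, child2

random.seed(7)
a = [1, 1, 1, 1, 1, 1, 1, 1]
b = [0, 0, 0, 0, 0, 0, 0, 0]
print(twin_crossover(a, b))
```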

  17. Implementing Linear Algebra Related Algorithms on the TI-92+ Calculator.

    ERIC Educational Resources Information Center

    Alexopoulos, John; Abraham, Paul

    2001-01-01

    Demonstrates a less utilized feature of the TI-92+: its natural and powerful programming language. Shows how to implement several linear algebra related algorithms including the Gram-Schmidt process, Least Squares Approximations, Wronskians, Cholesky Decompositions, and Generalized Linear Least Square Approximations with QR Decompositions.…

  18. Efficient Implementation of Nested-Loop Multimedia Algorithms

    NASA Astrophysics Data System (ADS)

    Kittitornkun, Surin; Hu, Yu Hen

    2001-12-01

    A novel dependence graph representation called the multiple-order dependence graph for nested-loop formulated multimedia signal processing algorithms is proposed. It allows a concise representation of an entire family of dependence graphs. This powerful representation facilitates the development of innovative implementation approach for nested-loop formulated multimedia algorithms such as motion estimation, matrix-matrix product, 2D linear transform, and others. In particular, algebraic linear mapping (assignment and scheduling) methodology can be applied to implement such algorithms on an array of simple-processing elements. The feasibility of this new approach is demonstrated in three major target architectures: application-specific integrated circuit (ASIC), field programmable gate array (FPGA), and a programmable clustered VLIW processor.

  19. Efficient implementations of hyperspectral chemical-detection algorithms

    NASA Astrophysics Data System (ADS)

    Brett, Cory J. C.; DiPietro, Robert S.; Manolakis, Dimitris G.; Ingle, Vinay K.

    2013-10-01

Many military and civilian applications depend on the ability to remotely sense chemical clouds using hyperspectral imagers, from detecting small but lethal concentrations of chemical warfare agents to mapping plumes in the aftermath of natural disasters. Real-time operation is critical in these applications but becomes difficult to achieve as the number of chemicals we search for increases. In this paper, we present efficient CPU and GPU implementations of matched-filter based algorithms so that real-time operation can be maintained with higher chemical-signature counts. The optimized C++ implementations show between 3x and 9x speedup over vectorized MATLAB implementations.

  20. Implementation of pattern recognition algorithm based on RBF neural network

    NASA Astrophysics Data System (ADS)

    Bouchoux, Sophie; Brost, Vincent; Yang, Fan; Grapin, Jean Claude; Paindavoine, Michel

    2002-12-01

In this paper, we present implementations of a pattern recognition algorithm which uses an RBF (Radial Basis Function) neural network. Our aim is to elaborate a quite efficient system which realizes real-time face tracking and identity verification in natural video sequences. Hardware implementations have been realized on an embedded system developed by our laboratory. This system is based on a DSP (Digital Signal Processor) TMS320C6x. The optimization of the implementations allows us to obtain a processing speed of 4.8 images (240x320 pixels) per second with a correct rate of 95% for face tracking and identity verification.

  1. A distributed Canny edge detector: algorithm and FPGA implementation.

    PubMed

    Xu, Qian; Varadarajan, Srenivas; Chakrabarti, Chaitali; Karam, Lina J

    2014-07-01

The Canny edge detector is one of the most widely used edge detection algorithms due to its superior performance. Unfortunately, not only is it computationally more intensive as compared with other edge detection algorithms, but it also has a higher latency because it is based on frame-level statistics. In this paper, we propose a mechanism to implement the Canny algorithm at the block level without any loss in edge detection performance compared with the original frame-level Canny algorithm. Directly applying the original Canny algorithm at the block level leads to excessive edges in smooth regions and to loss of significant edges in high-detailed regions, since the original Canny computes the high and low thresholds based on frame-level statistics. To solve this problem, we present a distributed Canny edge detection algorithm that adaptively computes the edge detection thresholds based on the block type and the local distribution of the gradients in the image block. In addition, the new algorithm uses a nonuniform gradient magnitude histogram to compute block-based hysteresis thresholds. The resulting block-based algorithm has a significantly reduced latency and can be easily integrated with other block-based image codecs. It is capable of supporting fast edge detection of images and videos with high resolutions, including full-HD, since the latency is now a function of the block size instead of the frame size. In addition, quantitative conformance evaluations and subjective tests show that the edge detection performance of the proposed algorithm is better than the original frame-based algorithm, especially when noise is present in the images. Finally, this algorithm is implemented using a 32 computing engine architecture and is synthesized on the Xilinx Virtex-5 FPGA. The synthesized architecture takes only 0.721 ms (including the SRAM READ/WRITE time and the computation time) to detect edges of 512 × 512 images in the USC SIPI database when clocked at 100
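    The core idea, per-block rather than frame-level thresholds, can be sketched in a few lines of numpy. The quantile rule, block size and low/high ratio below are illustrative stand-ins for the paper's block classification and nonuniform-histogram method.

```python
import numpy as np

def block_thresholds(gradient_mag, block=64, keep_frac=0.2, ratio=0.4):
    """Compute per-block (low, high) hysteresis thresholds from the
    local gradient-magnitude distribution of each image block."""
    h, w = gradient_mag.shape
    thresholds = {}
    for r in range(0, h, block):
        for c in range(0, w, block):
            tile = gradient_mag[r:r + block, c:c + block]
            t_high = np.quantile(tile, 1.0 - keep_frac)  # strongest gradients
            thresholds[(r, c)] = (ratio * t_high, t_high)
    return thresholds

rng = np.random.default_rng(5)
grad = rng.rayleigh(2.0, (256, 256))    # stand-in gradient magnitudes
print(list(block_thresholds(grad).items())[:2])
```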

  2. FPGA implementation of digital down converter using CORDIC algorithm

    NASA Astrophysics Data System (ADS)

    Agarwal, Ashok; Lakshmi, Boppana

    2013-01-01

In radio receivers, Digital Down Converters (DDC) are used to translate the signal from the intermediate frequency level to baseband. A DDC also decimates the oversampled signal to a lower sample rate, eliminating the need for a high-end digital signal processor. In this paper we have implemented an architecture for a DDC employing the CORDIC algorithm, which down-converts a 70 MHz (3G) IF signal to a 200 kHz baseband GSM signal, with an SFDR greater than 100 dB. The implemented architecture reduces the hardware resource requirements by 15 percent when compared with other architectures available in the literature, owing to the elimination of explicit multipliers and of a quadrature phase shifter for mixing.
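    For reference, rotation-mode CORDIC generates the mixer's sine and cosine with shifts and adds only, which is what removes the explicit multipliers. This floating-point Python sketch shows the iteration; a hardware DDC would use fixed-point shift-add stages and a precomputed gain constant.

```python
import math

def cordic_sin_cos(theta, iterations=16):
    """Rotation-mode CORDIC for theta in [-pi/2, pi/2]:
    returns (cos(theta), sin(theta)) using only add/shift-style updates."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = 1.0
    for i in range(iterations):
        gain *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = gain, 0.0, theta        # pre-scale by the CORDIC gain
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0   # rotate toward zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x, y

print(cordic_sin_cos(0.6))
print(math.cos(0.6), math.sin(0.6))
```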

  3. Implementation and Optimization of Image Processing Algorithms on Embedded GPU

    NASA Astrophysics Data System (ADS)

    Singhal, Nitin; Yoo, Jin Woo; Choi, Ho Yeol; Park, In Kyu

In this paper, we analyze the key factors underlying the implementation, evaluation, and optimization of image processing and computer vision algorithms on an embedded GPU using the OpenGL ES 2.0 shader model. First, we present the characteristics of the embedded GPU and its inherent advantage when compared to an embedded CPU. Additionally, we propose techniques to achieve increased performance with optimized shader design. To show the effectiveness of the proposed techniques, we employ cartoon-style non-photorealistic rendering (NPR), speeded-up robust feature (SURF) detection, and stereo matching as our example algorithms. Performance is evaluated in terms of the execution time and speed-up achieved in comparison with the implementation on an embedded CPU.

  4. Implementing wide baseline matching algorithms on a graphics processing unit.

    SciTech Connect

    Rothganger, Fredrick H.; Larson, Kurt W.; Gonzales, Antonio Ignacio; Myers, Daniel S.

    2007-10-01

Wide baseline matching is the state of the art for object recognition and image registration problems in computer vision. Though effective, the computational expense of these algorithms limits their application to many real-world problems. The performance of wide baseline matching algorithms may be improved by using a graphics processing unit as a fast multithreaded co-processor. In this paper, we present an implementation of the difference-of-Gaussian feature extractor, based on the CUDA system of GPU programming developed by NVIDIA, and implemented on their hardware. For a 2000x2000 pixel image, the GPU-based method executes nearly thirteen times faster than a comparable CPU-based method, with no significant loss of accuracy.

  5. Control algorithm implementation for a redundant degree of freedom manipulator

    NASA Technical Reports Server (NTRS)

    Cohan, Steve

    1991-01-01

    This project's purpose is to develop and implement control algorithms for a kinematically redundant robotic manipulator. The manipulator is being developed concurrently by Odetics Inc., under internal research and development funding. This SBIR contract supports algorithm conception, development, and simulation, as well as software implementation and integration with the manipulator hardware. The Odetics Dexterous Manipulator is a lightweight, high strength, modular manipulator being developed for space and commercial applications. It has seven fully active degrees of freedom, is electrically powered, and is fully operational in 1 G. The manipulator consists of five self-contained modules. These modules join via simple quick-disconnect couplings and self-mating connectors which allow rapid assembly/disassembly for reconfiguration, transport, or servicing. Each joint incorporates a unique drive train design which provides zero backlash operation, is insensitive to wear, and is single fault tolerant to motor or servo amplifier failure. The sensing system is also designed to be single fault tolerant. Although the initial prototype is not space qualified, the design is well-suited to meeting space qualification requirements. The control algorithm design approach is to develop a hierarchical system with well defined access and interfaces at each level. The high level endpoint/configuration control algorithm transforms manipulator endpoint position/orientation commands to joint angle commands, providing task space motion. At the same time, the kinematic redundancy is resolved by controlling the configuration (pose) of the manipulator, using several different optimizing criteria. The center level of the hierarchy servos the joints to their commanded trajectories using both linear feedback and model-based nonlinear control techniques. The lowest control level uses sensed joint torque to close torque servo loops, with the goal of improving the manipulator dynamic behavior

  6. Multiplatform GPGPU implementation of the active contours without edges algorithm

    NASA Astrophysics Data System (ADS)

    Zavala-Romero, Olmo; Meyer-Baese, Anke; Meyer-Baese, Uwe

    2012-05-01

An OpenCL implementation of the Active Contours Without Edges algorithm is presented. The proposed algorithm uses General Purpose Computing on Graphics Processing Units (GPGPU) to accelerate the original model by parallelizing the two main steps of the segmentation process: the computation of the Signed Distance Function (SDF) and the evolution of the segmented curve. The proposed scheme for the computation of the SDF is based on the iterative construction of partial Voronoi diagrams of a reduced dimension and obtains the exact Euclidean distance in a time of order O(N/p), where N is the number of pixels and p the number of processors. With high resolution images the segmentation algorithm runs 10 times faster than its equivalent sequential implementation. This work is being done as open source software that, being programmed in OpenCL, can be used on different platforms, allowing a broad number of final users, and can be applied in different areas of computer vision, like medical imaging, tracking, robotics, etc. This work uses OpenGL to visualize the algorithm results in real time.
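    The SDF step singled out above can be expressed compactly with an exact Euclidean distance transform. This scipy sketch is a sequential equivalent of the paper's Voronoi-based GPU computation, included for illustration only; it is not the authors' OpenCL code.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance function of a binary mask: negative inside the
    segmented region, positive outside, near zero on the contour."""
    inside = distance_transform_edt(mask)     # distance to background
    outside = distance_transform_edt(~mask)   # distance to the region
    return outside - inside

# Initialize the level set from a circular seed region.
yy, xx = np.mgrid[:128, :128]
mask = (yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2
phi = signed_distance(mask)
print(phi.min(), phi.max())   # about -30 at the center, ~60 at the corners
```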

  7. Decoding the brain's algorithm for categorization from its neural implementation.

    PubMed

    Mack, Michael L; Preston, Alison R; Love, Bradley C

    2013-10-21

    Acts of cognition can be described at different levels of analysis: what behavior should characterize the act, what algorithms and representations underlie the behavior, and how the algorithms are physically realized in neural activity [1]. Theories that bridge levels of analysis offer more complete explanations by leveraging the constraints present at each level [2-4]. Despite the great potential for theoretical advances, few studies of cognition bridge levels of analysis. For example, formal cognitive models of category decisions accurately predict human decision making [5, 6], but whether model algorithms and representations supporting category decisions are consistent with underlying neural implementation remains unknown. This uncertainty is largely due to the hurdle of forging links between theory and brain [7-9]. Here, we tackle this critical problem by using brain response to characterize the nature of mental computations that support category decisions to evaluate two dominant, and opposing, models of categorization. We found that brain states during category decisions were significantly more consistent with latent model representations from exemplar [5] rather than prototype theory [10, 11]. Representations of individual experiences, not the abstraction of experiences, are critical for category decision making. Holding models accountable for behavior and neural implementation provides a means for advancing more complete descriptions of the algorithms of cognition.

  8. A bioinspired collision detection algorithm for VLSI implementation

    NASA Astrophysics Data System (ADS)

    Cuadri, J.; Linan, G.; Stafford, R.; Keil, M. S.; Roca, E.

    2005-06-01

In this paper a bioinspired algorithm for collision detection is proposed, based on previous models of the locust (Locusta migratoria) visual system reported by F.C. Rind and her group at the University of Newcastle-upon-Tyne. The algorithm is suitable for VLSI implementation in standard CMOS technologies as a system-on-chip for automotive applications. The working principle of the algorithm is to process a video stream that represents the current scenario, and to fire an alarm whenever an object approaches on a collision course. Moreover, it establishes a scale of warning states, from no danger to collision alarm, depending on the activity detected in the current scenario. In the worst case, the minimum time before collision at which the model fires the collision alarm is 40 msec (1 frame before, at 25 frames per second). Since the average time to successfully fire an airbag system is 2 msec, even in the worst case this algorithm would be very helpful to more efficiently arm the airbag system, or even take some kind of collision avoidance countermeasures. Furthermore, two additional modules have been included: a "Topological Feature Estimator" and an "Attention Focusing Algorithm". The former takes into account the shape of the approaching object to decide whether it is a person, a road line or a car. This helps to take more adequate countermeasures and to filter false alarms. The latter centres the processing power on the most active zones of the input frame, thus saving memory and processing time resources.

  9. Experience with a Genetic Algorithm Implemented on a Multiprocessor Computer

    NASA Technical Reports Server (NTRS)

    Plassman, Gerald E.; Sobieszczanski-Sobieski, Jaroslaw

    2000-01-01

Numerical experiments were conducted to find out the extent to which a Genetic Algorithm (GA) may benefit from a multiprocessor implementation, considering, on one hand, that analyses of individual designs in a population are independent of each other so that they may be executed concurrently on separate processors, and, on the other hand, that there are some operations in a GA that cannot be so distributed. The algorithm experimented with was based on a Gaussian distribution rather than bit exchange in the GA reproductive mechanism, and the test case was a hub frame structure of up to 1080 design variables. The experimentation, engaging up to 128 processors, confirmed expectations of radical elapsed-time reductions compared to a conventional single-processor implementation. It also demonstrated that the time spent in the non-distributable parts of the algorithm and the attendant cross-processor communication may have a very detrimental effect on the efficient utilization of the multiprocessor machine and on the number of processors that can be used effectively in a concurrent manner. Three techniques were devised and tested to mitigate that effect, resulting in efficiency increasing to exceed 99 percent.

  10. Implementation of several mathematical algorithms to breast tissue density classification

    NASA Astrophysics Data System (ADS)

    Quintana, C.; Redondo, M.; Tirao, G.

    2014-02-01

    The accuracy of mammographic abnormality detection methods is strongly dependent on breast tissue characteristics, where a dense breast tissue can hide lesions causing cancer to be detected at later stages. In addition, breast tissue density is widely accepted to be an important risk indicator for the development of breast cancer. This paper presents the implementation and the performance of different mathematical algorithms designed to standardize the categorization of mammographic images, according to the American College of Radiology classifications. These mathematical techniques are based on intrinsic properties calculations and on comparison with an ideal homogeneous image (joint entropy, mutual information, normalized cross correlation and index Q) as categorization parameters. The algorithms evaluation was performed on 100 cases of the mammographic data sets provided by the Ministerio de Salud de la Provincia de Córdoba, Argentina—Programa de Prevención del Cáncer de Mama (Department of Public Health, Córdoba, Argentina, Breast Cancer Prevention Program). The obtained breast classifications were compared with the expert medical diagnostics, showing a good performance. The implemented algorithms revealed a high potentiality to classify breasts into tissue density categories.
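    A numpy sketch of two of the four categorization parameters named above, joint entropy and mutual information, including the comparison against an ideal homogeneous image; the bin count and the synthetic tissue images are illustrative assumptions, not the paper's mammographic data.

```python
import numpy as np

def joint_entropy(img_a, img_b, bins=64):
    """Joint entropy (bits) from the 2D intensity histogram of two images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = joint / joint.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(img_a, img_b, bins=64):
    """MI(A, B) = H(A) + H(B) - H(A, B)."""
    def entropy(img):
        hist, _ = np.histogram(img.ravel(), bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    return entropy(img_a) + entropy(img_b) - joint_entropy(img_a, img_b, bins)

rng = np.random.default_rng(3)
tissue = rng.normal(0.8, 0.05, (128, 128))      # textured stand-in image
homogeneous = np.full((128, 128), 0.8)          # ideal homogeneous reference
print(mutual_information(tissue, tissue))       # high: image predicts itself
print(mutual_information(tissue, homogeneous))  # ~0: constant reference
```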

  11. A Modified ART 1 Algorithm more Suitable for VLSI Implementations.

    PubMed

    Linares-Barranco, Bernabe; Serrano-Gotarredona, Teresa

    1996-08-01

This paper presents a modification to the original ART 1 algorithm (Carpenter and Grossberg, 1987a, "A massively parallel architecture for a self-organizing neural pattern recognition machine", Computer Vision, Graphics, and Image Processing, 37, 54-115) that is conceptually similar, can be implemented in hardware with less sophisticated building blocks, and maintains the computational capabilities of the originally proposed algorithm. This modified ART 1 algorithm (which we will call here ART 1(m)) is the result of hardware-motivated simplifications investigated during the design of an actual ART 1 chip (Serrano-Gotarredona et al., 1994, Proc. 1994 IEEE Int. Conf. Neural Networks, Vol. 3, pp. 1912-1916; Serrano-Gotarredona and Linares-Barranco, 1996, IEEE Trans. VLSI Systems, in press). The purpose of this paper is simply to justify theoretically that the modified algorithm preserves the computational properties of the original one and to study the difference in behavior between the two approaches. Copyright 1996 Elsevier Science Ltd.

  12. Developing and Implementing the Data Mining Algorithms in RAVEN

    SciTech Connect

    Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea; Rabiti, Cristian

    2015-09-01

The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics code models the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand the data, i.e. to recognize patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.

  13. Infrared Jitter Imaging Data Reduction: Algorithms and Implementation

    NASA Astrophysics Data System (ADS)

    Devillard, Nicolas

Jitter imaging (also known as microscanning) is probably one of the most efficient ways to perform astronomical observations in the infrared. It requires very efficient filtering and recentering methods to produce the best possible output from raw data. This paper discusses issues attached to Poisson offset generation, efficient infrared sky filtering, offset recovery between planes through cross-correlation and/or point pattern recognition techniques, plane shifting with subpixel resolution through various kernel-based interpolation schemes, and 3D filtering for plane accumulation. Several algorithms are described for each step, with automatic data processing in pipeline mode (i.e., without user interaction) in mind, as intended for the Very Large Telescope. The implementation of these algorithms in optimized ANSI C (the eclipse library) is also described here.

  14. Neural network implementations of data association algorithms for sensor fusion

    NASA Technical Reports Server (NTRS)

    Brown, Donald E.; Pittard, Clarence L.; Martin, Worthy N.

    1989-01-01

    The paper is concerned with locating a time-varying set of entities in a fixed field when the entities are sensed at discrete time instances. At a given time instant a collection of bivariate Gaussian sensor reports is produced, and these reports estimate the location of a subset of the entities present in the field. A database of reports is maintained, which ideally should contain one report for each entity sensed. Whenever a collection of sensor reports is received, the database must be updated to reflect the new information. This updating requires association processing between the database reports and the new sensor reports to determine which pairs of sensor and database reports correspond to the same entity. Algorithms for performing this association processing are presented. Neural network implementations of the algorithms are presented, along with simulation results comparing the approaches.
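
    To make the association step concrete, here is a minimal NumPy sketch of gated nearest-neighbor association between bivariate Gaussian reports; the greedy pairing rule, the chi-square gate, and all names are illustrative assumptions, not the paper's neural network formulation.

      import numpy as np

      def associate(db_means, db_covs, new_means, new_covs, gate=9.21):
          """Greedy gated nearest-neighbor association.  A (database, sensor)
          pair is admissible when its squared Mahalanobis distance lies inside
          the chi-square gate (9.21 is roughly the 99% point for 2 degrees of
          freedom).  Returns a list of (db_index, new_index) pairs."""
          pairs, used = [], set()
          for j, (m_new, c_new) in enumerate(zip(new_means, new_covs)):
              best, best_d = None, gate
              for i, (m_db, c_db) in enumerate(zip(db_means, db_covs)):
                  if i in used:
                      continue
                  innov = m_new - m_db          # innovation vector
                  s = c_new + c_db              # innovation covariance
                  d2 = innov @ np.linalg.solve(s, innov)
                  if d2 < best_d:
                      best, best_d = i, d2
              if best is not None:              # ungated reports start new tracks
                  used.add(best)
                  pairs.append((best, j))
          return pairs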

  15. Kodiak: An Implementation Framework for Branch and Bound Algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Andrew P.; Munoz, Cesar A.; Narkawicz, Anthony J.; Markevicius, Mantas

    2015-01-01

    Recursive branch and bound algorithms are often used to refine and isolate solutions to several classes of global optimization problems. A rigorous computation framework for the solution of systems of equations and inequalities involving nonlinear real arithmetic over hyper-rectangular variable and parameter domains is presented. It is derived from a generic branch and bound algorithm that has been formally verified, and utilizes self-validating enclosure methods, namely interval arithmetic and, for polynomials and rational functions, Bernstein expansion. Since bounds computed by these enclosure methods are sound, this approach may be used reliably in software verification tools. Advantage is taken of the partial derivatives of the constraint functions involved in the system, firstly to reduce the branching factor by the use of bisection heuristics and secondly to permit the computation of bifurcation sets for systems of ordinary differential equations. The associated software development, Kodiak, is presented, along with examples of three different branch and bound problem types it implements.
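
    As a toy illustration of the branch-and-bound pattern Kodiak implements (not the formally verified algorithm itself), the sketch below isolates zeros of a function over an interval using a sound interval enclosure: boxes whose enclosure excludes zero are pruned, and the rest are bisected until they are small.

      def bb_roots(f_interval, lo, hi, tol=1e-6):
          """Return subintervals of [lo, hi] of width <= tol that may contain
          a zero of f.  `f_interval(a, b)` must return a sound enclosure
          (fmin, fmax) of f over [a, b]."""
          boxes, candidates = [(lo, hi)], []
          while boxes:
              a, b = boxes.pop()
              fmin, fmax = f_interval(a, b)
              if fmin > 0 or fmax < 0:          # enclosure excludes 0: prune
                  continue
              if b - a <= tol:                  # small enough: report
                  candidates.append((a, b))
              else:                             # branch: bisect the box
                  m = 0.5 * (a + b)
                  boxes += [(a, m), (m, b)]
          return candidates

      # Exact interval extension of f(x) = x**2 - 2 on a box [a, b].
      def f_iv(a, b):
          lo2 = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
          return lo2 - 2.0, max(a * a, b * b) - 2.0

      print(bb_roots(f_iv, 0.0, 2.0))           # a tiny box around sqrt(2)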

  16. A comparative analysis of GPU implementations of spectral unmixing algorithms

    NASA Astrophysics Data System (ADS)

    Sanchez, Sergio; Plaza, Antonio

    2011-11-01

    Spectral unmixing is a very important task for remotely sensed hyperspectral data exploitation. It involves the separation of a mixed pixel spectrum into its pure component spectra (called endmembers) and the estimation of the proportion (abundance) of each endmember in the pixel. Over recent years, several algorithms have been proposed for: i) automatic extraction of endmembers, and ii) estimation of the abundance of endmembers in each pixel of the hyperspectral image. The latter step usually imposes two constraints in abundance estimation: the non-negativity constraint (meaning that the estimated abundances cannot be negative) and the sum-to-one constraint (meaning that the sum of endmember fractional abundances for a given pixel must be unity). These two steps comprise a hyperspectral unmixing chain, which can be very time-consuming (particularly for high-dimensional hyperspectral images). Parallel computing architectures have offered an attractive solution for fast unmixing of hyperspectral data sets, but these systems are expensive and difficult to adapt to on-board data processing scenarios, in which low-weight and low-power integrated components are essential to reduce mission payload and obtain analysis results in (near) real-time. In this paper, we perform an inter-comparison of parallel algorithms for automatic extraction of pure spectral signatures or endmembers and for estimation of the abundance of endmembers in each pixel of the scene. The compared techniques are implemented in graphics processing units (GPUs). These hardware accelerators can bridge the gap towards on-board processing of this kind of data. The considered algorithms comprise the orthogonal subspace projection (OSP), iterative error analysis (IEA), and N-FINDR algorithms for endmember extraction, as well as unconstrained, partially constrained, and fully constrained abundance estimation. The considered implementations are inter-compared using different GPU architectures and hyperspectral images.
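
    As a point of reference for the abundance-estimation step, here is a minimal NumPy sketch of the unconstrained variant, with crude clipping and renormalization standing in for the constrained solvers; the real partially and fully constrained estimators solve constrained least-squares problems instead.

      import numpy as np

      def unconstrained_abundances(E, pixels):
          """Least-squares abundances per pixel.
          E: (bands, endmembers) endmember signatures;
          pixels: (bands, n) observed spectra.
          Returns an (endmembers, n) abundance matrix, unconstrained."""
          return np.linalg.lstsq(E, pixels, rcond=None)[0]

      def crude_constrained(E, pixels):
          # Non-negativity by clipping, sum-to-one by renormalizing: a rough
          # stand-in for the proper quadratic-programming formulation.
          a = np.clip(unconstrained_abundances(E, pixels), 0.0, None)
          return a / (a.sum(axis=0, keepdims=True) + 1e-12)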

  17. DSP Implementation of the Multiscale Retinex Image Enhancement Algorithm

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2004-01-01

    The Retinex is a general-purpose image enhancement algorithm that is used to produce good visual representations of scenes. It performs a non-linear spatial/spectral transform that synthesizes strong local contrast enhancement and color constancy. A real-time, video frame rate implementation of the Retinex is required to meet the needs of various potential users. Retinex processing contains a relatively large number of complex computations, thus to achieve real-time performance using current technologies requires specialized hardware and software. In this paper we discuss the design and development of a digital signal processor (DSP) implementation of the Retinex. The target processor is a Texas Instruments TMS320C6711 floating point DSP. NTSC video is captured using a dedicated frame grabber card, Retinex processed, and displayed on a standard monitor. We discuss the optimizations used to achieve real-time performance of the Retinex and also describe our future plans on using alternative architectures.

  18. DSP Implementation of the Retinex Image Enhancement Algorithm

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2004-01-01

    The Retinex is a general-purpose image enhancement algorithm that is used to produce good visual representations of scenes. It performs a non-linear spatial/spectral transform that synthesizes strong local contrast enhancement and color constancy. A real-time, video frame rate implementation of the Retinex is required to meet the needs of various potential users. Retinex processing contains a relatively large number of complex computations, thus to achieve real-time performance using current technologies requires specialized hardware and software. In this paper we discuss the design and development of a digital signal processor (DSP) implementation of the Retinex. The target processor is a Texas Instruments TMS320C6711 floating point DSP. NTSC video is captured using a dedicated frame-grabber card, Retinex processed, and displayed on a standard monitor. We discuss the optimizations used to achieve real-time performance of the Retinex and also describe our future plans on using alternative architectures.
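
    Since both records describe the same transform, a compact NumPy/SciPy sketch of the underlying single-scale and multiscale Retinex may be useful; the surround scales below are commonly quoted values, not necessarily those of the DSP implementation, and the color-restoration and gain/offset stages are omitted.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def single_scale_retinex(img, sigma):
          """log(image) minus log of its Gaussian surround (grayscale input)."""
          img = img.astype(np.float64) + 1.0          # avoid log(0)
          return np.log(img) - np.log(gaussian_filter(img, sigma))

      def multiscale_retinex(img, sigmas=(15, 80, 250)):
          """Equal-weight average of single-scale outputs over three scales."""
          return np.mean([single_scale_retinex(img, s) for s in sigmas], axis=0)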

  19. Neuromorphic implementations of neurobiological learning algorithms for spiking neural networks.

    PubMed

    Walter, Florian; Röhrbein, Florian; Knoll, Alois

    2015-12-01

    The application of biologically inspired methods in design and control has a long tradition in robotics. Unlike previous approaches in this direction, the emerging field of neurorobotics not only mimics biological mechanisms at a relatively high level of abstraction but employs highly realistic simulations of actual biological nervous systems. Even today, carrying out these simulations efficiently at appropriate timescales is challenging. Neuromorphic chip designs specially tailored to this task therefore offer an interesting perspective for neurorobotics. Unlike von Neumann CPUs, these chips cannot be simply programmed with a standard programming language. Like real brains, their functionality is determined by the structure of neural connectivity and synaptic efficacies. Enabling higher cognitive functions for neurorobotics consequently requires the application of neurobiological learning algorithms to adjust synaptic weights in a biologically plausible way. In this paper, we therefore investigate how to program neuromorphic chips by means of learning. First, we provide an overview of selected neuromorphic chip designs and analyze them in terms of neural computation, communication systems, and software infrastructure. On the theoretical side, we review neurobiological learning techniques. Based on this overview, we then examine on-die implementations of these learning algorithms on the considered neuromorphic chips. A final discussion puts the findings of this work into context and highlights how neuromorphic hardware can potentially advance the field of autonomous robot systems. The paper thus gives an in-depth overview of neuromorphic implementations of basic mechanisms of synaptic plasticity which are required to realize advanced cognitive capabilities with spiking neural networks.
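
    As a concrete instance of the plasticity rules surveyed here, a minimal pair-based STDP update in NumPy (exponential windows with illustrative constants; neuromorphic chips typically realize this with on-die spike traces rather than explicit spike times):

      import numpy as np

      def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
          """Pair-based STDP weight change for one (pre, post) spike pair.
          Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
          dt = t_post - t_pre
          if dt > 0:
              return a_plus * np.exp(-dt / tau)
          return -a_minus * np.exp(dt / tau)

      print(stdp_dw(10.0, 15.0), stdp_dw(15.0, 10.0))   # LTP, then LTD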

  20. Purgatorio - A new implementation of the Inferno algorithm

    SciTech Connect

    Wilson, B; Sonnad, V; Sterne, P; Isaacs, W

    2005-03-29

    For astrophysical applications, as well as modeling laser-produced plasmas, there is a continual need for equation-of-state data over a wide domain of physical conditions. This paper presents algorithmic aspects of computing the Helmholtz free energy of plasma electrons for temperatures spanning from a few Kelvin to several keV, and densities ranging from essentially isolated-ion conditions to such large compressions that most bound orbitals become delocalized. The objective is high-precision results in order to compute pressure and other thermodynamic quantities by numerical differentiation. This approach has the advantage that internal thermodynamic self-consistency is ensured, regardless of the specific physical model, but at the cost of very stringent numerical tolerances for each operation. The computational aspects we address in this paper are faced by any model that relies on input from the quantum mechanical spectrum of a spherically symmetric Hamiltonian operator. The particular physical model we employ is that of INFERNO: a spherically averaged ion embedded in jellium. An overview of PURGATORIO, a new implementation of the INFERNO equation-of-state model, is presented. The new algorithm emphasizes a novel decimation scheme for automatically resolving the structure of the continuum density of states, circumventing limitations of the pseudo-R-matrix algorithm previously utilized.
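
    The numerical-differentiation strategy the abstract refers to can be pictured with a one-line central difference: pressure follows from the volume derivative of the free energy, which is why each free-energy evaluation needs tolerances far tighter than the differencing step. A schematic sketch, with a hypothetical `free_energy(V, T)` callable:

      def pressure(free_energy, V, T, h=1e-6):
          """Central-difference estimate of P = -(dF/dV) at constant T.
          free_energy(V, T) must be smooth and converged well below h**2,
          or the difference quotient loses its significant digits."""
          dF = free_energy(V * (1 + h), T) - free_energy(V * (1 - h), T)
          return -dF / (2 * h * V)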

  1. Cascade Error Projection: A Learning Algorithm for Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Daud, Taher

    1996-01-01

    In this paper, we work out a detailed mathematical analysis for a new learning algorithm termed Cascade Error Projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters. Furthermore, the CEP learning algorithm operates on only one layer, whereas the other set of weights can be calculated deterministically. In association with the dynamical stepsize change concept, which converts the weight update from an infinite space into a finite space, the relation between the current stepsize and the previous energy level is also given, and the estimation procedure for the optimal stepsize is used for validation of our proposed technique. Weight values of zero are used to start the learning for every layer, and a single hidden unit is applied instead of a pool of candidate hidden units as in the cascade correlation scheme. Therefore, simplicity in hardware implementation is also obtained. Furthermore, this analysis allows us to select from other methods (such as conjugate gradient descent or Newton's second-order method) one which will be a good candidate for the learning technique. The choice of learning technique depends on the constraints of the problem (e.g., speed, performance, and hardware implementation); one technique may be more suitable than others. Moreover, for a discrete weight space, the theoretical analysis presents the capability of learning with limited weight quantization. Finally, 5- to 8-bit parity and chaotic time series prediction problems are investigated; the simulation results demonstrate that 4-bit or more weight quantization is sufficient for learning a neural network using CEP. In addition, it is demonstrated that this technique is able to compensate for lower-bit weight resolution by incorporating additional hidden units. However, generalization results may suffer somewhat with lower-bit weight quantization.

  2. All-Optical Implementation of the Ant Colony Optimization Algorithm

    PubMed Central

    Hu, Wenchao; Wu, Kan; Shum, Perry Ping; Zheludev, Nikolay I.; Soci, Cesare

    2016-01-01

    We report all-optical implementation of the optimization algorithm for the famous “ant colony” problem. Ant colonies progressively optimize pathway to food discovered by one of the ants through identifying the discovered route with volatile chemicals (pheromones) secreted on the way back from the food deposit. Mathematically this is an important example of graph optimization problem with dynamically changing parameters. Using an optical network with nonlinear waveguides to represent the graph and a feedback loop, we experimentally show that photons traveling through the network behave like ants that dynamically modify the environment to find the shortest pathway to any chosen point in the graph. This proof-of-principle demonstration illustrates how transient nonlinearity in the optical system can be exploited to tackle complex optimization problems directly, on the hardware level, which may be used for self-routing of optical signals in transparent communication networks and energy flow in photonic systems. PMID:27222098

  3. All-Optical Implementation of the Ant Colony Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Wenchao; Wu, Kan; Shum, Perry Ping; Zheludev, Nikolay I.; Soci, Cesare

    2016-05-01

    We report all-optical implementation of the optimization algorithm for the famous “ant colony” problem. Ant colonies progressively optimize pathway to food discovered by one of the ants through identifying the discovered route with volatile chemicals (pheromones) secreted on the way back from the food deposit. Mathematically this is an important example of graph optimization problem with dynamically changing parameters. Using an optical network with nonlinear waveguides to represent the graph and a feedback loop, we experimentally show that photons traveling through the network behave like ants that dynamically modify the environment to find the shortest pathway to any chosen point in the graph. This proof-of-principle demonstration illustrates how transient nonlinearity in the optical system can be exploited to tackle complex optimization problems directly, on the hardware level, which may be used for self-routing of optical signals in transparent communication networks and energy flow in photonic systems.
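
    The two records above describe an optical realization; for intuition only, here is a conventional software sketch of ant-colony shortest-path search with pheromone evaporation and deposit (all parameters are illustrative, and nothing here models the nonlinear waveguides).

      import random

      def aco_shortest_path(graph, src, dst, n_ants=50, n_iter=100,
                            rho=0.1, q=1.0):
          """graph: dict node -> {neighbor: length}.  Returns (path, length)."""
          tau = {(u, v): 1.0 for u in graph for v in graph[u]}   # pheromone
          best, best_len = None, float("inf")
          for _ in range(n_iter):
              walks = []
              for _ in range(n_ants):
                  node, path, seen = src, [src], {src}
                  while node != dst:
                      moves = [v for v in graph[node] if v not in seen]
                      if not moves:
                          break                        # dead end: abandon ant
                      weights = [tau[(node, v)] / graph[node][v] for v in moves]
                      node = random.choices(moves, weights)[0]
                      path.append(node)
                      seen.add(node)
                  if node == dst:
                      length = sum(graph[a][b] for a, b in zip(path, path[1:]))
                      walks.append((path, length))
                      if length < best_len:
                          best, best_len = path, length
              for k in tau:                            # evaporation
                  tau[k] *= 1 - rho
              for path, length in walks:               # deposit on used edges
                  for a, b in zip(path, path[1:]):
                      tau[(a, b)] += q / length
          return best, best_len

      g = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5},
           "C": {"D": 1}, "D": {}}
      print(aco_shortest_path(g, "A", "D"))            # (['A','B','C','D'], 3)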

  4. FPGA implementation of Generalized Hebbian Algorithm for texture classification.

    PubMed

    Lin, Shiow-Jyu; Hwang, Wen-Jyi; Lee, Wei-Hao

    2012-01-01

    This paper presents a novel hardware architecture for principal component analysis. The architecture is based on the Generalized Hebbian Algorithm (GHA) because of its simplicity and effectiveness. The architecture is separated into three portions: the weight vector updating unit, the principal computation unit and the memory unit. In the weight vector updating unit, the computation of different synaptic weight vectors shares the same circuit for reducing the area costs. To show the effectiveness of the circuit, a texture classification system based on the proposed architecture is physically implemented by Field Programmable Gate Array (FPGA). It is embedded in a System-On-Programmable-Chip (SOPC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient design for attaining both high speed performance and low area costs.
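
    The weight update this architecture parallelizes is Sanger's rule; a minimal NumPy sketch (software only, with an arbitrary toy data stream, not the FPGA circuit):

      import numpy as np

      def gha_update(W, x, eta):
          """One Generalized Hebbian Algorithm (Sanger's rule) step.
          Rows of W (m, n) converge to the top-m principal components of the
          zero-mean input stream; x is one (n,) sample."""
          y = W @ x
          # The lower-triangular term performs the Gram-Schmidt-like deflation
          # that decorrelates successive principal components.
          W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
          return W

      rng = np.random.default_rng(0)
      data = rng.normal(size=(2000, 5)) * np.array([3.0, 2.0, 1.0, 0.5, 0.1])
      W = rng.normal(scale=0.1, size=(2, 5))
      for x in data:
          W = gha_update(W, x, eta=0.01)               # rows approach the PCs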

  5. The implementation of Grover's algorithm in optically driven quantum dots

    NASA Astrophysics Data System (ADS)

    Yin, W.; Liang, J. Q.; Yan, Q. W.

    2006-11-01

    In this paper, we study the implementation of Grover's algorithm using the system of three identical quantum dots (QDs) coupled by a multi-frequency optical field. Our result shows that increasing the electric field strength A speeds up the oscillations of the occupations of the excited states rather than increasing the occupation probabilities of those states. The larger the detuning of the field from resonance, the fewer the states which can be used as qubits. Compared with a multi-frequency external field, a single-frequency external field will generate much lower amplitudes of the excited states under the same coupling strength A and interdot Coulomb interaction V. However, when the three quantum dots are coupled with a single-frequency external field, these amplitudes increase on increasing the coupling strength A or decreasing the interdot Coulomb interaction V.
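
    For background, the textbook Grover iteration the paper builds on (oracle phase flip followed by inversion about the mean) fits in a short statevector simulation; this sketch is the abstract algorithm, not the quantum-dot driving scheme.

      import numpy as np

      def grover_search(n_qubits, marked):
          """Simulate Grover search for one marked basis state."""
          n = 2 ** n_qubits
          state = np.full(n, 1 / np.sqrt(n))           # uniform superposition
          for _ in range(int(round(np.pi / 4 * np.sqrt(n)))):
              state[marked] *= -1                      # oracle: phase flip
              state = 2 * state.mean() - state         # inversion about mean
          return int(np.argmax(np.abs(state) ** 2))

      assert grover_search(3, marked=5) == 5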

  6. An overview of SuperLU: Algorithms, implementation, and user interface

    SciTech Connect

    Li, Xiaoye S.

    2003-09-30

    We give an overview of the algorithms, design philosophy, and implementation techniques in the software SuperLU, for solving sparse unsymmetric linear systems. In particular, we highlight the differences between the sequential SuperLU (including its multithreaded extension) and parallel SuperLU_DIST. These include the numerical pivoting strategy, the ordering strategy for preserving sparsity, the ordering in which the updating tasks are performed, the numerical kernel, and the parallelization strategy. Because of the scalability concern, the parallel code is drastically different from the sequential one. We describe the user interfaces of the libraries, and illustrate how to use the libraries most efficiently depending on some matrix characteristics. Finally, we give some examples of how the solver has been used in large-scale scientific applications, and the performance.
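
    The sequential library described here backs SciPy's sparse direct solver, so its factorization (including the fill-reducing column orderings mentioned above) can be exercised from Python; a small smoke test:

      import numpy as np
      from scipy.sparse import csc_matrix
      from scipy.sparse.linalg import splu

      A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                               [2.0, 5.0, 1.0],
                               [0.0, 3.0, 6.0]]))
      lu = splu(A)                        # SuperLU factorization object
      # The permc_spec argument (e.g. "COLAMD", "MMD_AT_PLUS_A") selects the
      # sparsity-preserving column ordering discussed in the overview.
      x = lu.solve(np.array([1.0, 2.0, 3.0]))
      print(np.allclose(A @ x, [1.0, 2.0, 3.0]))       # True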

  7. Optimization of Optical Systems Using Genetic Algorithms: a Comparison Among Different Implementations of The Algorithm

    NASA Astrophysics Data System (ADS)

    López-Medina, Mario E.; Vázquez-Montiel, Sergio; Herrera-Vázquez, Joel

    2008-04-01

    Genetic algorithms (GAs) are a global optimization method that we use in the optimization stage of optical system design. In optical design and optimization, the efficiency and convergence speed of GAs are related to the merit function, the crossover operator, and the mutation operator. In this study we present a comparison between several genetic algorithm implementations on different optical systems, such as an achromatic cemented doublet, an air-spaced doublet, and telescopes. We make the comparison varying the type of design parameters and the number of parameters to be optimized. We also implement the GAs using discrete parameters encoded as binary chains and continuous parameters using real numbers in the chromosome, analyzing the differences in the time taken to find the solution and the precision of the results between discrete and continuous parameters. Additionally, we use different merit functions to optimize the same optical system. We present the obtained results in tables, graphics, and a detailed example, and from the comparison we conclude which is the best way to implement GAs for optical system design and optimization. The programs developed for this work were written in the C programming language, with OSLO used for the simulation of the optical systems.

  8. An Object-Oriented Collection of Minimum Degree Algorithms: Design, Implementation, and Experiences

    NASA Technical Reports Server (NTRS)

    Kumfert, Gary; Pothen, Alex

    1999-01-01

    The multiple minimum degree (MMD) algorithm and its variants have enjoyed 20+ years of research and progress in generating fill-reducing orderings for sparse, symmetric positive definite matrices. Although conceptually simple, efficient implementations of these algorithms are deceptively complex and highly specialized. In this case study, we present an object-oriented library that implements several recent minimum degree-like algorithms. We discuss how object-oriented design forces us to decompose these algorithms in a different manner than earlier codes and demonstrate how this impacts the flexibility and efficiency of our C++ implementation. We compare the performance of our code against other implementations in C or Fortran.

  9. An implementation of continuous genetic algorithm in parameter estimation of predator-prey model

    NASA Astrophysics Data System (ADS)

    Windarto

    2016-03-01

    The genetic algorithm is an optimization method based on the principles of genetics and natural selection in living organisms. The main components of this algorithm are the population of chromosomes (individuals), parent selection, crossover to produce new offspring, and random mutation. In this paper, a continuous genetic algorithm is implemented to estimate the parameters of a predator-prey model of Lotka-Volterra type. For simplicity, all genetic algorithm parameters (selection rate and mutation rate) are held constant throughout the run. It was found that, with a suitably chosen mutation rate, the algorithm estimates these parameters well.
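
    A compact sketch of such a real-coded GA (constant selection and mutation rates, blend crossover) fitting Lotka-Volterra parameters to a synthetic trajectory is given below; the rates, population size, and forward-Euler integrator are illustrative choices, not those of the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate(p, x0=10.0, y0=5.0, dt=0.01, steps=1000):
          """Forward-Euler Lotka-Volterra trajectory (prey x, predator y)."""
          a, b, d, g = p
          out = np.empty((steps, 2)); x, y = x0, y0
          for i in range(steps):
              x, y = x + dt * (a * x - b * x * y), y + dt * (d * x * y - g * y)
              out[i] = x, y
          return out

      target = simulate([1.0, 0.1, 0.075, 1.5])        # synthetic "data"

      def fitness(p):
          mse = np.mean((simulate(p) - target) ** 2)
          return -mse if np.isfinite(mse) else -np.inf  # diverging runs lose

      pop = rng.uniform(0.01, 2.0, size=(40, 4))
      for _ in range(60):
          order = np.argsort([fitness(p) for p in pop])[::-1]
          parents = pop[order[:20]]                     # constant 50% selection
          kids = []
          for _ in range(20):
              i, j = rng.integers(0, 20, size=2)
              w = rng.random(4)                         # blend crossover
              child = w * parents[i] + (1 - w) * parents[j]
              mask = rng.random(4) < 0.1                # constant mutation rate
              child[mask] += rng.normal(0.0, 0.05, size=mask.sum())
              kids.append(np.clip(child, 1e-3, None))
          pop = np.vstack([parents, kids])

      print(pop[int(np.argmax([fitness(p) for p in pop]))])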

  10. An implementable algorithm for the optimal design centering, tolerancing, and tuning problem

    SciTech Connect

    Polak, E.

    1982-05-01

    An implementable master algorithm for solving optimal design centering, tolerancing, and tuning problems is presented. This master algorithm decomposes the original nondifferentiable optimization problem into a sequence of ordinary nonlinear programming problems. The master algorithm generates sequences with accumulation points that are feasible and satisfy a new optimality condition, which is shown to be stronger than the one previously used for these problems.

  11. Performance of new GPU-based scan-conversion algorithm implemented using OpenGL.

    PubMed

    Steelman, William A; Richard, William D

    2011-04-01

    A new GPU-based scan-conversion algorithm implemented using OpenGL is described. The compute performance of this new algorithm running on a modern GPU is compared to the performance of three common scan-conversion algorithms (nearest-neighbor, linear interpolation, and bilinear interpolation) implemented in software on a modern CPU. The quality of the images produced by the algorithm, as measured by signal-to-noise power, is also compared to the quality of the images produced using these three common scan-conversion algorithms.
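
    For readers unfamiliar with scan conversion, the bilinear variant being benchmarked maps each Cartesian output pixel back to fractional (range, angle) coordinates and blends the four surrounding samples; a NumPy sketch with an assumed 90-degree sector geometry (the paper's geometry is not specified here):

      import numpy as np

      def scan_convert(polar, r_max, out_size):
          """Bilinear scan conversion of an (n_r, n_theta) sector image with
          theta spanning -45..45 degrees onto an out_size x out_size grid."""
          n_r, n_t = polar.shape
          X, Z = np.meshgrid(np.linspace(-r_max, r_max, out_size),
                             np.linspace(0.0, r_max, out_size))
          r = np.hypot(X, Z) / r_max * (n_r - 1)          # fractional indices
          t = (np.arctan2(X, Z) + np.pi / 4) / (np.pi / 2) * (n_t - 1)
          valid = (r <= n_r - 1) & (t >= 0) & (t <= n_t - 1)
          r0 = np.clip(np.floor(r).astype(int), 0, n_r - 2)
          t0 = np.clip(np.floor(t).astype(int), 0, n_t - 2)
          fr, ft = r - r0, t - t0
          out = (polar[r0, t0] * (1 - fr) * (1 - ft) +
                 polar[r0 + 1, t0] * fr * (1 - ft) +
                 polar[r0, t0 + 1] * (1 - fr) * ft +
                 polar[r0 + 1, t0 + 1] * fr * ft)
          return np.where(valid, out, 0.0)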

  12. Hardware Implementation of a Lossless Image Compression Algorithm Using a Field Programmable Gate Array

    NASA Astrophysics Data System (ADS)

    Klimesh, M.; Stanton, V.; Watola, D.

    2000-10-01

    We describe a hardware implementation of a state-of-the-art lossless image compression algorithm. The algorithm is based on the LOCO-I (low complexity lossless compression for images) algorithm developed by Weinberger, Seroussi, and Sapiro, with modifications to lower the implementation complexity. In this setup, the compression itself is performed entirely in hardware using a field programmable gate array and a small amount of random access memory. The compression speed achieved is 1.33 Mpixels/second. Our algorithm yields about 15 percent better compression than the Rice algorithm.
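
    The heart of LOCO-I is its median edge detector (MED) predictor, which the hardware evaluates per pixel before entropy coding the residual; the predictor itself is standard and fits in a few lines (the paper's complexity-lowering modifications are not reproduced here).

      def med_predict(a, b, c):
          """LOCO-I / JPEG-LS median edge detector.
          a = left neighbor, b = above neighbor, c = upper-left neighbor."""
          if c >= max(a, b):
              return min(a, b)      # edge suspected above or to the left
          if c <= min(a, b):
              return max(a, b)
          return a + b - c          # smooth region: planar prediction

      # The encoder codes residual = pixel - med_predict(left, above, upper_left).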

  13. Parallel implementation of the FETI-DPEM algorithm for general 3D EM simulations

    NASA Astrophysics Data System (ADS)

    Li, Yu-Jia; Jin, Jian-Ming

    2009-05-01

    A parallel implementation of the electromagnetic dual-primal finite element tearing and interconnecting algorithm (FETI-DPEM) is designed for general three-dimensional (3D) electromagnetic large-scale simulations. As a domain decomposition implementation of the finite element method, the FETI-DPEM algorithm provides fully decoupled subdomain problems and an excellent numerical scalability, and thus is well suited for parallel computation. The parallel implementation of the FETI-DPEM algorithm on a distributed-memory system using the message passing interface (MPI) is discussed in detail along with a few practical guidelines obtained from numerical experiments. Numerical examples are provided to demonstrate the efficiency of the parallel implementation.

  14. Improvement and implementation for Canny edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Qiu, Yue-hong

    2015-07-01

    Edge detection is necessary for image segmentation and pattern recognition. In this paper, an improved Canny edge detection approach is proposed to address the defects of the traditional algorithm. A modified bilateral filter with a compensation function based on pixel intensity similarity judgment is used to smooth the image instead of a Gaussian filter, which preserves edge features while removing noise effectively. To reduce sensitivity to noise in the gradient calculation, the algorithm uses gradient templates in four directions. Finally, Otsu's method adaptively obtains the dual thresholds. The algorithm was simulated with the OpenCV 2.4.0 library under Visual Studio 2010; experimental analysis shows that the improved algorithm detects edge details more effectively and with more adaptability.
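
    A rough software baseline for experimenting with the same ingredients (bilateral smoothing, Otsu-derived dual thresholds, Canny tracking) can be assembled from OpenCV's stock primitives; the half-of-Otsu low threshold below is a common heuristic, not the paper's compensation-function rule, and the file names are placeholders.

      import cv2

      img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
      smoothed = cv2.bilateralFilter(img, 9, 75, 75)   # edge-preserving smoothing
      # Otsu's threshold as the high threshold; half of it as the low one.
      high, _ = cv2.threshold(smoothed, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
      edges = cv2.Canny(smoothed, 0.5 * high, high)
      cv2.imwrite("edges.png", edges)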

  15. Efficient GPU implementation for Particle in Cell algorithm

    SciTech Connect

    Joseph, Rejith George; Ravunnikutty, Girish; Ranka, Sanjay; Klasky, Scott A

    2011-01-01

    The particle-in-cell (PIC) method is widely used in plasma physics to study the trajectories of charged particles under electromagnetic fields. The PIC algorithm is computationally intensive and its time requirements are proportional to the number of charged particles involved in the simulation. The focus of this paper is to parallelize the PIC algorithm on a Graphics Processing Unit (GPU). We present several performance tradeoffs related to the small shared memory and atomic operations on the GPU to achieve high performance.

  16. Experimental implementation of an adiabatic quantum optimization algorithm

    NASA Astrophysics Data System (ADS)

    Steffen, Matthias; van Dam, Wim; Hogg, Tad; Breyta, Greg; Chuang, Isaac

    2003-03-01

    A novel quantum algorithm using adiabatic evolution was recently presented by Ed Farhi [1] and Tad Hogg [2]. This algorithm represents a remarkable discovery because it offers new insights into the usefulness of quantum resources. An experimental demonstration of an adiabatic algorithm has remained beyond reach because it requires an experimentally accessible Hamiltonian which encodes the problem and which must also be smoothly varied over time. We present tools to overcome these difficulties by discretizing the algorithm and extending average Hamiltonian techniques [3]. We used these techniques in the first experimental demonstration of an adiabatic optimization algorithm: solving an instance of the MAXCUT problem using three qubits and nuclear magnetic resonance techniques. We show that there exists an optimal run-time of the algorithm which can be predicted using a previously developed decoherence model. [1] E. Farhi et al., quant-ph/0001106 (2000) [2] T. Hogg, PRA, 61, 052311 (2000) [3] W. Rhim, A. Pines, J. Waugh, PRL, 24,218 (1970)

  17. Implementation and comparison of reconstruction algorithms for two-dimensional optoacoustic tomography using a linear array

    NASA Astrophysics Data System (ADS)

    Modgil, Dimple; La Rivière, Patrick J.

    2009-07-01

    Our goal is to compare and contrast various image reconstruction algorithms for optoacoustic tomography (OAT), assuming a finite linear aperture of the kind that arises when using a linear-array transducer. Because such transducers generally have tall, narrow elements, they are essentially insensitive to out-of-plane acoustic waves, and the usually 3-D OAT problem reduces to a 2-D problem. Algorithms developed for the 3-D problem may not perform optimally in 2-D. We have implemented and evaluated a number of previously described OAT algorithms, including an exact (in 3-D) Fourier-based algorithm and a synthetic-aperture-based algorithm. We have also implemented a 2-D algorithm developed by Norton for reflection-mode tomography that has not, to the best of our knowledge, been applied to OAT before. Our simulation studies of resolution, contrast, noise properties, and signal detectability suggest that the algorithm based on Norton's approach has the best contrast, resolution, and signal detectability.

  18. PDoublePop: An implementation of parallel genetic algorithm for function optimization

    NASA Astrophysics Data System (ADS)

    Tsoulos, Ioannis G.; Tzallas, Alexandros; Tsalikakis, Dimitris

    2016-12-01

    Software implementing parallel genetic algorithms is presented in this article. The underlying genetic algorithm aims to locate the global minimum of a multidimensional function inside a rectangular hyperbox. The proposed software, named PDoublePop, implements a client-server model for parallel genetic algorithms, with advanced features for the local genetic algorithms such as an enhanced stopping rule, an advanced mutation scheme, and periodic application of a local search procedure. The user may code the objective function either in C++ or in Fortran77. The method is tested on a series of well-known test functions and the results are reported.

  19. Endgame implementations for the Efficient Global Optimization (EGO) algorithm

    NASA Astrophysics Data System (ADS)

    Southall, Hugh L.; O'Donnell, Teresa H.; Kaanta, Bryan

    2009-05-01

    Efficient Global Optimization (EGO) is a competent evolutionary algorithm which can be useful for problems with expensive cost functions [1,2,3,4,5]. The goal is to find the global minimum using as few function evaluations as possible. Our research indicates that EGO requires far fewer evaluations than genetic algorithms (GAs). However, neither algorithm always drills down to the absolute minimum; therefore the addition of a final local search technique is indicated. In this paper, we introduce three "endgame" techniques. These techniques can improve optimization efficiency (fewer cost function evaluations) and, if required, they can provide very accurate estimates of the global minimum. We also report results using a different cost function than the one previously used [2,3].

  20. Algorithm and implementation of GPS/VRS network RTK

    NASA Astrophysics Data System (ADS)

    Gao, Chengfa; Yuan, Benyin; Ke, Fuyang; Pan, Shuguo

    2009-06-01

    This paper presents a virtual reference station (VRS) method and its application. Details of how to generate GPS virtual phase observations are discussed in depth. The developed algorithms were successfully applied to an independently developed network digital land investigation system. Experiments were carried out to investigate the system's performance; the results show that the algorithms have good availability and stability. The resulting accuracy of the VRS/RTK positioning was within +/-3.3 cm in the horizontal component and +/-7.9 cm in the vertical component, which meets the requirements of precise digital land investigation.

  1. The design and implementation of MPI master-slave parallel genetic algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Shuping; Cheng, Yanliu

    2013-03-01

    In this paper, the MPI master-slave parallel genetic algorithm is implemented by analyzing the basic genetic algorithm and parallel MPI programming, and by building a Linux cluster. The algorithm is used to test a maximum-value problem (Rosenbrock's function), and the factors influencing the master-slave parallel genetic algorithm are derived from the analysis of the test data. The experimental data show that balanced hardware configuration and software design optimization can improve the performance of the system in a complex computing environment when using the master-slave parallel genetic algorithm.

  2. An efficient and high performance linear recursive variable expansion implementation of the Smith-Waterman algorithm.

    PubMed

    Hasan, Laiq; Al-Ars, Zaid

    2009-01-01

    In this paper, we present an efficient and high performance linear recursive variable expansion (RVE) implementation of the Smith-Waterman (S-W) algorithm and compare it with a traditional linear systolic array implementation. The results demonstrate that the linear RVE implementation performs up to 2.33 times better than the traditional linear systolic array implementation, at the cost of utilizing 2 times more resources.
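
    The recurrence both implementations accelerate is the standard local-alignment fill, whose anti-diagonal cells are mutually independent (this is what systolic-array and RVE schemes exploit); a reference Python version, useful for checking hardware outputs:

      def smith_waterman(s1, s2, match=2, mismatch=-1, gap=-1):
          """Classic O(len(s1) * len(s2)) local-alignment score fill."""
          rows, cols = len(s1) + 1, len(s2) + 1
          H = [[0] * cols for _ in range(rows)]
          best = 0
          for i in range(1, rows):
              for j in range(1, cols):
                  diag = H[i - 1][j - 1] + (match if s1[i - 1] == s2[j - 1]
                                            else mismatch)
                  H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
                  best = max(best, H[i][j])
          return best

      assert smith_waterman("ACACACTA", "AGCACACA") > 0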

  3. Implementation of an efficient labeling algorithm on a pipelined architecture

    NASA Astrophysics Data System (ADS)

    Olsson, Olof J.; Penman, David W.

    1992-11-01

    This paper describes an efficient approach, developed by the authors, for labelling images using a combination of pipeline (Datacube) and host (general purpose computer) processing. The output of the algorithm is a coordinate list of labelled object pixels that facilitates further high level operations.

  4. A block-wise approximate parallel implementation for ART algorithm on CUDA-enabled GPU.

    PubMed

    Fan, Zhongyin; Xie, Yaoqin

    2015-01-01

    Computed tomography (CT) has been widely used to acquire volumetric anatomical information in the diagnosis and treatment of illnesses in many clinics. However, the ART algorithm for reconstruction from under-sampled and noisy projections is still time-consuming. It is the goal of our work to improve a block-wise approximate parallel implementation of the ART algorithm on a CUDA-enabled GPU so as to make the ART algorithm applicable to the clinical environment. The resulting method has several compelling features: (1) the rays are allotted into blocks, making the rays in the same block parallel; (2) the GPU implementation caters to the actual industrial and medical application demand. We test the algorithm on a digital Shepp-Logan phantom, and the results indicate that our method is more efficient than the existing CPU implementation. The high computation efficiency achieved by our algorithm makes it possible for clinicians to obtain real-time 3D images.
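
    The per-ray update being parallelized is the classic Kaczmarz projection; a sequential NumPy sketch (with an illustrative relaxation factor) shows the operation each GPU block approximates for its bundle of rays:

      import numpy as np

      def art_sweep(A, b, x, lam=0.5):
          """One sequential ART (Kaczmarz) sweep: relax x toward each ray
          equation a_i . x = b_i in turn."""
          for a_i, b_i in zip(A, b):
              nrm2 = a_i @ a_i
              if nrm2 > 0.0:
                  x = x + lam * (b_i - a_i @ x) / nrm2 * a_i
          return x

      A = np.array([[1.0, 0.0], [1.0, 1.0]])           # toy 2-ray system
      x = np.zeros(2)
      for _ in range(200):
          x = art_sweep(A, np.array([1.0, 3.0]), x)
      print(x)                                          # converges to [1, 2]
      # The block-wise version applies such corrections for a block of rays
      # in parallel and combines them, trading exactness for GPU throughput.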

  5. An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices, part 2

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Nachtigal, Noel M.

    1990-01-01

    It is shown how the look-ahead Lanczos process (combined with a quasi-minimal residual (QMR) approach) can be used to develop a robust black box solver for large sparse non-Hermitian linear systems. Details of an implementation of the resulting QMR algorithm are presented. It is demonstrated that the QMR method is closely related to the biconjugate gradient (BCG) algorithm; however, unlike BCG, the QMR algorithm has smooth convergence curves and good numerical properties. We report numerical experiments with our implementation of the look-ahead Lanczos algorithm, both for eigenvalue problems and linear systems. Also, program listings of FORTRAN implementations of the look-ahead algorithm and the QMR method are included.
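
    SciPy ships a QMR solver (an independent implementation, without the look-ahead safeguard described here), which makes it easy to reproduce the smooth-convergence behavior on a toy unsymmetric system; this is not the authors' FORTRAN code.

      import numpy as np
      from scipy.sparse import eye, random as sparse_random
      from scipy.sparse.linalg import qmr

      rng = np.random.default_rng(2)
      # Well-conditioned random unsymmetric sparse test matrix.
      A = (sparse_random(200, 200, density=0.02, random_state=2)
           + 4.0 * eye(200)).tocsr()
      b = rng.normal(size=200)
      x, info = qmr(A, b)                 # info == 0 signals convergence
      print(info, np.linalg.norm(A @ x - b))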

  6. Implementations of back propagation algorithm in ecosystems applications

    NASA Astrophysics Data System (ADS)

    Ali, Khalda F.; Sulaiman, Riza; Elamir, Amir Mohamed

    2015-05-01

    Artificial Neural Networks (ANNs) have been applied to an increasing number of real-world problems of considerable complexity. Their most important advantage is in solving problems that are too complex for conventional technologies: problems that do not have an algorithmic solution, or whose algorithmic solution is too complex to be found. Abstracted from the biological brain, ANNs were developed from concepts that evolved from late-twentieth-century neurophysiological experiments on the cells of the human brain, to overcome the perceived inadequacies of conventional ecological data analysis methods. ANNs have gained increasing attention in ecosystems applications because of their capacity to detect patterns in data through non-linear relationships; this characteristic confers on them a superior predictive ability. In this research, ANNs are applied in an ecological system analysis. The neural networks use the well-known Back Propagation (BP) algorithm with the delta rule for adaptation of the system. The BP training algorithm is an effective analytical method for adaptation in ecosystems applications, mainly because of its capacity to detect patterns in data through non-linear relationships. The BP algorithm uses supervised learning, which means that we provide the algorithm with examples of the inputs and outputs we want the network to compute, and then the error is calculated. The idea of the back propagation algorithm is to reduce this error until the ANN learns the training data. The training begins with random weights, and the goal is to adjust them so that the error will be minimal. This research evaluated the use of artificial neural network (ANN) techniques in ecological system analysis and modeling. The experimental results from this research demonstrate that an artificial neural network system can be trained to act as an expert system.
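
    A minimal NumPy sketch of the supervised BP loop described above (one hidden layer, sigmoid units, delta-rule output error) on toy stand-in data; nothing here encodes the paper's ecological variables.

      import numpy as np

      rng = np.random.default_rng(3)
      X = rng.random((200, 4))                          # toy input patterns
      t = (X.sum(axis=1, keepdims=True) > 2).astype(float)   # toy targets

      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
      W1 = rng.normal(0.0, 0.5, (4, 8))                 # random initial weights
      W2 = rng.normal(0.0, 0.5, (8, 1))
      for _ in range(2000):
          h = sigmoid(X @ W1)                           # forward pass
          y = sigmoid(h @ W2)
          d2 = (y - t) * y * (1 - y)                    # output delta
          d1 = (d2 @ W2.T) * h * (1 - h)                # back-propagated delta
          W2 -= 0.5 * h.T @ d2 / len(X)                 # gradient descent steps
          W1 -= 0.5 * X.T @ d1 / len(X)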

  7. Holographic implementation of a learning machine based on a multicategory perceptron algorithm.

    PubMed

    Paek, E G; Wullert Ii, J R; Patel, J S

    1989-12-01

    An optical learning machine that has multicategory classification capability is demonstrated. The system exactly implements the single-layer perceptron algorithm and is fully parallel and analog. Experimental results on learning by examples, obtained from the system, are described.
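
    The algorithm named here is the standard winner-take-all, single-layer multicategory perceptron; a software reference version follows (the optics evaluates the same inner products in parallel).

      import numpy as np

      def perceptron_train(X, labels, n_classes, epochs=20):
          """On each misclassification, reinforce the correct class vector
          and penalize the winning wrong one (Kesler-style update)."""
          W = np.zeros((n_classes, X.shape[1]))
          for _ in range(epochs):
              for x, c in zip(X, labels):
                  pred = int(np.argmax(W @ x))
                  if pred != c:
                      W[c] += x
                      W[pred] -= x
          return W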

  8. Comparative study of fusion algorithms and implementation of new efficient solution

    NASA Astrophysics Data System (ADS)

    Besrour, Amine; Snoussi, Hichem; Siala, Mohamed; Abdelkefi, Fatma

    2014-05-01

    High Dynamic Range (HDR) imaging has been the subject of significant research over the past years, but the goal of acquiring cinema-quality HDR images of fast-moving scenes using an efficient merging algorithm has not yet been achieved. Many algorithms have been implemented and developed through the years; however, they do not handle all situations and lack the speed needed for fast HDR image reconstruction. In this paper, we present a full comparative analysis and study of the available fusion algorithms, and we implement our own algorithm, which is more optimized and faster than the existing ones. This merging algorithm is tied to our hardware solution, which allows us to obtain four pictures with different exposures.

  9. Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed

    NASA Technical Reports Server (NTRS)

    Tian, Ye; Song, Qi; Cattafesta, Louis

    2005-01-01

    This report summarizes the activities on "Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed." The work summarized consists primarily of two parts. The first part summarizes our previous work and the extensions to adaptive ID and control algorithms. The second part concentrates on the validation of adaptive algorithms by applying them to a vibration beam test bed. Extensions to flow control problems are discussed.

  10. A real-time implementation of an advanced sensor failure detection, isolation, and accommodation algorithm

    NASA Technical Reports Server (NTRS)

    Delaat, J. C.; Merrill, W. C.

    1983-01-01

    A sensor failure detection, isolation, and accommodation algorithm was developed which incorporates analytic sensor redundancy through software. This algorithm was implemented in a high level language on a microprocessor based controls computer. Parallel processing and state-of-the-art 16-bit microprocessors are used along with efficient programming practices to achieve real-time operation.

  11. A Fast Implementation of the ISODATA Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2005-01-01

    Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times.
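
    The kd-tree idea is easy to try with SciPy: build a tree once, then answer every nearest-center query in one call. The paper's filtering variant prunes candidate centers per kd-tree node, which is faster still; this sketch only illustrates the speedup over brute force.

      import numpy as np
      from scipy.spatial import cKDTree

      points = np.random.default_rng(4).random((100000, 2))
      centers = points[:8].copy()                  # current cluster centers

      # Brute force: one distance per (point, center) pair.
      brute = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)

      # A kd-tree over the centers answers all nearest-center queries at once.
      _, labels = cKDTree(centers).query(points)
      assert (labels == brute).all()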

  12. Implementing the Continued Fraction Algorithm on the Illiac IV.

    DTIC Science & Technology

    1980-01-01

    [OCR-garbled DTIC record; only fragments are recoverable: a report by Wunderlich (Mathematical Sciences Dept., Northern Illinois University, contract F49620-79-C-0199) on implementing the continued fraction algorithm on the ILLIAC IV, citing "On Computing Unitary Aliquot Sequences" (with R. K. Guy), Proceedings of the Tenth Manitoba Conference on Numerical Mathematics, 1979.]

  13. A Novel Implementation of Efficient Algorithms for Quantum Circuit Synthesis

    NASA Astrophysics Data System (ADS)

    Zeller, Luke

    In this project, we design and develop a computer program to effectively approximate arbitrary quantum gates using the discrete set of Clifford gates together with the T gate (π/8 gate). Employing recent results from Mosca et al. and from Giles and Selinger, we implement a decomposition scheme that outputs a sequence of Clifford, T, and T† gates that approximates the input to within a specified error range ɛ. Specifically, the given gate is first rounded to an element of Z[1/√2, i] with a precision determined by ɛ, and then exact synthesis is employed to produce the resulting gate. It is known that this procedure is optimal in approximating an arbitrary single-qubit gate. Our program, written in Matlab and Python, can complete both approximate and exact synthesis of single-qubit gates. It can be used to assist in the experimental implementation of an arbitrary fault-tolerant single-qubit gate, for which direct implementation is not feasible.

  14. Demonstration of a small programmable quantum computer with atomic qubits.

    PubMed

    Debnath, S; Linke, N M; Figgatt, C; Landsman, K A; Wright, K; Monroe, C

    2016-08-04

    Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels.

  15. Demonstration of a small programmable quantum computer with atomic qubits

    NASA Astrophysics Data System (ADS)

    Debnath, S.; Linke, N. M.; Figgatt, C.; Landsman, K. A.; Wright, K.; Monroe, C.

    2016-08-01

    Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels.
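
    Since the Deutsch-Jozsa run is central to both records above, a statevector sketch of the algorithm itself may help: with a phase oracle, all amplitude returns to |0...0> after the final Hadamards exactly when f is constant. This is the abstract circuit, not the compiled trapped-ion gate sequence.

      import numpy as np
      from itertools import product

      def deutsch_jozsa(f, n):
          """Decide whether f: {0,1}^n -> {0,1} is constant or balanced."""
          H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
          Hn = H
          for _ in range(n - 1):
              Hn = np.kron(Hn, H)                      # n-qubit Hadamard
          state = Hn @ np.eye(2 ** n)[0]               # H^n |0...0>
          phases = np.array([(-1.0) ** f(x) for x in product((0, 1), repeat=n)])
          state = Hn @ (phases * state)                # phase oracle, then H^n
          return "constant" if abs(state[0]) ** 2 > 0.5 else "balanced"

      print(deutsch_jozsa(lambda bits: 0, 3),          # constant
            deutsch_jozsa(lambda bits: bits[0], 3))    # balanced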

  16. Design and Implementation of VLSI Prime Factor Algorithm Processor.

    DTIC Science & Technology

    1987-12-01

    [OCR-garbled DTIC record; only list-of-figures fragments are recoverable: Figure 36: Carry Select Adder Blocking; Figure 37: ALU Adder Cell; ALU Logic Implementation.]

  17. High-performance spectral element algorithms and implementations.

    SciTech Connect

    Fischer, P. F.; Tufo, H. M.

    1999-08-28

    We describe the development and implementation of a spectral element code for multimillion-gridpoint simulations of incompressible flows in general two- and three-dimensional domains. Parallel performance is presented for up to 2048 nodes of the Intel ASCI-Red machine at Sandia.

  18. Motion Cueing Algorithm Development: New Motion Cueing Program Implementation and Tuning

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    A computer program has been developed for the purpose of driving the NASA Langley Research Center Visual Motion Simulator (VMS). This program includes two new motion cueing algorithms, the optimal algorithm and the nonlinear algorithm. A general description of the program is given along with a description and flowcharts for each cueing algorithm, and also descriptions and flowcharts for subroutines used with the algorithms. Common block variable listings and a program listing are also provided. The new cueing algorithms have a nonlinear gain algorithm implemented that scales each aircraft degree-of-freedom input with a third-order polynomial. A description of the nonlinear gain algorithm is given along with past tuning experience and procedures for tuning the gain coefficient sets for each degree-of-freedom to produce the desired piloted performance. This algorithm tuning will be needed when the nonlinear motion cueing algorithm is implemented on a new motion system in the Cockpit Motion Facility (CMF) at the NASA Langley Research Center.
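
    The nonlinear gain stage described above amounts to passing each degree-of-freedom input through a third-order polynomial; a one-function sketch with arbitrary placeholder coefficients, restricted here (as an assumption) to odd terms so the input's sign is preserved. The tuned per-axis coefficient sets are not reproduced.

      def nonlinear_gain(u, c1=0.6, c3=0.02):
          """Third-order polynomial scaling of one degree-of-freedom input."""
          return c1 * u + c3 * u ** 3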

  19. A Linac Simulation Code for Macro-Particles Tracking and Steering Algorithm Implementation

    SciTech Connect

    sun, yipeng

    2012-05-03

    In this paper, a linac simulation code written in Fortran90 is presented and several simulation examples are given. This code is optimized to implement linac alignment and steering algorithms and to evaluate accelerator errors such as RF phase and acceleration gradient errors, and quadrupole and BPM misalignment. It can track a single particle or a bunch of particles through normal linear accelerator elements such as quadrupoles, RF cavities, dipole correctors, and drift spaces. A one-to-one steering algorithm and a global alignment (steering) algorithm are implemented in this code.

  20. Design methodology for optimal hardware implementation of wavelet transform domain algorithms

    NASA Astrophysics Data System (ADS)

    Johnson-Bey, Charles; Mickens, Lisa P.

    2005-05-01

    The work presented in this paper lays the foundation for the development of an end-to-end system design methodology for implementing wavelet domain image/video processing algorithms in hardware using Xilinx field programmable gate arrays (FPGAs). With the integration of the Xilinx System Generator toolbox, this methodology will allow algorithm developers to design and implement their code using the familiar MATLAB/Simulink development environment. By using this methodology, algorithm developers will not be required to become proficient in the intricacies of hardware design, thus reducing the design cycle and time-to-market.

  1. FPGA implementation of vision algorithms for small autonomous robots

    NASA Astrophysics Data System (ADS)

    Anderson, J. D.; Lee, D. J.; Archibald, J. K.

    2005-10-01

    The use of on-board vision with small autonomous robots has been made possible by the advances in the field of Field Programmable Gate Array (FPGA) technology. By connecting a CMOS camera to an FPGA board, on-board vision has been used to reduce the computation time inherent in vision algorithms. The FPGA board allows the user to create custom hardware in a faster, safer, and more easily verifiable manner that decreases the computation time and allows the vision to be done in real-time. Real-time vision tasks for small autonomous robots include object tracking, obstacle detection and avoidance, and path planning. Competitions were created to demonstrate that our algorithms work with our small autonomous vehicles in dealing with these problems. These competitions include Mouse-Trapped-in-a-Box, where the robot has to detect the edges of a box that it is trapped in and move towards them without touching them; Obstacle Avoidance, where an obstacle is placed at any arbitrary point in front of the robot and the robot has to navigate itself around the obstacle; Canyon Following, where the robot has to move to the center of a canyon and follow the canyon walls trying to stay in the center; the Grand Challenge, where the robot had to navigate a hallway and return to its original position in a given amount of time; and Stereo Vision, where a separate robot had to catch tennis balls launched from an air powered cannon. Teams competed on each of these competitions that were designed for a graduate-level robotic vision class, and each team had to develop their own algorithm and hardware components. This paper discusses one team's approach to each of these problems.

  2. [Osteoarthrosis: implementation of current diagnostic and therapeutic algorithms].

    PubMed

    Meza-Reyes, Gilberto; Aldrete-Velasco, Jorge; Espinosa-Morales, Rolando; Torres-Roldán, Fernando; Díaz-Borjón, Alejandro; Robles-San Román, Manuel

    2017-01-01

    In the modern world, among the different clinical presentations of osteoarthritis, gonarthrosis and coxarthrosis exhibit the highest prevalence. In this paper, the characteristics of osteoarthritis and the different scales for assessment and classification of this pathology are presented, to provide an account of the current evidence on diagnostic and treatment algorithms for osteoarthritis, with emphasis on the knee and hip, as these are the most frequent sites. A rational procedure for monitoring patients with osteoarthritis based on characteristic symptoms and the severity of the condition is also laid out. Finally, reference is made to the therapeutic benefits of the recently introduced viscosupplementation with Hylan G-F 20.

  3. Implementation and evaluation of ILLIAC 4 algorithms for multispectral image processing

    NASA Technical Reports Server (NTRS)

    Swain, P. H.

    1974-01-01

    Data concerning a multidisciplinary and multi-organizational effort to implement multispectral data analysis algorithms on a revolutionary computer, the Illiac 4, are reported. The effectiveness and efficiency of implementing the digital multispectral data analysis techniques for producing useful land use classifications from satellite collected data were demonstrated.

  4. Implementation of an institution-wide acute stroke algorithm: Improving stroke quality metrics

    PubMed Central

    Zuckerman, Scott L.; Magarik, Jordan A.; Espaillat, Kiersten B.; Kumar, Nishant Ganesh; Bhatia, Ritwik; Dewan, Michael C.; Morone, Peter J.; Hermann, Lisa D.; O’Duffy, Anne E.; Riebau, Derek A.; Kirshner, Howard S.; Mocco, J.

    2016-01-01

    Background: In May 2012, an updated stroke algorithm was implemented at Vanderbilt University Medical Center. The current study objectives were to: (1) describe the process of implementing a new stroke algorithm and (2) compare pre- and post-algorithm quality improvement (QI) metrics, specifically door to computed tomography time (DTCT), door to neurology time (DTN), and door to tPA administration time (DTT). Methods: Our institutional stroke algorithm underwent extensive revision, with a focus on removing variability, streamlining care, and improving time delays. The updated stroke algorithm was implemented in May 2012. Three primary stroke QI metrics were evaluated over four separate 3-month time points, one pre- and three post-algorithm periods. Results: The following data points improved after algorithm implementation: average DTCT decreased from 39.9 to 12.8 min (P < 0.001); average DTN decreased from 34.1 to 8.2 min (P ≤ 0.001), and average DTT decreased from 62.5 to 43.5 min (P = 0.17). Conclusion: A new stroke protocol that prioritized neurointervention at our institution resulted in significant lowering of the DTCT and DTN, with a nonsignificant improvement in DTT. PMID:28144480

  5. Implementation of the SITAN algorithm in the digital terrain management and display system

    SciTech Connect

    Cambron, T.M.; Snyder, F.B.; Fellerhoff, J.R.

    1985-01-01

    This paper describes the functional methodologies and development processes used to integrate the SITAN autonomous navigation algorithm with the Digital Terrain Management and Display System (DTMDS) for real-time demonstrations aboard the AFTI/F-16 aircraft. Heretofore, the SITAN algorithm has not been implemented for real-time operation aboard a military aircraft. The paper describes the implementation design of the DTMDS and how the elevation data base supported by the digital map generator subsystem is made available to the SITAN algorithm. The effects of aircraft motion as related to SITAN algorithm timing and processor loading are evaluated. The closed-loop implementation with the AFTI aircraft inertial navigation system (INS) was specifically selected for the initial demonstration.

  6. A GPU-Based Implementation of the Firefly Algorithm for Variable Selection in Multivariate Calibration Problems

    PubMed Central

    de Paula, Lauro C. M.; Soares, Anderson S.; de Lima, Telma W.; Delbem, Alexandre C. B.; Coelho, Clarimar J.; Filho, Arlindo R. G.

    2014-01-01

    Several variable selection algorithms in multivariate calibration can be accelerated using Graphics Processing Units (GPU). Among these algorithms, the Firefly Algorithm (FA) is a recently proposed metaheuristic that may be used for variable selection. This paper presents a GPU-based FA (FA-MLR) with a multiobjective formulation for variable selection in multivariate calibration problems and compares it with some traditional sequential algorithms from the literature. The advantage of the proposed implementation is demonstrated in an example involving a relatively large number of variables. The results showed that FA-MLR, in comparison with the traditional algorithms, is a more suitable choice and a relevant contribution to the variable selection problem. Additionally, the results also demonstrated that FA-MLR performed on a GPU can be five times faster than its sequential implementation. PMID:25493625
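
    The abstract names the metaheuristic but not its mechanics. For orientation, below is a minimal NumPy sketch of the standard firefly update rule (attractiveness decaying with the squared distance between fireflies, plus a random-walk term), applied to variable selection by thresholding continuous positions into an inclusion mask. The function names, parameters, and toy objective are illustrative assumptions; the paper's multiobjective formulation and GPU parallelization are not reproduced here.

    ```python
    import numpy as np

    def firefly_minimize(objective, dim, n_fireflies=20, n_iter=100,
                         alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
        """Minimal firefly algorithm: each firefly moves toward every
        brighter (lower-cost) one with attractiveness beta0*exp(-gamma*r^2)."""
        rng = np.random.default_rng(seed)
        X = rng.random((n_fireflies, dim))        # positions in [0, 1]^dim
        f = np.array([objective(x) for x in X])
        for _ in range(n_iter):
            for i in range(n_fireflies):
                for j in range(n_fireflies):
                    if f[j] < f[i]:               # firefly j is brighter than i
                        r2 = np.sum((X[i] - X[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        X[i] = np.clip(X[i] + beta * (X[j] - X[i])
                                       + alpha * (rng.random(dim) - 0.5), 0, 1)
                        f[i] = objective(X[i])
        best = int(np.argmin(f))
        return X[best], f[best]

    # Toy variable-selection objective: least-squares residual plus size penalty.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 10))
    y = A[:, [1, 4, 7]] @ np.array([2.0, -1.0, 0.5])

    def cost(x):
        mask = x > 0.5                            # threshold position -> subset
        if not mask.any():
            return np.inf
        coef, *_ = np.linalg.lstsq(A[:, mask], y, rcond=None)
        return float(np.sum((y - A[:, mask] @ coef) ** 2)) + 0.1 * mask.sum()

    x_best, _ = firefly_minimize(cost, dim=10)
    print("selected variables:", np.flatnonzero(x_best > 0.5))
    ```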

  7. Design and FPGA implementation of real-time automatic image enhancement algorithm

    NASA Astrophysics Data System (ADS)

    Dong, GuoWei; Hou, ZuoXun; Tang, Qi; Pan, Zheng; Li, Xin

    2016-11-01

    In order to improve image processing quality and boost the processing rate, this paper proposes a real-time automatic image enhancement algorithm. It is based on the histogram equalization algorithm and the piecewise linear enhancement algorithm, and it determines the relationship between the histogram and the piecewise linear function by analyzing the histogram distribution for adaptive image enhancement. Furthermore, the corresponding FPGA processing modules are designed to implement the methods. In particular, high-performance parallel pipelined technology and the inherent parallel processing ability of the modules are emphasized to ensure the real-time processing capability of the complete system. Simulations and experiments show that the FPGA-based design and implementation of the algorithm has low hardware cost, high real-time performance, and good processing performance in different scenes. The algorithm can effectively improve image quality and has wide prospects in the image processing field.
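
    The two building blocks named in the abstract are standard. A software sketch of them in NumPy (not FPGA logic) might look like the following; the `auto_enhance` selection rule is an illustrative stand-in, since the paper's actual histogram analysis is not specified.

    ```python
    import numpy as np

    def equalize_hist(img):
        """Global histogram equalization for an 8-bit grayscale image."""
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = np.cumsum(hist).astype(np.float64)
        cdf_min = cdf[cdf > 0].min()
        lut = np.round(255.0 * (cdf - cdf_min) / max(cdf[-1] - cdf_min, 1.0))
        return np.clip(lut, 0, 255).astype(np.uint8)[img]

    def piecewise_stretch(img, low_pct=2, high_pct=98):
        """Piecewise-linear stretch: clip histogram tails, map [lo, hi] -> [0, 255]."""
        lo, hi = np.percentile(img, [low_pct, high_pct])
        out = (img.astype(np.float64) - lo) / max(hi - lo, 1.0) * 255.0
        return np.clip(out, 0, 255).astype(np.uint8)

    def auto_enhance(img, spread_threshold=0.5):
        """Choose a method from a crude histogram-spread statistic: stretch
        images that already use much of the range, equalize the rest."""
        spread = (int(img.max()) - int(img.min())) / 255.0
        return piecewise_stretch(img) if spread > spread_threshold else equalize_hist(img)

    # Low-contrast ramp: equalization spreads its values over the full range.
    img = (np.arange(64, dtype=np.uint8).reshape(8, 8) // 2) + 100
    out = auto_enhance(img)
    print(out.min(), out.max())   # -> 0 255
    ```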

  8. On Distribution Reduction and Algorithm Implementation in Inconsistent Ordered Information Systems

    PubMed Central

    Zhang, Yanqin

    2014-01-01

    As one part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation-based information systems. A matrix algorithm for distribution reduction acquisition is presented step by step, and a program implementing the algorithm is developed. The approach provides an effective tool for theoretical research and for practical applications of ordered information systems. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, showing the effectiveness of the algorithm in complicated information systems. PMID:25258721

  9. On distribution reduction and algorithm implementation in inconsistent ordered information systems.

    PubMed

    Zhang, Yanqin

    2014-01-01

    As one part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation-based information systems. A matrix algorithm for distribution reduction acquisition is presented step by step, and a program implementing the algorithm is developed. The approach provides an effective tool for theoretical research and for practical applications of ordered information systems. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, showing the effectiveness of the algorithm in complicated information systems.

  10. Implementation of a partitioned algorithm for simulation of large CSI problems

    NASA Technical Reports Server (NTRS)

    Alvin, Kenneth F.; Park, K. C.

    1991-01-01

    The implementation of a partitioned numerical algorithm for determining the dynamic response of coupled structure/controller/estimator finite-dimensional systems is reviewed. The partitioned approach leads to a set of coupled first and second-order linear differential equations which are numerically integrated with extrapolation and implicit step methods. The present software implementation, ACSIS, utilizes parallel processing techniques at various levels to optimize performance on a shared-memory concurrent/vector processing system. A general procedure for the design of controller and filter gains is also implemented, which utilizes the vibration characteristics of the structure to be solved. Also presented are: example problems; a user's guide to the software; the procedures and algorithm scripts; a stability analysis for the algorithm; and the source code for the parallel implementation.

  11. Implementation of jump-diffusion algorithms for understanding FLIR scenes

    NASA Astrophysics Data System (ADS)

    Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.

    1995-07-01

    Our pattern-theoretic approach to the automated understanding of forward-looking infrared (FLIR) images brings the traditionally separate endeavors of detection, tracking, and recognition together into a unified jump-diffusion process. New objects are detected and object types are recognized through discrete jump moves. Between jumps, the location and orientation of objects are estimated via continuous diffusions. A hypothesized scene, simulated from the emissive characteristics of the hypothesized scene elements, is compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. The jump-diffusion process empirically generates the posterior distribution. Both the diffusion and jump operations involve the simulation of a scene produced by a hypothesized configuration. Scene simulation is most effectively accomplished by pipelined rendering engines such as those from Silicon Graphics. We demonstrate the execution of our algorithm on a Silicon Graphics Onyx/RealityEngine.

  12. Implementation of a nonlinear concrete cracking algorithm in NASTRAN

    NASA Technical Reports Server (NTRS)

    Herting, D. N.; Herendeen, D. L.; Hoesly, R. L.; Chang, H.

    1976-01-01

    A computer code for the analysis of reinforced concrete structures was developed using NASTRAN as a basis. Nonlinear iteration procedures were developed for obtaining solutions with a wide variety of loading sequences. A direct-access file system was used to save results at each load step so that the solution module could be restarted for further analysis. A multi-nested looping capability was implemented to control the iterations and change the loads. The basis for the analysis is a set of multi-layer plate elements which allow local definition of materials and cracking properties.

  13. Implementing embedded artificial intelligence rules within algorithmic programming languages

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan

    1988-01-01

    Most integrations of artificial intelligence (AI) capabilities with non-AI (usually FORTRAN-based) application programs require the latter to run as a subprogram or, at best, as a coroutine, of the AI system. In many cases, this organization is unacceptable; instead, the requirement is for an AI facility that runs in embedded mode, i.e., is called as a subprogram by the application program. The design and implementation of a Prolog-based AI capability that can be invoked in embedded mode are described. The significance of this system is twofold: Provision of Prolog-based symbol-manipulation and deduction facilities makes a powerful symbolic reasoning mechanism available to application programs written in non-AI languages. The power of the deductive and non-procedural descriptive capabilities of Prolog, which allow the user to describe the problem to be solved rather than the solution, is to a large extent vitiated by the absence of the standard control structures provided by other languages. Embedding invocations of Prolog rule bases in programs written in non-AI languages makes it possible to put Prolog calls inside DO loops and similar control constructs. The resulting merger of non-AI and AI languages thus results in a symbiotic system in which the advantages of both programming systems are retained, and their deficiencies largely remedied.

  14. Clinical implementation of a neonatal seizure detection algorithm.

    PubMed

    Temko, Andriy; Marnane, William; Boylan, Geraldine; Lightbody, Gordon

    2015-02-01

    Technologies for automated detection of neonatal seizures are gradually moving towards cot-side implementation. The aim of this paper is to present different ways to visualize the output of a neonatal seizure detection system and analyse their influence on performance in a clinical environment. Three different ways to visualize the detector output are considered: a binary output, a probabilistic trace, and a spatio-temporal colormap of seizure observability. As an alternative to visual aids, audified neonatal EEG is also considered. Additionally, a survey on the usefulness and accuracy of the presented methods has been performed among clinical personnel. The main advantages and disadvantages of the presented methods are discussed. The connection between information visualization and different methods to compute conventional metrics is established. The results of the visualization methods along with the system validation results indicate that the developed neonatal seizure detector with its current level of performance would unambiguously be of benefit to clinicians as a decision support system. The results of the survey suggest that a suitable way to visualize the output of neonatal seizure detection systems in a clinical environment is a combination of a binary output and a probabilistic trace. The main healthcare benefits of the tool are outlined. The decision support system with the chosen visualization interface is currently undergoing pre-market European multi-centre clinical investigation to support its regulatory approval and clinical adoption.

  15. Quantum computation: algorithms and implementation in quantum dot devices

    NASA Astrophysics Data System (ADS)

    Gamble, John King

    In this thesis, we explore several aspects of both the software and hardware of quantum computation. First, we examine the computational power of multi-particle quantum random walks in terms of distinguishing mathematical graphs. We study both interacting and non-interacting multi-particle walks on strongly regular graphs, proving some limitations on distinguishing powers and presenting extensive numerical evidence indicating that interactions provide more distinguishing power. We then study the recently proposed adiabatic quantum algorithm for Google PageRank, and show that it exhibits power-law scaling for realistic WWW-like graphs. Turning to hardware, we next analyze the thermal physics of two nearby two-dimensional electron gases (2DEGs), and show that an analogue of the Coulomb drag effect exists for heat transfer. At certain distances and temperatures, this heat transfer is more significant than phonon dissipation channels. After that, we study the dephasing of two-electron states in a single silicon quantum dot. Specifically, we consider dephasing due to the electron-phonon coupling and charge noise, separately treating orbital and valley excitations. In an ideal system, dephasing due to charge noise is strongly suppressed due to a vanishing dipole moment. However, introduction of disorder or anharmonicity leads to large effective dipole moments, and hence possibly strong dephasing. Building on this work, we next consider more realistic, structurally disordered systems. We present experiment and theory, which demonstrate energy levels that vary with quantum dot translation, implying a structurally disordered system. Finally, we turn to the issues of valley mixing and valley-orbit hybridization, which occur due to atomic-scale disorder at quantum well interfaces. We develop a new theoretical approach to study these effects, which we name the disorder-expansion technique. We demonstrate that this method successfully reproduces atomistic tight-binding techniques

  16. An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices, part 1

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Gutknecht, Martin H.; Nachtigal, Noel M.

    1990-01-01

    The nonsymmetric Lanczos method can be used to compute eigenvalues of large sparse non-Hermitian matrices or to solve large sparse non-Hermitian linear systems. However, the original Lanczos algorithm is susceptible to possible breakdowns and potential instabilities. We present an implementation of a look-ahead version of the Lanczos algorithm which overcomes these problems by skipping over those steps in which a breakdown or near-breakdown would occur in the standard process. The proposed algorithm can handle look-ahead steps of any length and is not restricted to steps of length 2, as earlier implementations are. Also, our implementation has the feature that it requires roughly the same number of inner products as the standard Lanczos process without look-ahead.
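
    For orientation, the following NumPy sketch shows the standard two-sided Lanczos process and the w^T v ~ 0 test at which a (near-)breakdown occurs; the look-ahead steps the paper adds to jump over such steps are not implemented here, and all names are illustrative.

    ```python
    import numpy as np

    def two_sided_lanczos(A, b, c, m, tol=1e-12):
        """Standard two-sided Lanczos (no look-ahead) for non-Hermitian A.
        Builds biorthogonal bases V, W with w_i^T v_j = delta_ij and raises on
        the (near-)breakdown w^T v ~ 0 that look-ahead is designed to skip."""
        n = A.shape[0]
        V, W = [], []
        v = b / np.linalg.norm(b)
        w = c / np.linalg.norm(c)
        d = w @ v
        if abs(d) < tol:
            raise RuntimeError("breakdown: initial w^T v ~ 0")
        w = w / d                                # enforce w^T v = 1
        v_prev = w_prev = np.zeros(n)
        beta = delta = 0.0
        for j in range(m):
            V.append(v); W.append(w)
            if j == m - 1:
                break                            # bases complete
            alpha = w @ (A @ v)
            v_hat = A @ v - alpha * v - beta * v_prev
            w_hat = A.T @ w - alpha * w - delta * w_prev
            d = w_hat @ v_hat
            if abs(d) < tol * max(np.linalg.norm(v_hat) * np.linalg.norm(w_hat), 1e-300):
                raise RuntimeError(f"(near-)breakdown at step {j}: w^T v ~ 0")
            delta = np.linalg.norm(v_hat)        # scale factor for v
            beta = d / delta                     # scale factor keeping w^T v = 1
            v_prev, w_prev = v, w
            v, w = v_hat / delta, w_hat / beta
        return np.column_stack(V), np.column_stack(W)

    A = np.array([[2.0, 1.0, 1.0], [1.0, 3.0, 0.0], [0.0, 1.0, 4.0]])
    V, W = two_sided_lanczos(A, np.array([1.0, 0.0, 0.0]),
                             np.array([1.0, 0.0, 0.0]), m=2)
    print(np.round(W.T @ V, 10))   # ~ identity: biorthogonality check
    ```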

  17. Implementation of (omega)-k synthetic aperture radar imaging algorithm on a massively parallel supercomputer

    NASA Astrophysics Data System (ADS)

    Yerkes, Christopher R.; Webster, Eric D.

    1994-06-01

    Advanced algorithms for synthetic aperture radar (SAR) imaging have in the past required computing capabilities only available from high performance special purpose hardware. Such architectures have tended to have short life cycles with respect to development expense. Current generation Massively Parallel Processors (MPP) are offering high performance capabilities necessary for such applications with both a scalable architecture and a longer projected life cycle. In this paper we explore issues associated with implementation of a SAR imaging algorithm on a mesh configured MPP architecture.

  18. Implementation of Lamarckian concepts in a Genetic Algorithm for structure solution from powder diffraction data

    NASA Astrophysics Data System (ADS)

    Turner, Giles W.; Tedesco, Emilio; Harris, Kenneth D. M.; Johnston, Roy L.; Kariuki, Benson M.

    2000-04-01

    Previous implementations of Genetic Algorithms in direct-space strategies for structure solution from powder diffraction data have employed the operations of mating, mutation and natural selection, with the fitness of each structure based on comparison between calculated and experimental powder diffraction patterns (we define fitness as a function of weighted-profile R-factor Rwp). We report an extension to this method, in which each structure generated in the Genetic Algorithm is subjected to local minimization of Rwp with respect to structural variables. This approach represents an implementation of Lamarckian concepts of evolution, and is found to give significant improvements in efficiency and reliability.
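
    The Lamarckian element is that the locally refined parameters, not just their fitness, are inherited. A toy NumPy/SciPy sketch of that scheme, with a generic cost function standing in for the weighted-profile R-factor Rwp and all operators chosen for illustration, is:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def lamarckian_ga(cost, dim, pop_size=30, n_gen=40, sigma=0.3, seed=0):
        """Toy Lamarckian GA: every child is locally relaxed and the *refined*
        genome (not just its fitness) is written back into the population,
        mirroring the paper's local minimization of Rwp for each structure."""
        rng = np.random.default_rng(seed)
        pop = rng.random((pop_size, dim))
        fit = np.array([cost(x) for x in pop])

        def tournament():
            i, j = rng.integers(pop_size, size=2)
            return pop[i] if fit[i] < fit[j] else pop[j]

        for _ in range(n_gen):
            child = 0.5 * (tournament() + tournament())       # mating
            child += sigma * rng.standard_normal(dim)         # mutation
            # Lamarckian step: local minimization; refined genes are inherited
            res = minimize(cost, child, method="Nelder-Mead",
                           options={"maxiter": 100, "xatol": 1e-4})
            worst = int(np.argmax(fit))                       # natural selection
            if res.fun < fit[worst]:
                pop[worst], fit[worst] = res.x, res.fun
        best = int(np.argmin(fit))
        return pop[best], fit[best]

    # Usage with a placeholder cost in place of Rwp:
    x, f = lamarckian_ga(lambda v: float(np.sum((v - 0.7) ** 2)), dim=4)
    print(x, f)
    ```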

  19. Implementation of a robust 2400 b/s LPC algorithm for operation in noisy environments

    NASA Astrophysics Data System (ADS)

    Singer, Elliot; Tierney, Joseph

    1987-04-01

    A detailed description of the implementation of a robust 2400 b/s LPC algorithm is presented. The algorithm was developed to improve vocoder performance in acoustically compromised environments. Improved robustness in noise is achieved by: (1) increasing the speech bandwidth to 5 kHz; (2) increasing the LPC model order to 12; and (3) doubling the analysis rate. Frame fill techniques are used to achieve the 2400 b/s data rate. The algorithm is embodied in the Advanced Linear Predictive Coding Microprocessor which was developed as a prototype voice processor for in-flight evaluation of narrowband voice communication in the JTIDS communication system.

  20. Implementation of low communication frequency 3D FFT algorithm for ultra-large-scale micromagnetics simulation

    NASA Astrophysics Data System (ADS)

    Tsukahara, Hiroshi; Iwano, Kaoru; Mitsumata, Chiharu; Ishikawa, Tadashi; Ono, Kanta

    2016-10-01

    We implement low-communication-frequency three-dimensional fast Fourier transform algorithms in a micromagnetics simulator for the calculation of the magnetostatic field, which occupies a significant portion of large-scale micromagnetics simulations. This fast Fourier transform algorithm reduces the number of all-to-all communications from six to two per transform. Simulation times with our simulator show high scalability in parallelization, even when the micromagnetics simulation is performed using 32,768 physical computing cores. This low-communication-frequency fast Fourier transform algorithm enables world-largest-class micromagnetics simulations with over one billion calculation cells to be carried out.

  1. Experimental implementation of Hogg's algorithm on a three-quantum-bit NMR quantum computer

    NASA Astrophysics Data System (ADS)

    Peng, Xinhua; Zhu, Xiwen; Fang, Ximing; Feng, Mang; Liu, Maili; Gao, Kelin

    2002-04-01

    Using nuclear magnetic resonance (NMR) techniques with a three-qubit sample, we have experimentally implemented the highly structured algorithm proposed by Hogg for the satisfiability problem with one variable in each clause. A simplified temporal averaging procedure was employed to prepare the three-qubit pseudopure state. The algorithm was completed with only a single evaluation of the structure of the problem, and the solutions were found theoretically with probability 100%, results that outperform both unstructured quantum search and the best classical search algorithms. The experimental fidelities were about 90%, with the loss attributed to imperfections in the manipulations.

  2. Opening the Black Box: Strategies for Increased User Involvement in Existing Algorithm Implementations.

    PubMed

    Mühlbacher, Thomas; Piringer, Harald; Gratzl, Samuel; Sedlmair, Michael; Streit, Marc

    2014-12-01

    An increasing number of interactive visualization tools stress the integration with computational software like MATLAB and R to access a variety of proven algorithms. In many cases, however, the algorithms are used as black boxes that run to completion in isolation which contradicts the needs of interactive data exploration. This paper structures, formalizes, and discusses possibilities to enable user involvement in ongoing computations. Based on a structured characterization of needs regarding intermediate feedback and control, the main contribution is a formalization and comparison of strategies for achieving user involvement for algorithms with different characteristics. In the context of integration, we describe considerations for implementing these strategies either as part of the visualization tool or as part of the algorithm, and we identify requirements and guidelines for the design of algorithmic APIs. To assess the practical applicability, we provide a survey of frequently used algorithm implementations within R regarding the fulfillment of these guidelines. While echoing previous calls for analysis modules which support data exploration more directly, we conclude that a range of pragmatic options for enabling user involvement in ongoing computations exists on both the visualization and algorithm side and should be used.

  3. Linear array implementation of the EM algorithm for PET image reconstruction

    SciTech Connect

    Rajan, K.; Patnaik, L.M.; Ramakrishna, J.

    1995-08-01

    The PET image reconstruction based on the EM algorithm has several attractive advantages over the conventional convolution backprojection algorithms. However, PET image reconstruction based on the EM algorithm is computationally burdensome for today's single-processor systems. In addition, a large memory is required for the storage of the image, the projection data, and the probability matrix. Since the computations are easily divided into tasks executable in parallel, multiprocessor configurations are the ideal choice for fast execution of the EM algorithms. In this study, the authors attempt to overcome these two problems by parallelizing the EM algorithm on a multiprocessor system. The parallel EM algorithm has been implemented on a linear array topology using commercially available fast floating-point digital signal processor (DSP) chips as the processing elements (PEs). The performance of the EM algorithm on a 386/387 machine, an IBM 6000 RISC workstation, and the linear array system is discussed and compared. The results show that the computational speed of a linear array using 8 DSP chips as PEs executing the EM image reconstruction algorithm is about 15.5 times better than that of the IBM 6000 RISC workstation. The novelty of the scheme is its simplicity. The linear array topology is expandable to a larger number of PEs. The architecture is not dependent on the DSP chip chosen, and the substitution of the latest DSP chip is straightforward and could yield better speed performance.
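
    The serial core of the EM (MLEM) reconstruction being parallelized is a short multiplicative update. A NumPy sketch under assumed names, with a dense rather than sparse system matrix and no decomposition across PEs, is:

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50, eps=1e-12):
        """Basic EM/MLEM image update for emission tomography:
            x <- x / sens * A^T ( y / (A x) )
        A is the (n_detectors x n_pixels) probability matrix, y the counts.
        The paper distributes exactly these matrix-vector products across
        DSP processing elements; here they are plain NumPy calls."""
        x = np.ones(A.shape[1])                 # uniform initial image
        sens = A.sum(axis=0) + eps              # A^T 1, detection sensitivity
        for _ in range(n_iter):
            proj = A @ x + eps                  # forward projection
            x *= (A.T @ (y / proj)) / sens      # multiplicative EM update
        return x

    # Tiny demo: 2-pixel image, 3 detector bins, noiseless counts.
    A = np.array([[0.8, 0.2], [0.5, 0.5], [0.1, 0.9]])
    true_x = np.array([4.0, 1.0])
    print(mlem(A, A @ true_x, n_iter=200))      # converges toward [4, 1]
    ```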

  4. Implementation of a new segmentation algorithm using the Eye-RIS CMOS vision system

    NASA Astrophysics Data System (ADS)

    Karabiber, Fethullah; Arena, Paolo; De Fiore, Sebastiano; Vagliasindi, Guido; Fortuna, Luigi; Arik, Sabri

    2009-05-01

    Segmentation is the process of partitioning a digital image into multiple meaningful regions. Since such applications require substantial computational power in real time, we have implemented a new segmentation algorithm using the capabilities of the Eye-RIS Vision System to execute the algorithm in a very short time. The segmentation algorithm is implemented in three main steps. In the first step, which is the pre-processing step, the images are acquired and noise filtering through a Gaussian function is performed. In the second step, a Sobel-operator-based edge detection approach is implemented on the system. In the last step, morphologic and logic operations are used to segment the images as post-processing. The experimental results obtained for different images show the accuracy of the proposed segmentation algorithm. Visual inspection and timing analysis (7.83 ms, 127 frames/s) prove that the proposed segmentation algorithm can be executed for real-time video processing applications. These results also prove the capability of the Eye-RIS Vision System for real-time image processing applications.

  5. Electromagnetic interactions GEneRalized (EIGER): algorithm abstraction and HPC implementation

    SciTech Connect

    Sharpe, R.M., LLNL

    1998-04-21

    Modern software development methods combined with key generalizations of standard computational algorithms enable the development of a new class of electromagnetic modeling tools. This paper describes current and anticipated capabilities of a frequency domain modeling code, EIGER, which has an extremely wide range of applicability. In addition, software implementation methods and high performance computing issues are discussed.

  6. Implementation of an Evidence-Based Seizure Algorithm in Intellectual Disability Nursing: A Pilot Study

    ERIC Educational Resources Information Center

    Auberry, Kathy; Cullen, Deborah

    2016-01-01

    Based on the results of the Surrogate Decision-Making Self Efficacy Scale (Lopez, 2009a), this study sought to determine whether nurses working in the field of intellectual disability (ID) experience increased confidence when they implemented the American Association of Neuroscience Nurses (AANN) Seizure Algorithm during telephone triage. The…

  7. Electromagnetic Interactions GEneRalized (EIGER): Algorithm abstraction and HPC implementation

    SciTech Connect

    Sharpe, R.M.; Grant, J.B.; Champagne, N.J.; Wilton, D.R.; Jackson, D.R.; Johnson, W.A.; Jorgensen, R.E.; Rockway, J.W.; Manry, C.W.

    1998-06-01

    Modern software development methods combined with key generalizations of standard computational algorithms enable the development of a new class of electromagnetic modeling tools. This paper describes current and anticipated capabilities of a frequency domain modeling code, EIGER, which has an extremely wide range of applicability. In addition, software implementation methods and high performance computing issues are discussed.

  8. Implementation and evaluation of the new wind algorithm in NASA's 50 MHz doppler radar wind profiler

    NASA Technical Reports Server (NTRS)

    Taylor, Gregory E.; Manobianco, John T.; Schumann, Robin S.; Wheeler, Mark M.; Yersavich, Ann M.

    1993-01-01

    The purpose of this report is to document the Applied Meteorology Unit's implementation and evaluation of the wind algorithm developed by Marshall Space Flight Center (MSFC) on the data analysis processor (DAP) of NASA's 50 MHz doppler radar wind profiler (DRWP). The report also includes a summary of the 50 MHz DRWP characteristics and performance and a proposed concept of operations for the DRWP.

  9. Searching the short-period variable stars with the photometric algorithm implemented in LUIZA framework

    NASA Astrophysics Data System (ADS)

    Obara, Lukasz; Żarnecki, Aleksander Filip

    2015-09-01

    Pi of the Sky is a system of wide-field-of-view robotic telescopes, which search for short-timescale astrophysical phenomena, especially for prompt optical GRB emission. The system was designed for autonomous operation, monitoring a large fraction of the sky in the 12-13 magnitude range with a time resolution of the order of 1-100 seconds. LUIZA is a dedicated framework, implemented in C++, developed for efficient off-line processing of the Pi of the Sky data. A photometric algorithm based on ASAS photometry was implemented in LUIZA and compared with an algorithm based on pixel cluster reconstruction and with a simple aperture photometry algorithm. The optimized photometry algorithms were then applied to a sample of test images, which were modified to include different patterns of variability of the stars (the training sample). Different statistical estimators are considered for developing the general variable star identification algorithm. The algorithm will then be used to search for short-period variable stars in the real data.

  10. Evaluation of mass spectral library search algorithms implemented in commercial software.

    PubMed

    Samokhin, Andrey; Sotnezova, Ksenia; Lashin, Vitaly; Revelsky, Igor

    2015-06-01

    The performance of several library search algorithms (against EI mass spectral databases) implemented in commercial software products (acd/specdb, chemstation, gc/ms solution and ms search) was estimated. The test set contained 1000 mass spectra, which were randomly selected from the NIST'08 (RepLib) mass spectral database. It was shown that the composite (also known as identity) algorithm implemented in the ms search (NIST) software gives statistically the best results: the correct compound occupied the first position in the list of possible candidates in 81% of cases, and was within the list of top ten candidates in 98% of cases. It was found that use of the presearch option can lead to rejection of the correct answer from the list of possible candidates (therefore the presearch option should not be used, if possible). The overall performance of the library search algorithms was estimated using receiver operating characteristic curves.
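
    As a point of reference for how such searches score candidates, here is a small sketch of a conventional weighted dot-product spectral match producing a ranked hit list. The commercial algorithms evaluated in the paper (e.g. the NIST composite/identity score) include additional terms, so this is only an assumed baseline, and all names are illustrative.

    ```python
    import numpy as np

    def dot_product_search(query, library):
        """Rank library spectra against a query by a weighted dot product.
        `query` is a dict {m/z: intensity}; `library` maps compound names
        to such dicts. Returns (name, score) pairs sorted best-first."""
        def vec(spec, mz_axis):
            v = np.array([spec.get(mz, 0.0) for mz in mz_axis])
            # m/z * sqrt(intensity) weighting emphasizes heavier fragments
            v = np.array(mz_axis, dtype=float) * np.sqrt(v)
            n = np.linalg.norm(v)
            return v / n if n else v

        scores = []
        for name, spec in library.items():
            mz_axis = sorted(set(query) | set(spec))
            scores.append((name, float(vec(query, mz_axis) @ vec(spec, mz_axis))))
        return sorted(scores, key=lambda t: -t[1])

    # Toy hit list for a 3-peak query spectrum:
    lib = {"toluene": {91: 100, 92: 60, 65: 10}, "benzene": {78: 100, 77: 20}}
    print(dot_product_search({91: 100, 92: 55, 65: 12}, lib))
    ```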

  11. Towards the Implementation of an Autonomous Camera Algorithm on the da Vinci Platform.

    PubMed

    Eslamian, Shahab; Reisner, Luke A; King, Brady W; Pandya, Abhilash K

    2016-01-01

    Camera positioning is critical for all telerobotic surgical systems. Inadequate visualization of the remote site can lead to serious errors that can jeopardize the patient. An autonomous camera algorithm has been developed on a medical robot (da Vinci) simulator. It is found to be robust in key scenarios of operation. This system behaves with predictable and expected actions for the camera arm with respect to the tool positions. The implementation of this system is described herein. The simulation closely models the methodology needed to implement autonomous camera control in a real hardware system. The camera control algorithm follows three rules: (1) keep the view centered on the tools, (2) keep the zoom level optimized such that the tools never leave the field of view, and (3) avoid unnecessary movement of the camera that may distract/disorient the surgeon. Our future work will apply this algorithm to the real da Vinci hardware.
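
    The three rules lend themselves to a compact geometric sketch. The following NumPy toy (hypothetical names and parameters, grossly simplified geometry, no da Vinci kinematics) illustrates one plausible reading of them:

    ```python
    import numpy as np

    def camera_target(tool_a, tool_b, cam_pos, fov_deg=60.0, deadband=0.01):
        """Toy version of the paper's three rules: (1) aim at the tool midpoint,
        (2) choose a stand-off distance so both tools fit the field of view,
        (3) suppress motions below a dead-band so the camera does not jitter."""
        midpoint = 0.5 * (tool_a + tool_b)               # rule 1: center on tools
        half_fov = np.radians(fov_deg) / 2.0
        separation = np.linalg.norm(tool_a - tool_b)
        # rule 2: distance at which the tool separation fits the FOV with margin
        distance = 1.25 * (separation / 2.0) / np.tan(half_fov)
        view_dir = midpoint - cam_pos
        norm = np.linalg.norm(view_dir)
        if norm < 1e-9:                                  # degenerate geometry
            return cam_pos
        target = midpoint - distance * (view_dir / norm) # back off along view axis
        # rule 3: ignore commands smaller than the dead-band
        return cam_pos if np.linalg.norm(target - cam_pos) < deadband else target

    # Example: tools 4 cm apart, camera starting 10 cm away (units in meters).
    print(camera_target(np.array([0.02, 0.0, 0.0]), np.array([-0.02, 0.0, 0.0]),
                        np.array([0.0, 0.0, -0.10])))
    ```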

  12. Comprehensive evaluation and clinical implementation of commercially available Monte Carlo dose calculation algorithm.

    PubMed

    Zhang, Aizhen; Wen, Ning; Nurushev, Teamour; Burmeister, Jay; Chetty, Indrin J

    2013-03-04

    A commercial electron Monte Carlo (eMC) dose calculation algorithm has become available in the Eclipse treatment planning system. The purpose of this work was to evaluate the eMC algorithm and investigate the clinical implementation of this system. The beam modeling of the eMC algorithm was performed for beam energies of 6, 9, 12, 16, and 20 MeV for a Varian Trilogy and all available applicator sizes in the Eclipse treatment planning system. The accuracy of the eMC algorithm was evaluated in a homogeneous water phantom, solid water phantoms containing lung and bone materials, and an anthropomorphic phantom. In addition, dose calculation accuracy was compared between the pencil beam (PB) and eMC algorithms in the same treatment planning system for heterogeneous phantoms. The overall agreement between eMC calculations and measurements was within 3%/2 mm, while the PB algorithm had large errors (up to 25%) in predicting dose distributions in the presence of inhomogeneities such as bone and lung. The clinical implementation of the eMC algorithm was investigated by performing treatment planning for 15 patients with lesions in the head and neck, breast, chest wall, and sternum. The dose distributions were calculated using the PB and eMC algorithms with no smoothing and with all three levels of 3D Gaussian smoothing for comparison. Based on a routine electron beam therapy prescription method, the number of eMC-calculated monitor units (MUs) was found to increase with increased 3D Gaussian smoothing levels. 3D Gaussian smoothing greatly improved the visual usability of dose distributions and produced better target coverage. Differences in calculated MUs and dose distributions between the eMC and PB algorithms could be significant when oblique beam incidence, surface irregularities, and heterogeneous tissues were present in the treatment plans. In our patient cases, monitor unit differences of up to 7% were observed between the PB and eMC algorithms. Monitor unit calculations were also performed

  13. A complete implementation of the conjugate gradient algorithm on a reconfigurable supercomputer

    SciTech Connect

    Dubois, David H; Dubois, Andrew J; Connor, Carolyn M; Boorman, Thomas M; Poole, Stephen W

    2008-01-01

    The conjugate gradient is a prominent iterative method for solving systems of sparse linear equations. Large-scale scientific applications often utilize a conjugate gradient solver at their computational core. In this paper we present a field programmable gate array (FPGA) based implementation of a double-precision, non-preconditioned, conjugate gradient solver for finite-element or finite-difference methods. Our work utilizes the SRC Computers, Inc. MAPStation hardware platform along with the 'Carte' software programming environment to ease the programming workload when working with the hybrid (CPU/FPGA) environment. The implementation is designed to handle large sparse matrices of up to order N x N where N <= 116,394, with up to 7 non-zero, 64-bit elements per sparse row. This implementation utilizes an optimized sparse matrix-vector multiply operation which is critical for obtaining high performance. Direct parallel implementations of loop unrolling and loop fusion are utilized to extract performance from the various vector/matrix operations. Rather than utilizing the FPGA devices as function off-load accelerators, our implementation uses the FPGAs to implement the core conjugate gradient algorithm. Measured run-time performance data is presented comparing the FPGA implementation to a software-only version, showing that the FPGA can outperform processors running at up to 30x the clock rate. In conclusion we take a look at the new SRC-7 system and estimate the performance of this algorithm on that architecture.
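
    The algorithm moved into FPGA logic is the textbook non-preconditioned conjugate gradient. As a software reference point, a minimal NumPy sketch follows; the sparse matrix-vector product (`A @ p`) is the kernel the paper's implementation optimizes, and everything else is short vector updates.

    ```python
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
        """Textbook non-preconditioned CG for a symmetric positive-definite A."""
        n = len(b)
        max_iter = max_iter or n
        x = np.zeros(n)
        r = b.copy()                  # residual r = b - A x, with x = 0
        p = r.copy()                  # initial search direction
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p                # the matrix-vector kernel
            alpha = rs / (p @ Ap)     # step length along p
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p # conjugate direction update
            rs = rs_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])              # SPD test matrix
    print(conjugate_gradient(A, np.array([1.0, 2.0])))  # ~ [0.0909, 0.6364]
    ```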

  14. Quantum algorithm for universal implementation of the projective measurement of energy.

    PubMed

    Nakayama, Shojun; Soeda, Akihito; Murao, Mio

    2015-05-15

    A projective measurement of energy (PME) on a quantum system is a quantum measurement determined by the Hamiltonian of the system. PME protocols exist when the Hamiltonian is given in advance. Unknown Hamiltonians can be identified by quantum tomography, but the time cost to achieve a given accuracy increases exponentially with the size of the quantum system. In this Letter, we improve the time cost by adapting quantum phase estimation, an algorithm designed for computational problems, to measurements on physical systems. We present a PME protocol without quantum tomography for Hamiltonians whose dimension and energy scale are given but which are otherwise unknown. Our protocol implements a PME to arbitrary accuracy without any dimension dependence on its time cost. We also show that another computational quantum algorithm may be used for efficient estimation of the energy scale. These algorithms show that computational quantum algorithms, with suitable modifications, have applications beyond their original context.

  15. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    SciTech Connect

    Li, Yupeng; Deutsch, Clayton V.

    2012-06-15

    In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimate subject to lower-order bivariate probability constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher-order marginal probability constraints as used in multiple-point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
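
    To make the fitting idea concrete, here is a minimal two-dimensional IPF sketch that rescales a joint probability table until it honors imposed marginals. The paper's version operates on a higher-order multivariate probability with bivariate constraints and sparse-matrix bookkeeping, which this toy omits.

    ```python
    import numpy as np

    def ipf_2d(seed, row_targets, col_targets, n_iter=100, tol=1e-10):
        """Two-dimensional iterative proportional fitting: alternately rescale
        rows and columns of an initial joint table until its marginals match
        the imposed targets."""
        P = seed.astype(float).copy()
        for _ in range(n_iter):
            P *= (row_targets / P.sum(axis=1))[:, None]   # fit row marginals
            P *= (col_targets / P.sum(axis=0))[None, :]   # fit column marginals
            if np.allclose(P.sum(axis=1), row_targets, atol=tol):
                break
        return P

    # Toy facies example: 3 categories at two locations, marginals imposed.
    seed = np.full((3, 3), 1.0 / 9.0)
    P = ipf_2d(seed, row_targets=np.array([0.5, 0.3, 0.2]),
               col_targets=np.array([0.4, 0.4, 0.2]))
    print(P.sum(axis=1), P.sum(axis=0))   # both marginals now match the targets
    ```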

  16. Quantum Algorithm for Universal Implementation of the Projective Measurement of Energy

    NASA Astrophysics Data System (ADS)

    Nakayama, Shojun; Soeda, Akihito; Murao, Mio

    2015-05-01

    A projective measurement of energy (PME) on a quantum system is a quantum measurement determined by the Hamiltonian of the system. PME protocols exist when the Hamiltonian is given in advance. Unknown Hamiltonians can be identified by quantum tomography, but the time cost to achieve a given accuracy increases exponentially with the size of the quantum system. In this Letter, we improve the time cost by adapting quantum phase estimation, an algorithm designed for computational problems, to measurements on physical systems. We present a PME protocol without quantum tomography for Hamiltonians whose dimension and energy scale are given but which are otherwise unknown. Our protocol implements a PME to arbitrary accuracy without any dimension dependence on its time cost. We also show that another computational quantum algorithm may be used for efficient estimation of the energy scale. These algorithms show that computational quantum algorithms, with suitable modifications, have applications beyond their original context.

  17. Design and Implementation of Broadcast Algorithms for Extreme-Scale Systems

    SciTech Connect

    Shamis, Pavel; Graham, Richard L; Gorentla Venkata, Manjunath; Ladd, Joshua

    2011-01-01

    The scalability and performance of collective communication operations limit the scalability and performance of many scientific applications. This paper presents two new blocking and nonblocking Broadcast algorithms for communicators with arbitrary communication topology, and studies their performance. These algorithms benefit from increased concurrency and a reduced memory footprint, making them suitable for use on large-scale systems. Measuring small, medium, and large data Broadcasts on a Cray-XT5, using 24,576 MPI processes, the Cheetah algorithms outperform the native MPI on that system by 51%, 69%, and 9%, respectively, at the same process count. These results demonstrate an algorithmic approach to the implementation of the important class of collective communications, which is high performing, scalable, and also uses resources in a scalable manner.

  18. The density matrix renormalization group algorithm on kilo-processor architectures: Implementation and trade-offs

    NASA Astrophysics Data System (ADS)

    Nemes, Csaba; Barcza, Gergely; Nagy, Zoltán; Legeza, Örs; Szolgay, Péter

    2014-06-01

    In the numerical analysis of strongly correlated quantum lattice models one of the leading algorithms developed to balance the size of the effective Hilbert space and the accuracy of the simulation is the density matrix renormalization group (DMRG) algorithm, in which the run-time is dominated by the iterative diagonalization of the Hamilton operator. As the most time-dominant step of the diagonalization can be expressed as a list of dense matrix operations, the DMRG is an appealing candidate to fully utilize the computing power residing in novel kilo-processor architectures. In the paper a smart hybrid CPU-GPU implementation is presented, which exploits the power of both CPU and GPU and tolerates problems exceeding the GPU memory size. Furthermore, a new CUDA kernel has been designed for asymmetric matrix-vector multiplication to accelerate the rest of the diagonalization. Besides the evaluation of the GPU implementation, the practical limits of an FPGA implementation are also discussed.

  19. Spectral implementation of some quantum algorithms by one- and two-dimensional nuclear magnetic resonance

    NASA Astrophysics Data System (ADS)

    Das, Ranabir; Kumar, Anil

    2004-10-01

    Quantum information processing has been effectively demonstrated on a small number of qubits by nuclear magnetic resonance. An important subroutine in any computation is the readout of the output. "Spectral implementation", originally suggested by Z. L. Madi, R. Bruschweiler, and R. R. Ernst [J. Chem. Phys. 109, 10603 (1999)], provides an elegant method of readout with the use of an extra "observer" qubit. At the end of the computation, detection of the observer qubit provides the output via the multiplet structure of its spectrum. In spectral implementation by a two-dimensional experiment, the observer qubit retains the memory of the input state during computation, thereby providing correlated information on input and output in the same spectrum. Spectral implementation of Grover's search algorithm, approximate quantum counting, a modified version of the Bernstein-Vazirani problem, and Hogg's algorithm is demonstrated here in three- and four-qubit systems.

  20. On recursive least-squares filtering algorithms and implementations. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hsieh, Shih-Fu

    1990-01-01

    In many real-time signal processing applications, fast and numerically stable algorithms for solving least-squares problems are necessary and important. In particular, under non-stationary conditions, these algorithms must be able to adapt themselves to reflect the changes in the system and take appropriate adjustments to achieve optimum performances. Among existing algorithms, the QR-decomposition (QRD)-based recursive least-squares (RLS) methods have been shown to be useful and effective for adaptive signal processing. In order to increase the speed of processing and achieve high throughput rate, many algorithms are being vectorized and/or pipelined to facilitate high degrees of parallelism. A time-recursive formulation of RLS filtering employing block QRD will be considered first. Several methods, including a new non-continuous windowing scheme based on selectively rejecting contaminated data, were investigated for adaptive processing. Based on systolic triarrays, many other forms of systolic arrays are shown to be capable of implementing different algorithms. Various updating and downdating systolic algorithms and architectures for RLS filtering are examined and compared in details, which include Householder reflector, Gram-Schmidt procedure, and Givens rotation. A unified approach encompassing existing square-root-free algorithms is also proposed. For the sinusoidal spectrum estimation problem, a judicious method of separating the noise from the signal is of great interest. Various truncated QR methods are proposed for this purpose and compared to the truncated SVD method. Computer simulations provided for detailed comparisons show the effectiveness of these methods. This thesis deals with fundamental issues of numerical stability, computational efficiency, adaptivity, and VLSI implementation for the RLS filtering problems. In all, various new and modified algorithms and architectures are proposed and analyzed; the significance of any of the new method depends

  1. A dual-processor multi-frequency implementation of the FINDS algorithm

    NASA Technical Reports Server (NTRS)

    Godiwala, Pankaj M.; Caglayan, Alper K.

    1987-01-01

    This report presents a parallel processing implementation of the FINDS (Fault Inferring Nonlinear Detection System) algorithm on a dual processor configured target flight computer. First, a filter initialization scheme is presented which allows the no-fail filter (NFF) states to be initialized using the first iteration of the flight data. A modified failure isolation strategy, compatible with the new failure detection strategy reported earlier, is discussed and the performance of the new FDI algorithm is analyzed using flight recorded data from the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment. The results show that low level MLS, IMU, and IAS sensor failures are detected and isolated instantaneously, while accelerometer and rate gyro failures continue to take comparatively longer to detect and isolate. The parallel implementation is accomplished by partitioning the FINDS algorithm into two parts: one based on the translational dynamics and the other based on the rotational kinematics. Finally, a multi-rate implementation of the algorithm is presented yielding significantly low execution times with acceptable estimation and FDI performance.

  2. The design and hardware implementation of a low-power real-time seizure detection algorithm.

    PubMed

    Raghunathan, Shriram; Gupta, Sumeet K; Ward, Matthew P; Worth, Robert M; Roy, Kaushik; Irazoqui, Pedro P

    2009-10-01

    Epilepsy affects more than 1% of the world's population. Responsive neurostimulation is emerging as an alternative therapy for the 30% of the epileptic patient population that does not benefit from pharmacological treatment. Efficient seizure detection algorithms will enable closed-loop epilepsy prostheses by stimulating the epileptogenic focus within an early onset window. Critically, this is expected to reduce neuronal desensitization over time and lead to longer-term device efficacy. This work presents a novel event-based seizure detection algorithm along with a low-power digital circuit implementation. Hippocampal depth-electrode recordings from six kainate-treated rats are used to validate the algorithm and hardware performance in this preliminary study. The design process illustrates crucial trade-offs in translating mathematical models into hardware implementations and validates statistical optimizations made with empirical data analyses on results obtained using a real-time functioning hardware prototype. Using quantitatively predicted thresholds from the depth-electrode recordings, the auto-updating algorithm performs with an average sensitivity and selectivity of 95.3 +/- 0.02% and 88.9 +/- 0.01% (mean +/- SE(alpha = 0.05)), respectively, on untrained data with a detection delay of 8.5 s [5.97, 11.04] from electrographic onset. The hardware implementation is shown feasible using CMOS circuits consuming under 350 nW of power from a 250 mV supply voltage from simulations on the MIT 180 nm SOI process.

  3. An architecture for the efficient implementation of compressive sampling reconstruction algorithms in reconfigurable hardware

    NASA Astrophysics Data System (ADS)

    Ortiz, Fernando E.; Kelmelis, Eric J.; Arce, Gonzalo R.

    2007-04-01

    According to the Shannon-Nyquist theory, the number of samples required to reconstruct a signal is proportional to its bandwidth. Recently, it has been shown that acceptable reconstructions are possible from a reduced number of random samples, a process known as compressive sampling. Taking advantage of this realization has radical impact on power consumption and communication bandwidth, crucial in applications based on small/mobile/unattended platforms such as UAVs and distributed sensor networks. Although the benefits of these compression techniques are self-evident, the reconstruction process requires the solution of nonlinear signal processing algorithms, which limit applicability in portable and real-time systems. In particular, (1) the power consumption associated with the difficult computations offsets the power savings afforded by compressive sampling, and (2) limited computational power prevents these algorithms to maintain pace with the data-capturing sensors, resulting in undesirable data loss. FPGA based computers offer low power consumption and high computational capacity, providing a solution to both problems simultaneously. In this paper, we present an architecture that implements the algorithms central to compressive sampling in an FPGA environment. We start by studying the computational profile of the convex optimization algorithms used in compressive sampling. Then we present the design of a pixel pipeline suitable for FPGA implementation, able to compute these algorithms.

  4. Implementation and analysis of a Navier-Stokes algorithm on parallel computers

    NASA Technical Reports Server (NTRS)

    Fatoohi, Raad A.; Grosch, Chester E.

    1988-01-01

    The results of the implementation of a Navier-Stokes algorithm on three parallel/vector computers are presented. The object of this research is to determine how well, or poorly, a single numerical algorithm would map onto three different architectures. The algorithm is a compact difference scheme for the solution of the incompressible, two-dimensional, time-dependent Navier-Stokes equations. The computers were chosen so as to encompass a variety of architectures. They are the following: the MPP, an SIMD machine with 16K bit serial processors; Flex/32, an MIMD machine with 20 processors; and Cray/2. The implementation of the algorithm is discussed in relation to these architectures and measures of the performance on each machine are given. The basic comparison is among SIMD instruction parallelism on the MPP, MIMD process parallelism on the Flex/32, and vectorization of a serial code on the Cray/2. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.

  5. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    PubMed

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detection and tracking methods and has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs), which feature a many-core, fine-grained parallel architecture, provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism, in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experimental results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  6. Co-design of software and hardware to implement remote sensing algorithms

    SciTech Connect

    Theiler, J. P.; Frigo, J.; Gokhale, M.; Szymanski, J. J.

    2001-01-01

    Both for offline searches through large data archives and for onboard computation at the sensor head, there is a growing need for ever-more rapid processing of remote sensing data. For many algorithms of use in remote sensing, the bulk of the processing takes place in an 'inner loop' with a large number of simple operations. For these algorithms, dramatic speedups can often be obtained with specialized hardware. The difficulty and expense of digital design continues to limit applicability of this approach, but the development of new design tools is making this approach more feasible, and some notable successes have been reported. On the other hand, it is often the case that processing can also be accelerated by adopting a more sophisticated algorithm design. Unfortunately, a more sophisticated algorithm is much harder to implement in hardware, so these approaches are often at odds with each other. With careful planning, however, it is sometimes possible to combine software and hardware design in such a way that each complements the other, and the final implementation achieves speedup that would not have been possible with a hardware-only or a software-only solution. We will in particular discuss the co-design of software and hardware to achieve substantial speedup of algorithms for multispectral image segmentation and for endmember identification.

  7. Image coding using parallel implementations of the embedded zerotree wavelet algorithm

    NASA Astrophysics Data System (ADS)

    Creusere, Charles D.

    1996-03-01

    We explore here the implementation of Shapiro's embedded zerotree wavelet (EZW) image coding algorithms on an array of parallel processors. To this end, we first consider the problem of parallelizing the basic wavelet transform, discussing past work in this area and the compatibility of that work with the zerotree coding process. From this discussion, we present a parallel partitioning of the transform which is computationally efficient and which allows the wavelet coefficients to be coded with little or no additional inter-processor communication. The key to achieving low data dependence between the processors is to ensure that each processor contains only entire zerotrees of wavelet coefficients after the decomposition is complete. We next quantify the rate-distortion tradeoffs associated with different levels of parallelization for a few variations of the basic coding algorithm. Studying these results, we conclude that the quality of the coder decreases as the number of parallel processors used to implement it increases. Noting that the performance of the parallel algorithm might be unacceptably poor for large processor arrays, we also develop an alternate algorithm which always achieves the same rate-distortion performance as the original sequential EZW algorithm at the cost of higher complexity and reduced scalability.

  8. Edge detection algorithms implemented on Bi-i cellular vision system

    NASA Astrophysics Data System (ADS)

    Karabiber, Fethullah; Arik, Sabri

    2009-02-01

    The Bi-i (Bio-inspired) Cellular Vision system is built mainly on Cellular Neural/Nonlinear Network (CNN) type (ACE16k) and Digital Signal Processing (DSP) type microprocessors. CNN theory, proposed by Chua, has advanced properties for image processing applications. In this study, edge detection algorithms are implemented on the Bi-i Cellular Vision System. Extracting the edges of an image correctly and quickly is of crucial importance for image processing applications. A threshold-gradient-based edge detection algorithm is implemented using the ACE16k microprocessor. In addition, a pre-processing operation is realized by using an image enhancement technique based on the Laplacian operator. Finally, morphologic operations are performed as post-processing operations. The Sobel edge detection algorithm is performed by convolving the Sobel operators with the image in the DSP. The performances of the edge detection algorithms are compared using visual inspection and timing analysis. Experimental results show that the ACE16k has great computational power and that the Bi-i Cellular Vision System is well qualified for applying image processing algorithms in real time.
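
    The Sobel step described in the abstract is the classical gradient-magnitude convolution. A NumPy/SciPy sketch of that step alone (software, not the ACE16k/DSP implementation; the threshold value is an illustrative assumption) is:

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def sobel_edges(img, threshold=50.0):
        """Sobel edge detection: convolve with the two 3x3 Sobel kernels,
        form the gradient magnitude, and threshold to a binary edge map."""
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T
        gx = convolve(img.astype(float), kx)   # horizontal gradient
        gy = convolve(img.astype(float), ky)   # vertical gradient
        mag = np.hypot(gx, gy)                 # gradient magnitude
        return (mag > threshold).astype(np.uint8) * 255

    # Vertical step edge: the columns around the step are flagged as edges.
    img = np.zeros((8, 8)); img[:, 4:] = 200.0
    print(sobel_edges(img)[4])
    ```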

  9. Software Environment for the Implementation of Tomographic Reconstruction Algorithms Applied to Cases of Few Projections

    SciTech Connect

    Rios, A. B.; Valda, A.; Somacal, H.

    2007-10-26

    Usually, a tomographic procedure requires a set of projections around the object under study and mathematical processing of those projections through reconstruction algorithms. An accurate reconstruction requires a proper number of projections (angular sampling) and a proper number of elements in each projection (linear sampling). However, in several practical cases it is not possible to fulfill these conditions, leading to the so-called problem of few projections. In this case, iterative reconstruction algorithms are more suitable than analytic ones. In this work we present a program written in C++ that provides an environment for two iterative algorithm implementations, one algebraic and the other statistical. The software allows the user to fully define the acquisition and reconstruction geometries used by the reconstruction algorithms and also to perform projection and backprojection operations. A set of analysis tools was implemented for the characterization of the convergence process. We analyze the performance of the algorithms on numerical phantoms and present the reconstruction of experimental data with few projections coming from transmission X-ray and micro-PIXE (Particle-Induced X-Ray Emission) images.

  10. Universal perceptron and DNA-like learning algorithm for binary neural networks: LSBF and PBF implementations.

    PubMed

    Chen, Fangyue; Chen, Guanrong Ron; He, Guolong; Xu, Xiubin; He, Qinbin

    2009-10-01

    Universal perceptron (UP), a generalization of Rosenblatt's perceptron, is considered in this paper; it is capable of implementing all Boolean functions (BFs). In the classification of BFs, there are: 1) the linearly separable Boolean function (LSBF) class, 2) the parity Boolean function (PBF) class, and 3) the non-LSBF and non-PBF class. To implement these functions, the UP takes different kinds of simple topological structures, each of which contains at most one hidden layer along with the smallest possible number of hidden neurons. Inspired by the concept of DNA sequences in biological systems, a novel learning algorithm named DNA-like learning is developed, which is able to quickly train a network with any prescribed BF. The focus is on performing LSBFs and PBFs with a single-layer perceptron (SLP) using the new algorithm. Two criteria for LSBFs and PBFs are proposed, respectively, and a new measure for a BF, named the nonlinearly separable degree (NLSD), is introduced. In the sense of this measure, the PBF is the most complex one. The new algorithm has many advantages including, in particular, fast running speed, good robustness, and no need to consider the convergence property. For example, the number of iterations and computations needed to implement the basic 2-bit logic operations such as AND, OR, and XOR with the new algorithm is far smaller than that needed by other existing algorithms such as the error-correction (EC) and backpropagation (BP) algorithms. Moreover, the synaptic weights and threshold values derived from the UP can be directly used in designing the templates of cellular neural networks (CNNs), which have been considered as a new spatial-temporal sensory computing paradigm.

  11. Real-time implementation of a multispectral mine target detection algorithm

    NASA Astrophysics Data System (ADS)

    Samson, Joseph W.; Witter, Lester J.; Kenton, Arthur C.; Holloway, John H., Jr.

    2003-09-01

    Spatial-spectral anomaly detection (the "RX algorithm") has been exploited on the USMC's Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) and several associated technology-base studies, and has been found to be a useful method for the automated detection of surface-emplaced antitank land mines in airborne multispectral imagery. RX is a complex image processing algorithm that involves the direct spatial convolution of a target/background mask template over each multispectral image, coupled with spatially variant background spectral covariance matrix estimation and inversion. The RX throughput on the ATD was about 38X real time using a single Sun UltraSparc system. An effort to demonstrate RX in real time began in FY01. We now report the development and demonstration of a Field Programmable Gate Array (FPGA) solution that achieves a real-time implementation of the RX algorithm at video rates using COBRA ATD data. The approach uses an Annapolis Microsystems Firebird PMC card containing a Xilinx XCV2000E FPGA with over 2,500,000 logic gates and 18 MBytes of memory. A prototype system was configured using a Tek Microsystems VME board with dual PowerPC G4 processors and two PMC slots. The RX algorithm was translated from its C implementation into VHDL and synthesized into gates loaded onto the FPGA. The VHDL/synthesizer approach allows key RX parameters to be changed quickly and a new implementation to be generated automatically. Reprogramming the FPGA is rapid and done in-circuit. Implementation of the RX algorithm in a single FPGA is a major first step toward achieving real-time land mine detection.
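
    To convey the core computation, the sketch below implements a simplified global RX detector in Python: the Mahalanobis distance of each pixel spectrum from the scene statistics. The COBRA implementation described above uses a spatially variant local background estimated under a mask template, so this global version is only an assumption-laden illustration.

    import numpy as np

    def rx_scores(cube):
        """Global RX: Mahalanobis distance of each pixel spectrum from
        the scene mean (the fielded system uses local statistics)."""
        h, w, bands = cube.shape
        pixels = cube.reshape(-1, bands).astype(float)
        mu = pixels.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(pixels, rowvar=False))  # stable inverse
        centered = pixels - mu
        scores = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
        return scores.reshape(h, w)     # large score = spectral anomaly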

  12. Implementation of a spiral CT backprojection algorithm on the Cell Broadband Engine processor

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Goddard, Iain; Schuberth, Sebastian; Seebass, Martin

    2006-03-01

    Over the last few decades, the medical imaging community has debated different approaches to implementing reconstruction algorithms for spiral CT, and numerous alternatives have been proposed. Whether approximate, exact, or iterative, these implementations generally include a backprojection step. Specialized compute platforms have been designed to perform this compute-intensive algorithm within a timeframe compatible with hospital-workflow requirements. Solving the performance problem in a cost-effective way has driven designers to use combinations of digital signal processor (DSP) chips, general-purpose processors, application-specific integrated circuits (ASICs), and field programmable gate arrays (FPGAs). The Cell processor by IBM offers an interesting alternative for implementing the backprojection, especially since it provides a good level of parallelism and vast I/O capabilities. In this paper, we consider the implementation of a straight backprojection algorithm on the Cell processor to design a cost-effective system that matches the performance requirements of clinically deployed systems. The effects on performance of system parameters such as pitch and detector size are also analyzed to determine the ideal system size for modern CT scanners.
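
    As a minimal illustration of the backprojection step (here unfiltered, parallel-beam, and two-dimensional, rather than the spiral cone-beam geometry targeted on the Cell), each projection is smeared back across the image along its acquisition angle:

    import numpy as np

    def backproject(sinogram, angles, size):
        """Unfiltered parallel-beam backprojection of a sinogram with
        shape (n_angles, n_detectors) onto a size-by-size image."""
        recon = np.zeros((size, size))
        coords = np.arange(size) - size / 2.0
        xx, yy = np.meshgrid(coords, coords)
        n_det = sinogram.shape[1]
        for proj, theta in zip(sinogram, angles):
            # detector coordinate of every image pixel for this angle
            t = xx * np.cos(theta) + yy * np.sin(theta) + n_det / 2.0
            idx = np.clip(t.astype(int), 0, n_det - 1)
            recon += proj[idx]
        return recon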

  13. Implementation of The LDA Algorithm for Online Validation Based on Face Recognition

    NASA Astrophysics Data System (ADS)

    Zainuddin, Z.; Laswi, A. S.

    2017-01-01

    This paper reports work on the implementation of a computer vision application, face recognition, for online validation in distance learning. Face recognition was chosen over other validation alternatives because of its robustness: basic validation such as a password cannot verify the identity of a distance-learning student, which is unacceptable, especially in distance examinations. The face recognition algorithm used in this research is Linear Discriminant Analysis (LDA). Using this algorithm, the system recognizes authorized persons with about 93% accuracy and rejects unauthorized persons with 100% accuracy.
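
    A minimal sketch of LDA-based recognition with a rejection threshold is given below, using scikit-learn and synthetic stand-in data; the face data, feature dimensions, and the 0.9 posterior threshold are all hypothetical assumptions, not details from the paper.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Hypothetical data: each row stands in for a flattened, aligned face
    # image; each label identifies an enrolled student.
    rng = np.random.default_rng(0)
    faces = rng.normal(size=(200, 1024))
    labels = rng.integers(0, 10, size=200)

    lda = LinearDiscriminantAnalysis()
    lda.fit(faces, labels)

    probe = faces[0] + rng.normal(scale=0.1, size=1024)
    posterior = lda.predict_proba([probe]).max()
    identity = lda.predict([probe])[0]
    # Reject as unauthorized when the best posterior is too low.
    print(identity if posterior > 0.9 else "rejected")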

  14. Parallel Implementation of the Recursive Approximation of an Unsupervised Hierarchical Segmentation Algorithm. Chapter 5

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Plaza, Antonio J. (Editor); Chang, Chein-I. (Editor)

    2008-01-01

    The hierarchical image segmentation algorithm (referred to as HSEG) is a hybrid of hierarchical step-wise optimization (HSWO) and constrained spectral clustering that produces a hierarchical set of image segmentations. HSWO is an iterative approach to region-growing segmentation in which the optimal image segmentation is found at N(sub R) regions, given a segmentation at N(sub R+1) regions. HSEG's addition of constrained spectral clustering makes it a computationally intensive algorithm for all but the smallest of images. To counteract this, a computationally efficient recursive approximation of HSEG (called RHSEG) has been devised. Further improvements in processing speed are obtained through a parallel implementation of RHSEG. This chapter describes this parallel implementation and demonstrates its computational efficiency on a Landsat Thematic Mapper test scene.

  15. A Flexible VHDL Floating Point Module for Control Algorithm Implementation in Space Applications

    NASA Astrophysics Data System (ADS)

    Padierna, A.; Nicoleau, C.; Sanchez, J.; Hidalgo, I.; Elvira, S.

    2012-08-01

    The implementation of control loops for space applications is an area with great potential. However, the characteristics of this kind of system, such as its wide dynamic range of numeric values, make fixed-point algorithms inadequate. Because the generic chips available for processing floating-point data are, in general, not qualified to operate in space environments, and because using an IP module in a space-qualified FPGA/ASIC is not viable due to the low number of logic cells available in this type of device, it is necessary to find a viable alternative. For these reasons, this paper presents a VHDL floating-point module that allows the design and execution of floating-point algorithms with an occupancy low enough to be implemented in FPGAs/ASICs qualified for space environments.

  16. VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter

    NASA Astrophysics Data System (ADS)

    Guliyev, E.; Kavatsyuk, M.; Lemmens, P. J. J.; Tambave, G.; Löhner, H.; Panda Collaboration

    2012-02-01

    A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The source code is available as an open-source project and is adaptable to other projects and sampling ADCs. Best performance with different types of signal sources can be achieved through flexible parameter selection. The on-line data processing in the FPGA enables the construction of an almost dead-time-free data acquisition system, which has been successfully evaluated as a first step towards building a complete trigger-less readout chain. Prototype setups are studied to determine the dead time of the implemented algorithm, the rate of false triggering, the timing performance, and event correlations.

  17. Implementation and operation of three fractal measurement algorithms for analysis of remote-sensing data

    NASA Technical Reports Server (NTRS)

    Jaggi, S.; Quattrochi, Dale A.; Lam, Nina S.-N.

    1993-01-01

    Fractal geometry is increasingly becoming a useful tool for modeling natural phenomena. As an alternative to Euclidean concepts, fractals allow for a more accurate representation of the nature of complexity in natural boundaries and surfaces. The purpose of this paper is to introduce and implement three algorithms in C code for deriving fractal measurements from remotely sensed data. The three methods are: the line-divider method, the variogram method, and the triangular prism method. Remote-sensing data acquired by NASA's Calibrated Airborne Multispectral Scanner (CAMS) are used to compute the fractal dimension with each of the three methods. These data were obtained at a 30 m pixel spatial resolution over a portion of western Puerto Rico in January 1990. A description of the three methods, their implementation in a PC-compatible environment, and some results of applying these algorithms to remotely sensed image data are presented.
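
    Of the three methods, the variogram method is the most compact to sketch. For a fractal surface the semivariance scales as gamma(h) ~ h^(2H), so the fractal dimension D = 3 - H follows from the log-log slope. The Python sketch below estimates it from horizontal pixel pairs only, a simplifying assumption relative to the C implementations described.

    import numpy as np

    def variogram_fractal_dimension(surface, max_lag=20):
        """Variogram method: fit the log-log scaling of the semivariance
        with lag h and convert the Hurst exponent H to D = 3 - H."""
        lags, semivariances = [], []
        for h in range(1, max_lag + 1):
            diffs = surface[:, h:] - surface[:, :-h]   # horizontal pairs
            lags.append(h)
            semivariances.append(0.5 * np.mean(diffs ** 2))
        slope, _ = np.polyfit(np.log(lags), np.log(semivariances), 1)
        return 3.0 - slope / 2.0            # slope = 2H for a fractal surface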

  18. Segmentation algorithm via Cellular Neural/Nonlinear Network: implementation on Bio-inspired hardware platform

    NASA Astrophysics Data System (ADS)

    Karabiber, Fethullah; Vecchio, Pietro; Grassi, Giuseppe

    2011-12-01

    The Bio-inspired (Bi-i) Cellular Vision System is a computing platform consisting of sensing, array sensing-processing, and digital signal processing. The platform is based on the Cellular Neural/Nonlinear Network (CNN) paradigm. This article presents the implementation of a novel CNN-based segmentation algorithm on the Bi-i system. Each part of the algorithm, along with the corresponding implementation on the hardware platform, is carefully described throughout the article. The experimental results, carried out for the Foreman and Car-phone video sequences, highlight the feasibility of the approach, which provides a frame rate of about 26 frames/s. Comparisons with existing CNN-based methods show that the conceived approach is more accurate, thus representing a good trade-off between real-time requirements and accuracy.

  19. A hardware-oriented histogram of oriented gradients algorithm and its VLSI implementation

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangyu; An, Fengwei; Nakashima, Ikki; Luo, Aiwen; Chen, Lei; Ishii, Idaku; Jürgen Mattausch, Hans

    2017-04-01

    A challenging and important issue for object recognition is feature extraction on embedded systems. We report a hardware implementation of the histogram of oriented gradients (HOG) algorithm for real-time object recognition, which is known to provide high efficiency and accuracy. The developed hardware-oriented algorithm exploits a cell-based scan strategy which enables image-sensor synchronization and extraction-speed acceleration; buffers for image frames or integral images are thereby avoided. An image-size-scalable hardware architecture with an effective bin decoder and a parallelized voting element (PVE) is developed and used to verify the hardware-oriented HOG implementation, applied to human detection. The fabricated test chip in 180 nm CMOS technology achieves fast processing speed and large flexibility for different image resolutions with substantially reduced hardware cost and energy consumption.
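
    A software sketch of the cell-wise histogram computation at the heart of HOG is shown below. It mirrors the cell-based scan idea in plain Python/NumPy with illustrative cell size and bin count, and omits the block normalisation and the hardware bin decoder.

    import numpy as np

    def hog_cell_histograms(image, cell=8, bins=9):
        """Cell-wise HOG: orientation histograms of gradient magnitude,
        accumulated cell by cell over unsigned orientations."""
        gy, gx = np.gradient(image.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned bins
        h, w = image.shape
        hists = np.zeros((h // cell, w // cell, bins))
        for i in range(h // cell):
            for j in range(w // cell):
                sl = np.s_[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
                idx = (ang[sl] / (180.0 / bins)).astype(int) % bins
                np.add.at(hists[i, j], idx.ravel(), mag[sl].ravel())
        return hists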

  20. An Optional Threshold with Svm Cloud Detection Algorithm and Dsp Implementation

    NASA Astrophysics Data System (ADS)

    Zhou, Guoqing; Zhou, Xiang; Yue, Tao; Liu, Yilong

    2016-06-01

    This paper presents a method that combines the traditional threshold method and the SVM method to detect cloud in Landsat-8 images. The proposed method is implemented on a DSP for real-time cloud detection; the DSP platform connects to an emulator and a personal computer. The threshold method is first utilized to obtain a coarse cloud detection result, and then the SVM classifier is used to obtain a high-accuracy result. More than 200 cloudy Landsat-8 images were used in experiments to test the proposed method. Comparing the proposed method with the SVM method demonstrates that the cloud detection accuracy for each image using the proposed algorithm is higher than that of the SVM algorithm. The experimental results demonstrate that the implementation of the proposed method on a DSP can effectively realize accurate real-time cloud detection.

  1. A new morphological anomaly detection algorithm for hyperspectral images and its GPU implementation

    NASA Astrophysics Data System (ADS)

    Paz, Abel; Plaza, Antonio

    2011-10-01

    Anomaly detection is considered a very important task for hyperspectral data exploitation. It is now routinely applied in many application domains, including defence and intelligence, public safety, precision agriculture, geology, and forestry. Many of these applications require timely responses for swift decisions, which depend upon the high computing performance of algorithm analysis. However, with the recent explosion in the amount and dimensionality of hyperspectral imagery, this problem calls for the incorporation of parallel computing techniques. In the past, clusters of computers have offered an attractive solution for fast anomaly detection in hyperspectral data sets already transmitted to Earth. However, these systems are expensive and difficult to adapt to on-board data processing scenarios, in which low-weight and low-power integrated components are essential to reduce mission payload and obtain analysis results in (near) real time, i.e., at the same time as the data is collected by the sensor. An exciting development in the field of commodity computing is the emergence of commodity graphics processing units (GPUs), which can now bridge the gap towards on-board processing of remotely sensed hyperspectral data. In this paper, we develop a new morphological algorithm for anomaly detection in hyperspectral images along with an efficient GPU implementation. The algorithm is implemented on latest-generation GPU architectures and evaluated with regard to other anomaly detection algorithms using hyperspectral data collected by NASA's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) over the World Trade Center (WTC) in New York, five days after the terrorist attacks that collapsed the two main towers in the WTC complex. The proposed GPU implementation achieves real-time performance in the considered case study.

  2. An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Gutknecht, Martin H.; Nachtigal, Noel M.

    1991-01-01

    The nonsymmetric Lanczos method can be used to compute eigenvalues of large sparse non-Hermitian matrices or to solve large sparse non-Hermitian linear systems. However, the original Lanczos algorithm is susceptible to possible breakdowns and potential instabilities. An implementation is presented of a look-ahead version of the Lanczos algorithm that, except for the very special situation of an incurable breakdown, overcomes these problems by skipping over those steps in which a breakdown or near-breakdown would occur in the standard process. The proposed algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products as the standard Lanczos process without look-ahead.
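
    For reference, the standard nonsymmetric (two-sided) Lanczos process that the look-ahead version extends can be sketched as below; the breakdown test in this Python sketch marks exactly the step a look-ahead variant would skip over rather than abort. The scaling convention and tolerance are illustrative assumptions.

    import numpy as np

    def two_sided_lanczos(A, v1, w1, m):
        """Standard Lanczos biorthogonalization (no look-ahead)."""
        v = v1 / (w1 @ v1)                 # scale so w^T v = 1 (assumes w1^T v1 != 0)
        w = w1.astype(float).copy()
        v_prev, w_prev = np.zeros_like(v), np.zeros_like(w)
        beta = gamma = 0.0
        alphas = []
        for _ in range(m):
            Av = A @ v
            alpha = w @ Av
            alphas.append(alpha)
            v_hat = Av - alpha * v - beta * v_prev
            w_hat = A.T @ w - alpha * w - gamma * w_prev
            dot = w_hat @ v_hat
            if abs(dot) < 1e-12:           # (near-)breakdown: look-ahead needed
                raise RuntimeError("breakdown: look-ahead step required")
            gamma = np.sqrt(abs(dot))      # scaling for the new v
            beta = dot / gamma             # scaling for the new w
            v_prev, w_prev = v, w
            v, w = v_hat / gamma, w_hat / beta
        return np.array(alphas)            # diagonal of the tridiagonal matrix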

  3. Implementation of an algorithm for cylindrical object identification using range data

    NASA Technical Reports Server (NTRS)

    Bozeman, Sylvia T.; Martin, Benjamin J.

    1989-01-01

    One of the problems in 3-D object identification and localization is addressed. In robotic and navigation applications the vision system must be able to distinguish cylindrical or spherical objects as well as those of other geometric shapes. An algorithm was developed to identify cylindrical objects in an image when range data is used. The algorithm incorporates the Hough transform for line detection using edge points which emerge from a Sobel mask. Slices of the data are examined to locate arcs of circles using the normal equations of an over-determined linear system. Current efforts are devoted to testing the computer implementation of the algorithm. Refinements are expected to continue in order to accommodate cylinders in various positions. A technique is sought which is robust in the presence of noise and partial occlusions.
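
    The arc-location step can be illustrated by the classical least-squares circle fit: writing the circle as x^2 + y^2 + a*x + b*y + c = 0 makes the problem linear in (a, b, c), giving the over-determined system mentioned above. A minimal NumPy sketch (not the authors' code):

    import numpy as np

    def fit_circle(x, y):
        """Least-squares circle fit from point coordinates: solve the
        over-determined linear system for (a, b, c), then recover the
        center and radius."""
        A = np.column_stack([x, y, np.ones_like(x)])
        rhs = -(x ** 2 + y ** 2)
        (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        center = (-a / 2.0, -b / 2.0)
        radius = np.sqrt(center[0] ** 2 + center[1] ** 2 - c)
        return center, radius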

  4. Development of potential methods for testing congestion control algorithm implemented in vehicle to vehicle communications.

    PubMed

    Hsu, Chung-Jen; Fikentscher, Joshua; Kreeb, Robert

    2017-03-21

    Objective: A channel congestion problem might occur as traffic density increases, since the number of basic safety messages carried on the communication channel also increases in vehicle-to-vehicle communications. A remedy algorithm proposed in SAE J2945/1 is designed to address the channel congestion issue by decreasing transmission frequency and radiated power. This study develops potential test procedures for evaluating or validating the congestion control algorithm. Methods: Simulations of a reference unit transmitting at a higher frequency are implemented to emulate a number of Onboard Equipment (OBE) units transmitting at the normal interval of 100 milliseconds (10 Hz). When the transmitting interval is reduced to 1.25 milliseconds (800 Hz), the reference unit emulates 80 vehicles transmitting at 10 Hz. By increasing the number of reference units transmitting at 800 Hz in the simulations, the corresponding channel busy percentages are obtained. An algorithm for GPS data generation of virtual vehicles is developed to facilitate the validation of transmission intervals in the congestion control algorithm. Results: Channel busy percentage is the channel busy time over a specified period of time. Three or four reference units are needed to generate channel busy percentages between 50% and 80%, and five reference units can generate channel busy percentages above 80%. The proposed test procedures can verify the operation of the congestion control algorithm when channel busy percentages are between 50% and 80%, and above 80%. By using the GPS data generation algorithm, the test procedures can also verify the transmission intervals when traffic densities are 80 and 200 vehicles within a radius of 100 m. A suite of test tools with functional requirements is also proposed to facilitate the implementation of the test procedures. Conclusions: The potential test procedures for the congestion control algorithm are developed based on the simulation results of channel busy percentages.

  5. Implementation of a block Lanczos algorithm for Eigenproblem solution of gyroscopic systems

    NASA Technical Reports Server (NTRS)

    Gupta, Kajal K.; Lawson, Charles L.

    1987-01-01

    The details of implementation of a general numerical procedure developed for the accurate and economical computation of natural frequencies and associated modes of any elastic structure rotating about an arbitrary axis are described. A block version of the Lanczos algorithm is derived for the solution; it fully exploits the associated matrix sparsity and employs only real numbers in all relevant computations. It is also capable of determining multiple roots and proves to be most efficient when compared to other similar existing techniques.

  6. The conception and implementation of a local HDR fusion algorithm depending on contrast and luminosity parameters

    NASA Astrophysics Data System (ADS)

    Besrour, Amine; Abdelkefi, Fatma; Siala, Mohamed; Snoussi, Hichem

    2015-09-01

    High dynamic range (HDR) imaging is currently the subject of extensive research. The major problem lies in implementing an algorithm that achieves the best video quality: the fusion must be optimal yet keep up with the rapid succession of video frames, and previously implemented merging algorithms were not fast enough to reconstruct HDR video. In this paper, we review the existing works before detailing our algorithm and presenting results from the acquired HDR images, tone mapped with various techniques. Our proposed algorithm provides a more enhanced and faster solution than the existing ones. It computes a saturation matrix related to the saturation rate of the neighboring pixels, and the computed coefficients are assigned to each of the tested pictures. This analysis provides faster and more efficient results in terms of quality and brightness. The originality of our work lies in its processing method, which takes into account the saturation of pixels across all captured pictures and combines them to obtain a final picture showing all recoverable details. These parameters are computed for each zone depending on the contrast and the luminosity of the current pixel and its neighborhood. The final HDR image coefficients are calculated dynamically, ensuring the best image quality while balancing brightness and contrast.

  7. Artificial immune algorithm implementation for optimized multi-axis sculptured surface CNC machining

    NASA Astrophysics Data System (ADS)

    Fountas, N. A.; Kechagias, J. D.; Vaxevanidis, N. M.

    2016-11-01

    This paper presents the results obtained by implementing an artificial immune algorithm to optimize standard multi-axis tool-paths applied to machining free-form surfaces. The investigation of its applicability was based on a full factorial experimental design addressing the two additional axes for tool inclination as independent variables, whilst a multi-objective response was formulated by taking into consideration surface deviation and tool-path time, objectives assessed directly from the computer-aided manufacturing environment. A standard sculptured part was developed from scratch considering its benchmark specifications, and a cutting-edge surface machining tool-path was applied to study the effects of the pattern formulated when dynamically inclining a toroidal end-mill and guiding it along the feed direction under fixed lead and tilt inclination angles. The results obtained from the series of experiments were used to create the fitness function that the algorithm sequentially evaluates. It was found that the artificial immune algorithm can attain optimal values for the inclination angles, thus easing the complexity of this manufacturing process and ensuring full potential in multi-axis machining modelling operations for producing enhanced CNC manufacturing programs. Results suggested that the proposed algorithm implementation may reduce the mean experimental objective value to 51.5%.

  8. Universal perceptron and DNA-like learning algorithm for binary neural networks: non-LSBF implementation.

    PubMed

    Chen, Fangyue; Chen, Guanrong; He, Qinbin; He, Guolong; Xu, Xiubin

    2009-08-01

    Implementing linearly nonseparable Boolean functions (non-LSBF) has been an important and yet challenging task due to the extremely high complexity of this kind of function and the exponentially increasing percentage of non-LSBF in the entire set of Boolean functions as the number of input variables increases. In this paper, an algorithm named the DNA-like learning and decomposing algorithm (DNA-like LDA) is proposed, which is capable of effectively implementing non-LSBF. The novel algorithm first trains the DNA-like offset sequence and decomposes the non-LSBF into logic XOR operations on a sequence of LSBFs, and then determines the weight-threshold values of the multilayer perceptron (MLP) that performs both the decomposition into LSBFs and the mapping of the hidden neurons to the output neuron. The algorithm is validated by two typical examples: approximating a circular region and the well-known n-bit parity Boolean function (PBF).

  9. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    SciTech Connect

    Madduri, Kamesh; Ediger, David; Jiang, Karl; Bader, David A.; Chavarria-Miranda, Daniel

    2009-02-15

    We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the Threadstorm processor, and a single-socket Sun multicore server with the UltraSPARC T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
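
    The serial baseline that such parallel implementations accelerate is Brandes' algorithm: one breadth-first search per source followed by a dependency back-propagation pass. The Python sketch below shows the unweighted, exact variant; the paper's contribution lies in the lock-free parallelization and cache-friendly data structures, which are not reproduced here.

    from collections import deque

    def betweenness(adj):
        """Brandes' algorithm on an unweighted graph given as a dict
        mapping each vertex to its neighbor list; O(V*E) serial time."""
        bc = {v: 0.0 for v in adj}
        for s in adj:
            sigma = {v: 0 for v in adj}; sigma[s] = 1   # path counts
            dist = {v: -1 for v in adj}; dist[s] = 0
            preds = {v: [] for v in adj}
            order, queue = [], deque([s])
            while queue:                                # BFS phase
                v = queue.popleft(); order.append(v)
                for w in adj[v]:
                    if dist[w] < 0:
                        dist[w] = dist[v] + 1; queue.append(w)
                    if dist[w] == dist[v] + 1:
                        sigma[w] += sigma[v]; preds[w].append(v)
            delta = {v: 0.0 for v in adj}
            for w in reversed(order):                   # accumulation phase
                for v in preds[w]:
                    delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
                if w != s:
                    bc[w] += delta[w]
        return bc

    # Path graph a-b-c: only the middle vertex accumulates centrality.
    print(betweenness({'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}))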

  10. Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation

    NASA Astrophysics Data System (ADS)

    Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei

    2016-11-01

    Cooled infrared detector arrays often suffer from undesired ripple fixed-pattern noise (FPN) when observing sky scenes. Ripple FPN seriously affects the imaging quality of a thermal imager, especially for small-target detection and tracking, and it is hard to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified spatial low-pass and temporal high-pass nonuniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared data in comparison to several previously published methods. The algorithm not only effectively corrects common FPN such as stripes, but also has clear advantages over current methods in detail preservation and convergence speed, especially for ripple FPN correction. Furthermore, we present our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The hardware implementation of the algorithm on the FPGA has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.
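
    The idea of a temporal high-pass correction with a change gate can be sketched as follows; the time constant and threshold are illustrative assumptions, and the published THP&GM algorithm's spatial low-pass stage and adaptive threshold design are not reproduced.

    import numpy as np

    def temporal_highpass_nuc(frames, time_const=32, threshold=10.0):
        """Temporal high-pass nonuniformity correction: subtract a slowly
        updated per-pixel mean; freezing the update where the frame-to-frame
        change is large (the gate) limits ghosting on moving scene content."""
        lowpass = frames[0].astype(float)
        corrected = []
        for frame in frames:
            frame = frame.astype(float)
            update = np.abs(frame - lowpass) < threshold   # scene-change gate
            lowpass[update] += (frame[update] - lowpass[update]) / time_const
            corrected.append(frame - lowpass)              # high-pass output
        return corrected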

  11. Implementing Few-Body Algorithmic Regularization with Post-Newtonian Terms

    NASA Astrophysics Data System (ADS)

    Mikkola, Seppo; Merritt, David

    2008-06-01

    We discuss the implementation of a new regular algorithm for simulation of the gravitational few-body problem. The algorithm uses components from earlier methods, including the chain structure, the logarithmic Hamiltonian, and the time-transformed leapfrog. This algorithmic regularization code, AR-CHAIN, can be used for the normal N-body problem, as well as for problems with softened potentials and/or with velocity-dependent external perturbations, including post-Newtonian terms, which we include up to order PN2.5. Arbitrarily extreme mass ratios are allowed. Only linear coordinate transformations are used and thus the algorithm is somewhat simpler than many earlier regularized schemes. We present the results of performance tests which suggest that the new code is either comparable in performance or superior to the existing regularization schemes based on the Kustaanheimo-Stiefel (KS) transformation. This is true even for the two-body problem, independent of eccentricity. An important advantage of the new method is that, contrary to the older KS-CHAIN code, zero masses are allowed. We use our algorithm to integrate the orbits of the S stars around the Milky Way supermassive black hole for one million years, including PN2.5 terms and an intermediate-mass black hole. The three S stars with shortest periods are observed to escape from the system after a few hundred thousand years.

  12. Implementation and performance of stochastic parallel gradient descent algorithm for atmospheric turbulence compensation

    NASA Astrophysics Data System (ADS)

    Finney, Greg A.; Persons, Christopher M.; Henning, Stephan; Hazen, Jessie; Whitley, Daniel

    2014-06-01

    IERUS Technologies, Inc. and the University of Alabama in Huntsville have partnered to perform characterization and development of algorithms and hardware for adaptive optics. To date the algorithm work has focused on implementation of the stochastic parallel gradient descent (SPGD) algorithm. SPGD is a metric-based approach in which a scalar metric is optimized by taking random perturbative steps for many actuators simultaneously. This approach scales to systems with a large number of actuators while maintaining bandwidth, while conventional methods are negatively impacted by the very large matrix multiplications that are required. The metric approach enables the use of higher speed sensors with fewer (or even a single) sensing element(s), enabling a higher control bandwidth. Furthermore, the SPGD algorithm is model-free, and thus is not strongly impacted by the presence of nonlinearities which degrade the performance of conventional phase reconstruction methods. Finally, for high energy laser applications, SPGD can be performed using the primary laser beam without the need for an additional beacon laser. The conventional SPGD algorithm was modified to use an adaptive gain to improve convergence while maintaining low steady state error. Results from laboratory experiments using phase plates as atmosphere surrogates will be presented, demonstrating areas in which the adaptive gain yields better performance and areas which require further investigation.
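
    A minimal SPGD loop in Python is given below; the metric, gain, and perturbation amplitude are illustrative, and the paper's adaptive-gain variant modifies the fixed gain used here.

    import numpy as np

    def spgd(metric, n_actuators, gain=0.5, perturb=0.1, iterations=1000):
        """Stochastic parallel gradient descent: perturb all actuators at
        once with random +/- steps and move along the estimated gradient
        of a single scalar metric (model-free, no wavefront reconstruction)."""
        u = np.zeros(n_actuators)
        for _ in range(iterations):
            du = perturb * np.random.choice([-1.0, 1.0], n_actuators)
            dj = metric(u + du) - metric(u - du)   # two-sided metric change
            u += gain * dj * du                    # climb the metric
        return u

    # Toy usage: the metric peaks when every actuator command reaches 1.0.
    best = spgd(lambda u: -np.sum((u - 1.0) ** 2), n_actuators=16)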

  13. FPGA implementation of the hyperspectral Lossy Compression for Exomars (LCE) algorithm

    NASA Astrophysics Data System (ADS)

    García, Aday; Santos, L.; López, S.; Callicó, G. M.; López, J. F.; Sarmiento, R.

    2014-10-01

    The increase of data rates and data volumes in present remote-sensing payload instruments, together with the restrictions imposed by downlink connection requirements, represents both a challenge and a must in the field of data and image compression. This is especially true for hyperspectral images, in which reduction of both spatial and spectral redundancy is mandatory. Recently the Consultative Committee for Space Data Systems (CCSDS) published the Lossless Multispectral and Hyperspectral Image Compression recommendation (CCSDS 123), a prediction-based technique resulting from the consensus of its members. Although this standard offers a good trade-off between coding performance and computational complexity, the appearance of future hyperspectral and ultraspectral sensors with vast amounts of data demands further efforts from the scientific community to ensure optimal transmission to ground stations based on greater compression rates. Furthermore, hardware implementations with specific features to deal with solar radiation problems play an important role in achieving real-time applications. In this scenario, the Lossy Compression for Exomars (LCE) algorithm emerges as a good candidate with these characteristics: its good quality/compression ratio together with its low complexity facilitates implementation on hardware platforms such as FPGAs or ASICs. In this work the authors present the implementation of the LCE algorithm on an antifuse-based FPGA and the optimizations carried out to obtain the RTL description code using CatapultC, a High-Level Synthesis (HLS) tool. Experimental results show an area occupancy of 75% in an RTAX2000 FPGA from Microsemi, with an operating frequency of 18 MHz. Additionally, the power budget obtained is presented, giving an idea of the suitability of the proposed algorithm implementation for onboard compression applications.

  14. Software for implementing trigger algorithms on the upgraded CMS Global Trigger System

    NASA Astrophysics Data System (ADS)

    Matsushita, Takashi; Arnold, Bernhard

    2015-12-01

    The Global Trigger is the final step of the CMS Level-1 Trigger and implements a trigger menu, a set of selection requirements applied to the final list of trigger objects. The conditions for trigger-object selection, with possible topological requirements on multi-object triggers, are combined by simple combinatorial logic to form the algorithms. The LHC resumed operation in 2015; the collision energy was increased to 13 TeV, with the luminosity expected to reach 2×10^34 cm^-2 s^-1. The CMS Level-1 trigger system will be upgraded to improve its performance in selecting interesting physics events and to operate within the predefined data-acquisition rate in the challenging environment expected at LHC Run 2. The Global Trigger will be re-implemented on modern FPGAs on an Advanced Mezzanine Card in a MicroTCA crate. The upgraded system will benefit from the ability to process complex algorithms with DSP slices and from increased processing resources with optical links running at 10 Gbit/s, enabling more algorithms at a time than previously possible and allowing CMS to be more flexible in how it handles the trigger bandwidth. To handle the increased complexity of the trigger menu implemented on the upgraded Global Trigger, a set of new software has been developed. The software allows a physicist to define a menu with analysis-like triggers using an intuitive user interface. The menu is then realised on FPGAs with further software processing, instantiating predefined firmware blocks. The design and implementation of the software for preparing a menu for the upgraded CMS Global Trigger system are presented.

  15. An implementation of the Expectation-Maximisation (EM) algorithm for population pharmacokinetic-pharmacodynamic modelling in ACSLXTREME.

    PubMed

    Yates, James W T

    2009-10-01

    An implementation of the Expectation-Maximisation (EM) algorithm in ACSLXTREME (AEGIS Technologies) for the analyses of population pharmacokinetic-pharmacodynamic (PKPD) data is demonstrated. The parameter estimation results are compared with those from NONMEM (Globomax) using the first order conditional estimate method. The estimates are comparable and it is concluded that the EM algorithm is a useful technique in population pharmacokinetic-pharmacodynamic modelling. The implementation also demonstrates the ease with which parameter estimation algorithms for population data can be implemented in simulation software packages.

  16. Implementation of Complex Signal Processing Algorithms for Position-Sensitive Microcalorimeters

    NASA Technical Reports Server (NTRS)

    Smith, Stephen J.

    2008-01-01

    We have recently reported on a theoretical digital signal-processing algorithm for improved energy and position resolution in position-sensitive transition-edge sensor (PoST) X-ray detectors [Smith et al., Nucl. Instr. and Meth. A 556 (2006) 237]. PoSTs consist of one or more transition-edge sensors (TESs) on a large continuous or pixellated X-ray absorber and are under development as an alternative to arrays of single-pixel TESs. PoSTs provide a means to increase the field of view with the fewest number of read-out channels. In this contribution we extend the theoretical correlated energy-position optimal filter (CEPOF) algorithm (originally developed for two-TES continuous-absorber PoSTs) to investigate its practical implementation on multi-pixel single-TES PoSTs, or Hydras. We use numerically simulated data for a nine-absorber device, including realistic detector noise, to demonstrate an iterative scheme that converges on the correct photon absorption position and energy without any a priori assumptions. The position sensitivity of the CEPOF implemented on simulated data agrees very well with the theoretically predicted resolution. We discuss practical issues such as the impact of the random arrival phase of the measured data on the performance of the CEPOF. The CEPOF algorithm demonstrates that a full-width-at-half-maximum energy resolution of < 8 eV, coupled with position sensitivity down to a few hundred eV, should be achievable for a fully optimized device.

  17. An implementation of differential evolution algorithm for inversion of geoelectrical data

    NASA Astrophysics Data System (ADS)

    Balkaya, Çağlayan

    2013-11-01

    Differential evolution (DE), a population-based evolutionary algorithm (EA), has been implemented to invert self-potential (SP) and vertical electrical sounding (VES) data sets. The algorithm uses three operators, namely mutation, crossover, and selection, similar to a genetic algorithm (GA). Mutation is the most important operator for the success of DE. Three commonly used mutation strategies, DE/best/1 (strategy 1), DE/rand/1 (strategy 2), and DE/rand-to-best/1 (strategy 3), were applied together with a binomial-type crossover. The evolution cycle of DE was realized without boundary constraints. For the test studies performed with SP data, in addition to noise-free and noisy synthetic data sets, two field data sets observed over the sulfide ore body in the Malachite mine (Colorado) and over the ore bodies in the Neem-Ka Thana copper belt (India) were considered. VES test studies were carried out using synthetically produced resistivity data representing a three-layered earth model and a field data set from Gökçeada (Turkey), which displays a seawater infiltration problem. The mutation strategies mentioned above were also extensively tested on both the synthetic and field data sets in consideration. Of these, strategy 1 was found to be the most effective strategy for parameter estimation, providing lower computational cost together with good accuracy. The solutions obtained by DE for the synthetic cases of SP were quite consistent with those of particle swarm optimization (PSO), a population-based optimization algorithm more widely used in geophysics than DE. Estimated parameters of the SP and VES data were also compared with those obtained from the Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing (SA) without cooling, to clarify uncertainties in the solutions. The comparison to the M-H algorithm shows that DE performs fast approximate posterior sampling for low-dimensional inverse geophysical problems.
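
    A compact Python sketch of DE with the favored DE/best/1 mutation and binomial crossover follows; the population size, F, and CR values are illustrative, and, matching the unconstrained evolution cycle described, no boundary handling is applied after initialization.

    import numpy as np

    def de_best_1(objective, bounds, pop_size=30, F=0.8, CR=0.9, gens=200):
        """Differential evolution, DE/best/1 mutation with binomial crossover."""
        dim = len(bounds)
        pop = np.random.uniform(bounds[:, 0], bounds[:, 1], (pop_size, dim))
        costs = np.array([objective(x) for x in pop])
        for _ in range(gens):
            best = pop[costs.argmin()]
            for i in range(pop_size):
                r1, r2 = np.random.choice(pop_size, 2, replace=False)
                mutant = best + F * (pop[r1] - pop[r2])     # DE/best/1
                cross = np.random.rand(dim) < CR            # binomial crossover
                cross[np.random.randint(dim)] = True        # force one gene
                trial = np.where(cross, mutant, pop[i])
                cost = objective(trial)
                if cost <= costs[i]:                        # greedy selection
                    pop[i], costs[i] = trial, cost
        return pop[costs.argmin()]

    # Example: minimize a 2-D sphere function on [-5, 5]^2.
    best = de_best_1(lambda x: (x ** 2).sum(), np.array([[-5.0, 5.0]] * 2))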

  18. Next Generation Aura-OMI SO2 Retrieval Algorithm: Introduction and Implementation Status

    NASA Technical Reports Server (NTRS)

    Li, Can; Joiner, Joanna; Krotkov, Nickolay A.; Bhartia, Pawan K.

    2014-01-01

    We introduce our next-generation algorithm to retrieve SO2 using radiance measurements from the Aura Ozone Monitoring Instrument (OMI). We employ a principal component analysis technique to analyze OMI radiance spectra in the 310.5-340 nm range acquired over regions with no significant SO2. The resulting principal components (PCs) capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering, and ozone absorption) and measurement artifacts, enabling us to account for these various interferences in SO2 retrievals. By fitting these PCs, along with SO2 Jacobians calculated with a radiative transfer model, to OMI-measured radiance spectra, we directly estimate the SO2 vertical column density in one step. Compared with the previous-generation operational OMSO2 PBL (Planetary Boundary Layer) SO2 product, our new algorithm greatly reduces unphysical biases and decreases the noise by a factor of two, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing long-term, consistent SO2 records for air quality and climate research. We have operationally implemented this new algorithm on OMI SIPS for producing the new-generation standard OMI SO2 products.

  19. The mGA1.0: A common LISP implementation of a messy genetic algorithm

    NASA Technical Reports Server (NTRS)

    Goldberg, David E.; Kerzic, Travis

    1990-01-01

    Genetic algorithms (GAs) are finding increased application in difficult search, optimization, and machine learning problems in science and engineering. Increasing demands are being placed on algorithm performance, and the remaining challenges of genetic algorithm theory and practice are becoming increasingly unavoidable. Perhaps the most difficult of these challenges is the so-called linkage problem. Messy GAs were created to overcome the linkage problem of simple genetic algorithms by combining variable-length strings, gene expression, messy operators, and a nonhomogeneous phasing of evolutionary processing. Results on a number of difficult deceptive test functions are encouraging, with the mGA always finding global optima in a polynomial number of function evaluations. Theoretical and empirical studies are continuing, and a first version of a messy GA is ready for testing by others. A Common LISP implementation called mGA1.0 is documented and related to the basic principles and operators developed by Goldberg et al. (1989, 1990). Although the code was prepared with care, it is not a general-purpose code, only a research version. Important data structures and global variables are described. Thereafter, brief function descriptions are given, and sample input data are presented together with sample program output. A source listing with comments is also included.

  20. Fast Detection Anti-Collision Algorithm for RFID System Implemented On-Chip

    NASA Astrophysics Data System (ADS)

    Sampe, Jahariah; Othman, Masuri

    This study presents a proposed Fast Detection Anti-Collision Algorithm (FDACA) for Radio Frequency Identification (RFID) systems. The proposed FDACA is implemented on-chip using Application Specific Integrated Circuit (ASIC) technology, and the algorithm is based on the deterministic anti-collision technique. The FDACA is novel in providing faster identification by reducing the number of iterations during the identification process. The primary FDACA also reads the identification (ID) bits at once regardless of their length, and it does not require the tags to remember instructions from the reader during the communication process; the tags are treated as address-carrying devices only. As a result, simple, small, low-cost, and memoryless tags can be produced. The proposed system is designed using Verilog HDL, simulated using ModelSim XE II, and synthesized using Xilinx Synthesis Technology (XST). The system is implemented in hardware on a Field Programmable Gate Array (FPGA) board for real-time verification. The verification results show that the FDACA system identifies the tags without error up to an operating frequency of 180 MHz. Finally, the FDACA system is implemented on-chip using a 0.18 μm library and Synopsys compiler tools. The resynthesis results show that the identification rate of the proposed FDACA system is 333 mega-tags per second with a power requirement of 3.451 mW.

  1. The 2016-2100 total solar eclipse prediction by using Meeus Algorithm implemented on MATLAB

    NASA Astrophysics Data System (ADS)

    Melati, A.; Hodijah, S.

    2016-11-01

    The phenomenon of solar and lunar eclipses can be predicted as to where and when it will happen. The Total Solar Eclipse (TSE) of March 9th, 2016 revived astronomy in Indonesia and provided public astronomy education. This research aims to predict total solar eclipse phenomena from 2016 until 2100. We used Besselian-element calculations and the Meeus algorithm implemented in MATLAB R2012b, combined with the VSOP87 and ELP2000-82 theories. As an example of the simulation, the TSE prediction for April 20th, 2042 differs by 0.2 seconds in duration from the NASA prediction. For the whole set of TSEs from 2016 until 2100 we found differences of 0.04-0.21 seconds compared with the NASA predictions.

  2. Signal processing algorithms implementing the "smart sensor" concept to improve continuous glucose monitoring in diabetes.

    PubMed

    Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2013-09-01

    Glucose readings provided by current continuous glucose monitoring (CGM) devices still suffer from accuracy and precision issues. In April 2013, we proposed a new conceptual architecture to deal with these problems and render CGM sensors algorithmically smarter; it consists of three modules, for denoising, enhancement, and prediction, placed in cascade with a commercial CGM sensor. The architecture was assessed on a data set collected from 24 type 1 diabetes patients in four clinical centers of the AP@home Consortium (a European project of the 7th Framework Programme funded by the European Commission). This article, as a companion to our prior publication, illustrates the technical details of the algorithms and of the implementation issues.

  3. Algorithm Summary and Evaluation: Automatic Implementation of Ringdown Analysis for Electromechanical Mode Identification from Phasor Measurements

    SciTech Connect

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.; Jin, Shuangshuang; Lin, Jenglung; Hauer, Matthew L.

    2010-02-28

    Small-signal stability problems are one of the major threats to grid stability and reliability. Prony analysis has been successfully applied to ringdown data to monitor electromechanical modes of a power system using phasor measurement unit (PMU) data. To facilitate on-line application of mode estimation, this paper develops a recursive algorithm for implementing Prony analysis and proposes an oscillation detection method to detect ringdown data in real time. By automatically detecting ringdown data, the proposed method helps guarantee that Prony analysis is applied properly and in a timely manner to the ringdown data, so that mode estimation can be performed reliably and promptly. The proposed method is tested using Monte Carlo simulations based on a 17-machine model and is shown to properly identify the oscillation data for on-line application of Prony analysis. In addition, the proposed method is applied to field measurement data from WECC to show the performance of the proposed algorithm.
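
    The core of ringdown Prony analysis can be sketched in a few lines: fit a linear-prediction model to the ringdown samples, then convert the roots of the prediction polynomial into damping and frequency estimates. This batch Python sketch omits the paper's recursive formulation and oscillation detection; the model order is an illustrative input.

    import numpy as np

    def prony_modes(y, order, dt):
        """Prony analysis of ringdown data: linear prediction followed by
        root extraction gives the electromechanical mode estimates."""
        y = np.asarray(y, dtype=float)
        N = len(y)
        # Linear prediction: y[n] = a1*y[n-1] + ... + ap*y[n-p]
        A = np.column_stack([y[order - k - 1:N - k - 1] for k in range(order)])
        a, *_ = np.linalg.lstsq(A, y[order:], rcond=None)
        roots = np.roots(np.concatenate(([1.0], -a)))
        s = np.log(roots) / dt                # continuous-time eigenvalues
        return s.real, s.imag / (2 * np.pi)   # damping [1/s], frequency [Hz]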

  4. Corticostriatal circuit mechanisms of value-based action selection: Implementation of reinforcement learning algorithms and beyond.

    PubMed

    Morita, Kenji; Jitsev, Jenia; Morrison, Abigail

    2016-09-15

    Value-based action selection has been suggested to be realized in the corticostriatal local circuits through competition among neural populations. In this article, we review theoretical and experimental studies that have constructed and verified this notion, and provide new perspectives on how the local-circuit selection mechanisms implement reinforcement learning (RL) algorithms and computations beyond them. The striatal neurons are mostly inhibitory, and lateral inhibition among them has been classically proposed to realize "Winner-Take-All (WTA)" selection of the maximum-valued action (i.e., 'max' operation). Although this view has been challenged by the revealed weakness, sparseness, and asymmetry of lateral inhibition, which suggest more complex dynamics, WTA-like competition could still occur on short time scales. Unlike the striatal circuit, the cortical circuit contains recurrent excitation, which may enable retention or temporal integration of information and probabilistic "soft-max" selection. The striatal "max" circuit and the cortical "soft-max" circuit might co-implement an RL algorithm called Q-learning; the cortical circuit might also similarly serve for other algorithms such as SARSA. In these implementations, the cortical circuit presumably sustains activity representing the executed action, which negatively impacts dopamine neurons so that they can calculate reward-prediction-error. Regarding the suggested more complex dynamics of striatal, as well as cortical, circuits on long time scales, which could be viewed as a sequence of short WTA fragments, computational roles remain open: such a sequence might represent (1) sequential state-action-state transitions, constituting replay or simulation of the internal model, (2) a single state/action by the whole trajectory, or (3) probabilistic sampling of state/action.

  5. Quality Screening Algorithms Implemented in the New CALIPSO Level 3 Aerosol Profile Product

    NASA Astrophysics Data System (ADS)

    Tackett, J. L.; Winker, D. M.; Getzewich, B. J.; Vaughan, M.

    2012-12-01

    Global observations of aerosol extinction profiles can improve the ability of climate models to properly account for aerosol radiative forcing in Earth's atmosphere. In response to this need, a new CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations) level 3 aerosol profile product has been released which, for the first time, provides monthly, globally gridded, and quality-screened aerosol extinction profiles within the troposphere for the entire 6-year mission. Level 3 aerosol extinction profiles are aggregated from CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) lidar extinction retrievals reported in the CALIPSO level 2 aerosol profile product onto an equal-angle grid, after quality screening algorithms are applied to reduce occurrences of failed retrievals, misclassified aerosol, surface contamination, and spurious outliers. The implementation of these quality screening algorithms is of substantial value to aerosol modeling groups who desire high-confidence datasets without having to independently develop quality screening metrics. Furthermore, quality screening is paramount to understanding the scientific content of the resultant CALIPSO level 3 aerosol profile product, since classification and retrieval errors in level 2 aerosol data may lead to misinterpretation of the distribution and optical properties of aerosol in the troposphere. This presentation summarizes the averaging and quality screening algorithms implemented in the CALIPSO level 3 aerosol profile product, provides the rationale for their implementation, and discusses averaging and filtering differences unique to CALIPSO data compared to level 3 products aggregated from passive satellite measurements. Examples are given that illustrate the benefits of quality screening and the dangers of improperly screening CALIPSO level 2 aerosol extinction data. Sensitivity study results are presented to highlight the impact of quality screening on final level 3 statistics. Since overlying cloud

  6. Implementation of a new iterative learning control algorithm on real data

    NASA Astrophysics Data System (ADS)

    Zamanian, Hamed; Koohi, Ardavan

    2016-02-01

    In this paper, a new approach is presented for closed-loop automatic tuning of a proportional-integral-derivative (PID) controller based on an iterative learning control (ILC) algorithm. A modified ILC scheme iteratively adjusts the control signal. Once satisfactory performance is achieved, a linear compensator is identified from the ILC behavior using the causal relationship between the closed-loop signals. This compensator is approximated by a PD controller, which is then used to tune the original PID controller. Results of implementing this approach on experimental data from the Damavand tokamak are presented and are consistent with simulation outcomes.
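
    The ILC idea is compact: after each repetition, shift the stored control signal by a scaled copy of the tracking error. The Python sketch below uses a hypothetical first-order plant and an illustrative learning gain; the paper's modified scheme and PD-compensator identification step are not reproduced.

    import numpy as np

    def simulate(u):
        """Hypothetical plant for illustration: y[k] = 0.9 y[k-1] + 0.1 u[k-1]."""
        y = np.zeros(len(u))
        for k in range(1, len(u)):
            y[k] = 0.9 * y[k - 1] + 0.1 * u[k - 1]
        return y

    # P-type ILC update u_{j+1}(k) = u_j(k) + L * e_j(k+1); the one-sample
    # shift matches the plant delay, and L is an illustrative learning gain.
    ref = np.ones(100)
    u = np.zeros(100)
    L = 4.0
    for trial in range(20):
        e = ref - simulate(u)
        u[:-1] += L * e[1:]                       # shifted-error update
    print(np.abs(ref[1:] - simulate(u)[1:]).max())  # error shrinks over trials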

  7. Implementation of the U.S. Environmental Protection Agency's Waste Reduction (WAR) Algorithm in Cape-Open Based Process Simulators

    EPA Science Inventory

    The Sustainable Technology Division has recently completed an implementation of the U.S. EPA's Waste Reduction (WAR) Algorithm that can be directly accessed from a Cape-Open compliant process modeling environment. The WAR Algorithm add-in can be used in AmsterChem's COFE (Cape-Op...

  8. Implementation of Karp-Rabin string matching algorithm in reconfigurable hardware for network intrusion prevention system

    NASA Astrophysics Data System (ADS)

    Botwicz, Jakub; Buciak, Piotr; Sapiecha, Piotr

    2006-03-01

    Intrusion Prevention Systems (IPSs) have become widely recognized as a powerful tool and an important element of IT security safeguards. The essential feature of network IPSs is searching through network packets and matching multiple strings that are fingerprints of known attacks. String matching is highly resource-consuming and also the most significant bottleneck of IPSs. In this article an extension of the classical Karp-Rabin algorithm and its implementation architectures are examined. The result is software that generates the source code of a string-matching module in a hardware description language, which can easily be used to create an Intrusion Prevention System implemented in reconfigurable hardware. The prepared module matches the complete set of Snort IPS signatures, achieving a throughput of over 2 Gbps on an Altera Stratix II evaluation board. The most significant advantage of the proposed architecture is that updating the pattern database does not require reconfiguration of the circuitry.
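
    The software reference point for such a hardware module is the classical Karp-Rabin rolling hash. The Python sketch below matches a set of equal-length patterns with a single rolling hash (signatures of different lengths would each get their own hash), verifying on hash hits to rule out collisions; the base and modulus are illustrative.

    def karp_rabin(text, patterns, base=256, mod=(1 << 61) - 1):
        """Karp-Rabin multi-pattern search for equal-length patterns."""
        m = len(patterns[0])
        assert all(len(p) == m for p in patterns), "group patterns by length"
        def h(s):
            value = 0
            for ch in s:
                value = (value * base + ord(ch)) % mod
            return value
        targets = {h(p) for p in patterns}
        if len(text) < m:
            return []
        lead = pow(base, m - 1, mod)          # weight of the outgoing char
        window = h(text[:m])
        hits = []
        for i in range(len(text) - m + 1):
            if window in targets and text[i:i + m] in patterns:
                hits.append(i)                # verified match at offset i
            if i + m < len(text):             # roll the hash one character
                window = ((window - ord(text[i]) * lead) * base
                          + ord(text[i + m])) % mod
        return hits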

  9. Theory and implementation of a fast algorithm linear equalizer. [for multiplication-free data detection

    NASA Technical Reports Server (NTRS)

    Yan, T. Y.; Yao, K.

    1981-01-01

    The theory and implementation of a multiplication-free linear mean-square-error-criterion equalizer for data transmission are considered. For many real-time signal processing situations, a large number of multiplications is objectionable. The linear estimation problem is considered on a binary computer where the estimation parameters are constrained to be powers of two, so that all multiplications are replaced by shifts. The optimal solution is obtained from an integer-programming-like problem, except that the allowable discrete points are non-integers. The branch-and-bound algorithm is used to obtain the coefficients of the equalization tapped delay line (TDL). Specific experimental performance results are given for an equalizer implemented with a 12 bit A/D device and an 8080 microprocessor.

  10. Implementation of QR-decomposition based on CORDIC for unitary MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    Lounici, Merwan; Luan, Xiaoming; Saadi, Wahab

    2013-07-01

    The DOA (Direction Of Arrival) estimation with subspace methods such as MUSIC (MUltiple SIgnal Classification) and ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) is based on an accurate estimation of the eigenvalues and eigenvectors of the covariance matrix. Here, QR decomposition is implemented with the COordinate Rotation DIgital Computer (CORDIC) algorithm. QRD requires only additions and shifts [6], so it is faster and more regular than other methods. In this article the hardware architecture of an EVD (EigenValue Decomposition) processor based on a TSA (triangular systolic array) for QR decomposition is proposed. Using Xilinx System Generator (XSG), the design is implemented and the estimated logic device resource values are presented for different matrix sizes.
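
    The building block is CORDIC in vectoring mode, which rotates a vector onto the x-axis using only shifts and adds, yielding the Givens rotations that a triangular systolic array applies during QRD. Below is a floating-point Python sketch (hardware would use fixed-point shifts; the iteration count is illustrative, and x > 0 is assumed):

    import numpy as np

    def cordic_vectoring(x, y, iterations=24):
        """CORDIC vectoring mode: drive y to zero with micro-rotations,
        returning the vector magnitude and the accumulated angle."""
        K = np.prod([1.0 / np.sqrt(1 + 2.0 ** (-2 * i))
                     for i in range(iterations)])      # gain compensation
        angle = 0.0
        for i in range(iterations):
            d = -1.0 if y > 0 else 1.0                 # rotate y toward zero
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            angle -= d * np.arctan(2.0 ** -i)
        return K * x, angle                            # magnitude, angle

    print(cordic_vectoring(3.0, 4.0))   # approximately (5.0, 0.9273)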

  11. Implementation of a Point Algorithm for Real-Time Convex Optimization

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Motaghedi, Shui; Carson, John

    2007-01-01

    The primal-dual interior-point algorithm implemented in G-OPT is a relatively new and efficient way of solving convex optimization problems. Given a prescribed level of accuracy, the convergence to the optimal solution is guaranteed in a predetermined, finite number of iterations. G-OPT Version 1.0 is a flight software implementation written in C. Onboard application of the software enables autonomous, real-time guidance and control that explicitly incorporates mission constraints such as control authority (e.g. maximum thrust limits), hazard avoidance, and fuel limitations. This software can be used in planetary landing missions (Mars pinpoint landing and lunar landing), as well as in proximity operations around small celestial bodies (moons, asteroids, and comets). It also can be used in any spacecraft mission for thrust allocation in six-degrees-of-freedom control.

  12. Implementation of a Multichannel Serial Data Streaming Algorithm using the Xilinx Serial RapidIO Solution

    NASA Technical Reports Server (NTRS)

    Doxley, Charles A.

    2016-01-01

    In the current world of applications that use reconfigurable technology implemented on field programmable gate arrays (FPGAs), there is a need for flexible architectures that can grow as the systems evolve. A project has limited resources and a fixed set of requirements that development efforts are tasked to meet. Designers must develop robust solutions that practically meet the current customer demands and also have the ability to grow for future performance. This paper describes the development of a high speed serial data streaming algorithm that allows for transmission of multiple data channels over a single serial link. The technique has the ability to change to meet new applications developed for future design considerations. This approach uses the Xilinx Serial RapidIO LogiCORE Solution to implement a flexible infrastructure to meet the current project requirements with the ability to adapt future system designs.

  13. From algorithm to implementation: a case study on blind carrier synchronization

    NASA Astrophysics Data System (ADS)

    Schmidt, D.; Brack, T.; Wasenmüller, U.; Wehn, N.

    2006-09-01

    Increasing chip complexities demand higher design productivity. IP cores, which implement commonly needed operations, can help to dramatically shorten development and verification times for new designs. They often allow for an efficient mapping of algorithmic tasks to a hardware architecture. In this paper we present a novel configurable building block for blind carrier synchronization that features combined frequency and phase offset estimation and an alternative modulation removal that improves communication performance compared to state-of-the-art designs. The design flow used exploits the benefits of IP cores for rapid development while still offering the designer the full range of optimization possibilities for a specific design. It allowed us to perform an almost complete design space exploration, assuring a near-optimal solution to the given problem. The implementation platform is a Xilinx Virtex II Pro FPGA.

  14. Multi-GPU implementation of a VMAT treatment plan optimization algorithm

    SciTech Connect

    Tian, Zhen; Folkerts, Michael; Tan, Jun; Jia, Xun (E-mail: Xun.Jia@UTSouthwestern.edu); Jiang, Steve B.; Peng, Fei

    2015-06-15

    Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, the GPU's relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or a small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present the detailed techniques employed for the GPU implementation. The authors also would like to utilize this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list format (COO). On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of the beamlet price, the first step in the PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and the MP are implemented on the CPU or a single GPU due to their modest problem scale and computational load. The Barzilai-Borwein algorithm with a subspace step scheme is adopted here to solve the MP. A head and neck (H and N) cancer case is
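
    A hedged sketch of just the data-layout step described in the Methods, splitting a host-side COO matrix into per-GPU CSR submatrices keyed by beam angle, might look like this in SciPy. The angle-to-GPU assignment rule and all sizes are assumptions, and the peer-to-peer GPU transfers are outside this sketch's scope.

```python
import numpy as np
from scipy import sparse

def split_ddc_by_angle(ddc_coo, beam_angle_of_col, n_gpus=4):
    """Return one CSR submatrix per GPU, partitioning beamlet columns by angle."""
    gpu_of_col = (np.asarray(beam_angle_of_col) * n_gpus / 360.0).astype(int) % n_gpus
    csc = ddc_coo.tocsc()                        # column slicing is cheap in CSC
    return [csc[:, np.flatnonzero(gpu_of_col == g)].tocsr() for g in range(n_gpus)]

# Tiny demo: 6 voxels x 8 beamlets with nominal angles spread over a full arc.
ddc = sparse.random(6, 8, density=0.4, format="coo", random_state=0)
angles = np.linspace(0.0, 350.0, 8)              # one assumed angle per beamlet
for g, piece in enumerate(split_ddc_by_angle(ddc, angles)):
    print(f"GPU {g}: shape={piece.shape}, nnz={piece.nnz}")
```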

  15. Pre-Hardware Optimization and Implementation Of Fast Optics Closed Control Loop Algorithms

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Lyon, Richard G.; Herman, Jay R.; Abuhassan, Nader

    2004-01-01

    One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high performance digital equivalent, the Fast Fourier Transform (FFT). The FFT is particularly useful in two-dimensional (2-D) image processing (FFT2) within optical systems control. However, the timing constraints of a fast optics closed control loop would require a supercomputer to run the software implementation of the FFT2 and its inverse, as well as other representative image processing algorithms, such as numerical image folding and fringe feature extraction. A laboratory supercomputer is not always available even for ground operations and is not feasible for a flight project. However, the computationally intensive algorithms still warrant alternative implementation using reconfigurable computing (RC) technologies such as Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs), which provide low cost compact super-computing capabilities. We present a new RC hardware implementation and utilization architecture that significantly reduces the computational complexity of a few basic image-processing algorithms, such as FFT2, image folding and phase diversity, for the NASA Solar Viewing Interferometer Prototype (SVIP) using a cluster of DSPs and FPGAs. The DSP cluster utilization architecture also assures avoidance of a single point of failure, while using commercially available hardware. This, combined with pre-hardware optimization of the control algorithms, for the first time allows construction of image-based 800 Hertz (Hz) optics closed control loops on-board a spacecraft, based on the SVIP ground instrument. That spacecraft is the proposed Earth Atmosphere Solar Occultation Imager (EASI) to study the greenhouse gases CO2, C2H, H2O, O3, O2, N2O from the Lagrange-2 point in space. This paper provides an advanced insight into a new type of science capabilities for future space exploration missions based on on-board image processing

  16. Implementation of the FDK algorithm for cone-beam CT on the cell broadband engine architecture

    NASA Astrophysics Data System (ADS)

    Scherl, Holger; Koerner, Mario; Hofmann, Hannes; Eckert, Wieland; Kowarschik, Markus; Hornegger, Joachim

    2007-03-01

    In most of today's commercially available cone-beam CT scanners, the well-known FDK method is used for solving the 3D reconstruction task. The computational complexity of this algorithm prohibits its use for many medical applications without hardware acceleration. The brand-new Cell Broadband Engine Architecture (CBEA), with its high level of parallelism, is a cost-efficient processor for performing the FDK reconstruction according to the medical requirements. The programming scheme, however, is quite different from any standard personal computer hardware. In this paper, we present an innovative implementation of the most time-consuming parts of the FDK algorithm: filtering and back-projection. We also explain the transformations required to parallelize the algorithm for the CBEA. Our software framework allows the filtering and back-projection to be computed in parallel, making on-the-fly reconstruction possible. The achieved results demonstrate that a complete FDK reconstruction is computed with the CBEA in less than seven seconds for a standard clinical scenario. Given the fact that scan times are usually much higher, we conclude that the reconstruction is finished right after the end of data acquisition. This enables us to present the reconstructed volume to the physician in real time, immediately after the last projection image has been acquired by the scanning device.

  17. Implementation and Initial Testing of Advanced Processing and Analysis Algorithms for Correlated Neutron Counting

    SciTech Connect

    Santi, Peter Angelo; Cutler, Theresa Elizabeth; Favalli, Andrea; Koehler, Katrina Elizabeth; Henzl, Vladimir; Henzlova, Daniela; Parker, Robert Francis; Croft, Stephen

    2015-12-01

    In order to improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed to address the assumptions that are present in the point model. Extracting and utilizing higher order moments (Quads and Pents) from the neutron pulse train represents the most direct way of extracting additional information from the measurement data to allow for an improved determination of the physical properties of the item of interest. The extraction of higher order moments from a neutron pulse train required the development of advanced dead time correction algorithms which can correct for dead time effects in all of the measurement moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions that are made within the current analysis model, namely that all neutrons are created at a single point within the item of interest, and that all neutrons produced within an item are created with the same energy distribution. This report discusses the current status of the implementation and initial testing of the advanced dead time correction and analysis algorithms that have been developed in an attempt to utilize higher order moments to improve the capabilities of correlated neutron measurement techniques.

  18. Implementing Quantum Algorithms with Modular Gates in a Trapped Ion Chain

    NASA Astrophysics Data System (ADS)

    Figgatt, Caroline; Debnath, Shantanu; Linke, Norbert; Landsman, Kevin; Wright, Ken; Monroe, Chris

    2016-05-01

    We present experimental results on quantum algorithms performed using fully modular one- and two-qubit gates in a linear chain of 5 Yb+ ions. This is accomplished through arbitrary qubit addressing and manipulation from stimulated Raman transitions driven by a beat note between counter-propagating beams from a pulsed laser. The Raman beam pairs consist of one global beam and a set of counter-propagating individual addressing beams, one for each ion. This provides arbitrary single-qubit rotations as well as arbitrary selection of ion pairs for a fully-connected system of two-qubit modular XX-entangling gates implemented using a pulse-segmentation scheme. We execute controlled-NOT gates with an average fidelity of 97.0% for all 10 possible pairs. Programming arbitrary sequences of gates allows us to construct any quantum algorithm, making this system a universal quantum computer. As an example, we present experimental results for the Bernstein-Vazirani algorithm using 4 control qubits and 1 ancilla, performed with concatenated gates that can be reconfigured to construct all 16 possible oracles, and obtain a process fidelity of 90.3%. This work is supported by the ARO with funding from the IARPA MQCO program and the AFOSR MURI on Quantum Measurement and Verification.

  19. Pre-Hardware Optimization of Spacecraft Image Processing Software Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Petrick, David J.; Day, John H. (Technical Monitor)

    2001-01-01

    Spacecraft telemetry rates have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image processing application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms and re-configurable computing hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processing (DSP). It has been shown in [1] and [2] that this configuration can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft. However, since this technology is still maturing, intensive pre-hardware steps are necessary to achieve the benefits of hardware implementation. This paper describes these steps for the GOES-8 application, a software project developed using Interactive Data Language (IDL) (Trademark of Research Systems, Inc.) on a Workstation/UNIX platform. The solution involves converting the application to a PC/Windows/RC platform, selected mainly by the availability of low cost, adaptable high-speed RC hardware. In order for the hybrid system to run, the IDL software was modified to account for platform differences. It was interesting to examine the gains and losses in performance on the new platform, as well as unexpected observations before implementing hardware. After substantial pre-hardware optimization steps, the necessity of hardware implementation for bottleneck code in the PC environment became evident and solvable beginning with the methodology described in [1], [2], and implementing a novel methodology for this specific application [6]. The PC-RC interface bandwidth problem for the

  20. A time-efficient algorithm for implementing the Catmull-Clark subdivision method

    NASA Astrophysics Data System (ADS)

    Ioannou, G.; Savva, A.; Stylianou, V.

    2015-10-01

    Splines are the most popular methods in figure modeling and CAGD (Computer Aided Geometric Design) for generating smooth surfaces from a number of control points. The control points define the shape of a figure, and splines calculate the required number of points which, when displayed on a computer screen, produce a smooth surface. However, spline methods are based on a rectangular topological structure of points, i.e., a two-dimensional table of vertices, and thus cannot generate complex figures, such as human and animal bodies, whose complex structure cannot be defined by a regular rectangular grid. Surface subdivision methods, on the other hand, which are derived from splines, generate surfaces defined by an arbitrary topology of control points. This is the reason that during the last fifteen years subdivision methods have taken the lead over regular spline methods in all areas of modeling, in both industry and research. The cost of software that reads control points and calculates the surface lies in its run-time, because the surface structure required for handling arbitrary topological grids is very complicated. Many software programs related to the implementation of subdivision surfaces have been developed; however, not many algorithms are documented in the literature to support developers in writing efficient code. This paper aims to assist programmers by presenting a time-efficient algorithm for implementing subdivision splines. Catmull-Clark, the most popular of the subdivision methods, has been employed to illustrate the algorithm.
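
    As a companion to the abstract, here is a compact, clarity-first (deliberately not time-efficient) Python sketch of one Catmull-Clark step on a closed quad mesh: face points, edge points, original vertices repositioned with (F + 2R + (n-3)P)/n, and the four-way quad split. Boundary handling is omitted, and the data structure is not the paper's optimized one.

```python
import numpy as np
from collections import defaultdict

def catmull_clark(verts: np.ndarray, faces: list[list[int]]):
    """One subdivision step for a closed quad mesh (no boundary edges)."""
    V = len(verts)
    face_pts = np.array([verts[f].mean(axis=0) for f in faces])

    # Each undirected edge of a closed mesh is shared by exactly two faces.
    edge_faces = defaultdict(list)
    for fi, f in enumerate(faces):
        for a, b in zip(f, f[1:] + f[:1]):
            edge_faces[frozenset((a, b))].append(fi)

    # Edge point: average of the two endpoints and the two adjacent face points.
    edge_pts, edge_idx = [], {}
    for e, fs in edge_faces.items():
        a, b = tuple(e)
        edge_idx[e] = V + len(faces) + len(edge_pts)
        edge_pts.append((verts[a] + verts[b] + face_pts[fs[0]] + face_pts[fs[1]]) / 4.0)

    # Original vertex moves to (F + 2R + (n - 3) P) / n, where n is its valence.
    v_faces, v_edges = defaultdict(list), defaultdict(set)
    for fi, f in enumerate(faces):
        for a, b in zip(f, f[1:] + f[:1]):
            v_faces[a].append(fi)
            v_edges[a].add(frozenset((a, b)))
            v_edges[b].add(frozenset((a, b)))
    new_verts = verts.astype(float).copy()
    for v in range(V):
        n = len(v_faces[v])
        F = face_pts[v_faces[v]].mean(axis=0)
        R = np.mean([(verts[min(e)] + verts[max(e)]) / 2.0 for e in v_edges[v]], axis=0)
        new_verts[v] = (F + 2.0 * R + (n - 3.0) * verts[v]) / n

    # Every quad splits into four quads around its face point.
    all_pts = np.vstack([new_verts, face_pts, np.array(edge_pts)])
    new_faces = []
    for fi, f in enumerate(faces):
        fp = V + fi
        for i, v in enumerate(f):
            e_prev = edge_idx[frozenset((f[i - 1], v))]
            e_next = edge_idx[frozenset((v, f[(i + 1) % len(f)]))]
            new_faces.append([v, e_next, fp, e_prev])
    return all_pts, new_faces

# Demo: one step on a unit cube yields 8 + 6 + 12 = 26 points and 24 quads.
cube_v = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
cube_f = [[0, 1, 3, 2], [4, 6, 7, 5], [0, 4, 5, 1], [2, 3, 7, 6], [0, 2, 6, 4], [1, 5, 7, 3]]
pts, quads = catmull_clark(cube_v, cube_f)
print(pts.shape, len(quads))   # (26, 3) 24
```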

  1. GPU implementation of target and anomaly detection algorithms for remotely sensed hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Paz, Abel; Plaza, Antonio

    2010-08-01

    Automatic target and anomaly detection are considered very important tasks for hyperspectral data exploitation. These techniques are now routinely applied in many application domains, including defence and intelligence, public safety, precision agriculture, geology, or forestry. Many of these applications require timely responses for swift decisions which depend upon high computing performance of algorithm analysis. However, with the recent explosion in the amount and dimensionality of hyperspectral imagery, this problem calls for the incorporation of parallel computing techniques. In the past, clusters of computers have offered an attractive solution for fast anomaly and target detection in hyperspectral data sets already transmitted to Earth. However, these systems are expensive and difficult to adapt to on-board data processing scenarios, in which low-weight and low-power integrated components are essential to reduce mission payload and obtain analysis results in (near) real-time, i.e., at the same time as the data is collected by the sensor. An exciting new development in the field of commodity computing is the emergence of commodity graphics processing units (GPUs), which can now bridge the gap towards on-board processing of remotely sensed hyperspectral data. In this paper, we describe several new GPU-based implementations of target and anomaly detection algorithms for hyperspectral data exploitation. The parallel algorithms are implemented on latest-generation Tesla C1060 GPU architectures, and quantitatively evaluated using hyperspectral data collected by NASA's AVIRIS system over the World Trade Center (WTC) in New York, five days after the terrorist attacks that collapsed the two main towers in the WTC complex.

  2. Parallel implementation of the multiple endmember spectral mixture analysis algorithm for hyperspectral unmixing

    NASA Astrophysics Data System (ADS)

    Bernabe, Sergio; Igual, Francisco D.; Botella, Guillermo; Prieto-Matias, Manuel; Plaza, Antonio

    2015-10-01

    In the last decade, the issue of endmember variability has received considerable attention, particularly when each pixel is modeled as a linear combination of endmembers or pure materials. As a result, several models and algorithms have been developed to consider the effect of endmember variability in spectral unmixing and possibly include multiple endmembers in the spectral unmixing stage. One of the most popular approaches for this purpose is the multiple endmember spectral mixture analysis (MESMA) algorithm. The procedure executed by MESMA can be summarized as follows: (i) first, a standard linear spectral unmixing (LSU) or fully constrained linear spectral unmixing (FCLSU) algorithm is run in an iterative fashion; (ii) then, different endmember combinations, randomly selected from a spectral library, are used to decompose each mixed pixel; (iii) finally, the model with the best fit, i.e., with the lowest root mean square error (RMSE) in the reconstruction of the original pixel, is adopted. However, this procedure can be computationally very expensive, because several endmember combinations need to be tested and several abundance estimation steps need to be conducted, a fact that compromises the use of MESMA in applications under real-time constraints. In this paper we develop (for the first time in the literature) an efficient implementation of MESMA on different platforms using OpenCL, an open standard for parallel programming on heterogeneous systems. Our experiments have been conducted using a simulated data set and the clMAGMA mathematical library. Implementations of this kind, using the same descriptive language on different architectures, are very important in order to assess the feasibility of using heterogeneous platforms for efficient hyperspectral image processing in real remote sensing missions.
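
    Per pixel, the three-stage procedure above reduces to a search over endmember combinations scored by reconstruction RMSE. A minimal sketch follows, with unconstrained least squares standing in for the LSU/FCLSU step and a synthetic library; it is not the paper's OpenCL code.

```python
import itertools
import numpy as np

def mesma_pixel(pixel, library, k=2):
    """Return (combo, abundances, rmse) of the best k-endmember model."""
    best = (None, None, np.inf)
    for combo in itertools.combinations(range(library.shape[1]), k):
        E = library[:, combo]                       # bands x k endmember matrix
        a, *_ = np.linalg.lstsq(E, pixel, rcond=None)
        rmse = np.sqrt(np.mean((E @ a - pixel) ** 2))
        if rmse < best[2]:
            best = (combo, a, rmse)
    return best

rng = np.random.default_rng(1)
lib = rng.random((50, 6))                           # 50 bands, 6 library spectra
mixed = 0.7 * lib[:, 2] + 0.3 * lib[:, 4]           # synthetic mixed pixel
combo, ab, err = mesma_pixel(mixed, lib)
print(combo, np.round(ab, 3), round(err, 6))        # (2, 4) [0.7 0.3] 0.0
```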

  3. Novel algorithm implementations in DARC: the Durham AO real-time controller

    NASA Astrophysics Data System (ADS)

    Basden, Alastair; Bitenc, Urban; Jenkins, David

    2016-07-01

    The Durham AO Real-time Controller has been used on-sky with the CANARY AO demonstrator instrument since 2010, and is also used to provide control for several AO test-benches, including DRAGON. Over this period, many new real-time algorithms have been developed, implemented and demonstrated, leading to performance improvements for CANARY. Additionally, the computational performance of this real-time system has continued to improve. Here, we provide details about recent updates and changes made to DARC, and the relevance of these updates, including new algorithms, to forthcoming AO systems. We present the computational performance of DARC when used on different hardware platforms, including hardware accelerators, and determine the relevance and potential for ELT scale systems. Recent updates to DARC have included algorithms to handle elongated laser guide star images, including correlation wavefront sensing, with options to automatically update references during AO loop operation. Additionally, sub-aperture masking options have been developed to increase signal to noise ratio when operating with non-symmetrical wavefront sensor images. The development of end-user tools has progressed with new options for configuration and control of the system. New wavefront sensor camera models and DM models have been integrated with the system, increasing the number of possible hardware configurations available, and a fully open-source AO system is now a reality, including drivers necessary for commercial cameras and DMs. The computational performance of DARC makes it suitable for ELT scale systems when implemented on suitable hardware. We present tests made on different hardware platforms, along with the strategies taken to optimise DARC for these systems.

  4. Addressing methodological challenges in implementing the nursing home pain management algorithm randomized controlled trial

    PubMed Central

    Ersek, Mary; Polissar, Nayak; Du Pen, Anna; Jablonski, Anita; Herr, Keela; Neradilek, Moni B

    2015-01-01

    Background Unrelieved pain among nursing home (NH) residents is a well-documented problem. Attempts have been made to enhance pain management for older adults, including those in NHs. Several evidence-based clinical guidelines have been published to assist providers in assessing and managing acute and chronic pain in older adults. Despite the proliferation and dissemination of these practice guidelines, research has shown that intensive systems-level implementation strategies are necessary to change clinical practice and patient outcomes within a health-care setting. One promising approach is the embedding of guidelines into explicit protocols and algorithms to enhance decision making. Purpose The goal of the article is to describe several issues that arose in the design and conduct of a study that compared the effectiveness of pain management algorithms coupled with a comprehensive adoption program versus the effectiveness of education alone in improving evidence-based pain assessment and management practices, decreasing pain and depressive symptoms, and enhancing mobility among NH residents. Methods The study used a cluster-randomized controlled trial (RCT) design in which the individual NH was the unit of randomization. Rogers' Diffusion of Innovations theory provided the framework for the intervention. Outcome measures were surrogate-reported usual pain, self-reported usual and worst pain, and self-reported pain-related interference with activities, depression, and mobility. Results The final sample consisted of 485 NH residents from 27 NHs. The investigators were able to use a staggered enrollment strategy to recruit and retain facilities. The adaptive randomization procedures were successful in balancing intervention and control sites on key NH characteristics. Several strategies were successfully implemented to enhance the adoption of the algorithm. Limitations/Lessons The investigators encountered several methodological challenges that were inherent to

  5. Design and Implementation of the Automated Rendezvous Targeting Algorithms for Orion

    NASA Technical Reports Server (NTRS)

    DSouza, Christopher; Weeks, Michael

    2010-01-01

    The Orion vehicle will be designed to perform several rendezvous missions: rendezvous with the ISS in Low Earth Orbit (LEO), rendezvous with the EDS/Altair in LEO, a contingency rendezvous with the ascent stage of the Altair in Low Lunar Orbit (LLO), and a contingency rendezvous in LLO with the ascent and descent stages in the case of an aborted lunar landing. It is therefore not difficult to see that each of these scenarios imposes different operational, timing, and performance constraints on the GNC system. To this end, a suite of on-board guidance and targeting algorithms has been designed to meet the requirement to perform the rendezvous independent of communications with the ground. This capability is particularly relevant for the lunar missions, some of which may occur on the far side of the moon. This paper will describe these algorithms, which are structured and arranged so as to be flexible and able to safely perform a wide variety of rendezvous trajectories. The goal of the algorithms is not merely to fly one specific type of canned rendezvous profile. On the contrary, they were designed from the start to be general enough that any type of trajectory profile can be flown (e.g., a coelliptic profile, a stable orbit rendezvous profile, an expedited LLO rendezvous profile, etc.), all using the same suite of rendezvous algorithms. Each of these profiles makes use of maneuver types which have been designed with the dual goals of robustness and performance. They are designed to converge quickly under dispersed conditions and to perform many of the functions performed on the ground today. The targeting algorithms consist of a phasing maneuver (NC), an altitude adjust maneuver (NH), a plane change maneuver (NPC), a coelliptic maneuver (NSR), a Lambert targeted maneuver, and several multiple-burn targeted maneuvers which combine one or more of these algorithms. The derivation and implementation of each of these

  6. Refinements and practical implementation of a power based loss of grid detection algorithm for embedded generators

    NASA Astrophysics Data System (ADS)

    Barrett, James

    The incorporation of small, privately owned generation operating in parallel with, and supplying power to, the distribution network is becoming more widespread. This method of operation does, however, have problems associated with it. In particular, a loss of the connection to the main utility supply which leaves a portion of the utility load connected to the embedded generator will result in a power island. This situation presents possible dangers to utility personnel and the public, complications for smooth system operation, and probable plant damage should the two systems be reconnected out-of-synchronism. Loss of Grid (or Islanding), as this situation is known, is the subject of this thesis. The work begins by detailing the requirements for the operation of generation embedded in the utility supply, with particular attention drawn to the requirements for a loss of grid protection scheme. The mathematical basis for a new loss of grid protection algorithm is developed, and the inclusion of the algorithm in an integrated generator protection scheme is described. A detailed description is given of the implementation of the new algorithm in microprocessor-based relay hardware to allow practical tests on small embedded generation facilities, including an in-house multiple generator test facility. The results obtained from the practical tests are compared with those obtained from simulation studies carried out in previous work, and the differences are discussed. The performance of the theoretical algorithm is enhanced, using the simulation results, with simple filtering together with pattern recognition techniques. This provides stability during severe load fluctuations under parallel operation and system fault conditions, and improved performance under normal operating conditions and for loss of grid detection. In addition to operating on a loss of grid connection, the algorithm will respond to load fluctuations which occur within a power island.

  7. FPGA-based implementation for steganalysis: a JPEG-compatibility algorithm

    NASA Astrophysics Data System (ADS)

    Gutierrez-Fernandez, E.; Portela-García, M.; Lopez-Ongil, C.; Garcia-Valderas, M.

    2013-05-01

    Steganalysis is the process of detecting hidden data in cover documents, such as digital images, videos, and audio files. It is the inverse process of steganography, the method used to hide secret messages. The wide use of computers and network technologies makes digital files very convenient means for storing secret data or transmitting secret messages through the Internet. Depending on the cover medium used to embed the data, there are different steganalysis methods. In the case of images, many of the steganalysis and steganographic methods focus on JPEG image formats, since JPEG is one of the most common formats. One of the main handicaps of steganalysis methods is processing speed, since it is usually necessary to process huge amounts of data, possibly on-going Internet traffic in real time. In this paper, a JPEG steganalysis system is implemented in an FPGA in order to speed up the detection process with respect to software-based implementations and to increase the throughput. In particular, the implemented method is the JPEG-compatibility detection algorithm, which is based on the fact that when a JPEG image is modified, the resulting image is incompatible with the JPEG compression process.

  8. Dissipative Particle Dynamics Simulations at Extreme Scale: GPU Algorithms, Implementation and Applications

    NASA Astrophysics Data System (ADS)

    Tang, Yu-Hang; Karniadakis, George; Crunch Team

    2014-03-01

    We present a scalable dissipative particle dynamics simulation code, fully implemented on Graphics Processing Units (GPUs) using a hybrid CUDA/MPI programming model, which achieves 10-30 times speedup on a single GPU over 16 CPU cores and almost linear weak scaling across a thousand nodes. A unified framework is developed within which the efficient generation of the neighbor list and the maintenance of particle data locality are addressed. Our algorithm generates strictly ordered neighbor lists in parallel, while the construction is deterministic and makes no use of atomic operations or sorting. Such a neighbor list leads to optimal data loading efficiency when combined with a two-level particle reordering scheme. A faster in situ generation scheme for Gaussian random numbers is proposed using precomputed binary signatures. We designed custom transcendental functions that are fast and accurate for evaluating the pairwise interaction. Benchmarks demonstrate the speedup of our implementation over the CPU implementation, as well as strong and weak scalability. A large-scale simulation of spontaneous vesicle formation consisting of 128 million particles was conducted to illustrate the practicality of our code in real-world applications. This work was supported by the new Department of Energy Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4). Simulations were carried out at the Oak Ridge Leadership Computing Facility through the INCITE program under project BIP017.

  9. An Efficient Implementation of the Sign LMS Algorithm Using Block Floating Point Format

    NASA Astrophysics Data System (ADS)

    Chakraborty, Mrityunjoy; Shaik, Rafiahamed; Lee, Moon Ho

    2007-12-01

    An efficient scheme is presented for implementing the sign LMS algorithm in block floating point format, which permits processing of data over a wide dynamic range at a processor complexity and cost as low as that of a fixed point processor. The proposed scheme adopts appropriate formats for representing the filter coefficients and the data. It also employs a scaled representation for the step size that has both a time-varying mantissa and a time-varying exponent. Using these and an upper bound on the step-size mantissa, update relations for the filter weight mantissas and exponent are developed, taking care that neither overflow occurs nor quantities which are already very small are multiplied directly. Separate update relations are also worked out for the step-size mantissa. The proposed scheme employs mostly fixed-point-based operations, and thus achieves considerable speedup over its floating-point-based counterpart.
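
    For orientation, the floating-point form of the update that the paper maps into block floating point is w <- w + mu * sign(e) * x. A minimal NumPy sketch follows; the mantissa/exponent bookkeeping, which is the paper's actual contribution, is intentionally omitted.

```python
import numpy as np

def sign_lms(x, d, n_taps=4, mu=1e-3):
    """Adapt w so that w . [x[n], x[n-1], ...] tracks the desired signal d."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]      # newest sample first
        e = d[n] - w @ u
        w += mu * np.sign(e) * u               # only the sign of the error is used
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(20000)
true_w = np.array([0.9, -0.4, 0.2, 0.05])
d = np.convolve(x, true_w)[:len(x)]            # unknown channel to identify
print(np.round(sign_lms(x, d), 2))             # ~[0.9, -0.4, 0.2, 0.05]
```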

  10. Dragonfly: an implementation of the expand–maximize–compress algorithm for single-particle imaging

    PubMed Central

    Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N. Duane

    2016-01-01

    Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map and then phase reconstructed. In this paper, the Dragonfly software package is described, based on a parallel implementation of the expand–maximize–compress reconstruction algorithm that is well suited for this task. Auxiliary modules to simulate SPI data streams are also included to assess the feasibility of proposed SPI experiments at the Linac Coherent Light Source, Stanford, California, USA. PMID:27504078

  11. Dragonfly: an implementation of the expand-maximize-compress algorithm for single-particle imaging.

    PubMed

    Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N Duane

    2016-08-01

    Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map and then phase reconstructed. In this paper, the Dragonfly software package is described, based on a parallel implementation of the expand-maximize-compress reconstruction algorithm that is well suited for this task. Auxiliary modules to simulate SPI data streams are also included to assess the feasibility of proposed SPI experiments at the Linac Coherent Light Source, Stanford, California, USA.

  12. [An Electroencephalogram-driven Personalized Affective Music Player System: Algorithms and Preliminary Implementation].

    PubMed

    Ma, Yong; Li, Juan; Lu, Bin

    2016-02-01

    In order to monitor the emotional state changes of the audience in real time and to adjust the music playlist accordingly, we propose an algorithm framework for an electroencephalogram (EEG) driven personalized affective music recommendation system based on portable dry electrodes, and further present a preliminary implementation on the Android platform. We used a two-dimensional emotional model of arousal and valence as the reference, and mapped the EEG data and the corresponding seed songs onto the emotional coordinate quadrants in order to establish a matching relationship. Then, Mel frequency cepstrum coefficients were applied to evaluate the similarity between the seed songs and the songs in the music library. Finally, during music playback, we used the EEG data to identify the audience's emotional state, and played and adjusted the corresponding song playlist based on the established matching relationship.

  13. Implementation of the Algorithm for Congestion control in the Dynamic Circuit Network (DCN)

    NASA Astrophysics Data System (ADS)

    Nalamwar, H. S.; Ivanov, M. A.; Buddhawar, G. U.

    2017-01-01

    Transmission Control Protocol (TCP) incast congestion happens when a number of senders work in parallel with the same server in a high-bandwidth, low-latency network. For many data center applications, such as search engines, heavy traffic is present on such a server. Incast congestion degrades the overall performance, as packets are lost at the server side due to buffer overflow and, as a result, the response time becomes longer. In this work, we focus on TCP throughput, round-trip time (RTT), receive window and retransmission. Our method is based on proactive adjustment of the TCP receive window before packet loss occurs. We aim to avoid wasting bandwidth by adjusting the window size according to the number of packets. To avoid packet loss, the ICTCP algorithm has been implemented in the data center network (ToR).

  14. Convergence analysis of cascade error projection--an efficient learning algorithm for hardware implementation.

    PubMed

    Duong, T A; Stubberud, A R

    2000-06-01

    In this paper, we present a mathematical foundation, including a convergence analysis, for the cascade architecture neural network. Our analysis also shows that the convergence of the cascade architecture neural network is assured because it satisfies Liapunov criteria in an added hidden unit domain rather than in the time domain. From this analysis, a mathematical foundation for the cascade correlation learning algorithm can be found. Furthermore, it becomes apparent that the cascade correlation scheme is a special case of this mathematical analysis, from which an efficient hardware learning algorithm called Cascade Error Projection (CEP) is proposed. CEP provides efficient learning in hardware and is faster to train, because part of the weights is deterministically obtained, and the learning of the remaining weights from the inputs to the hidden unit is performed as single-layer perceptron learning with the previously determined weights kept frozen. In addition, one can start out with zero weight values (rather than random finite weight values) when the learning of each layer is commenced. Further, unlike the cascade correlation algorithm (where a pool of candidate hidden units is added), only a single hidden unit is added at a time. Therefore, simplicity in hardware implementation is also achieved. Finally, 5- to 8-bit parity and chaotic time series prediction problems are investigated; the simulation results demonstrate that 4-bit or more weight quantization is sufficient for learning a neural network using CEP. In addition, it is demonstrated that this technique is able to compensate for lower bit weight resolution by incorporating additional hidden units. However, the generalization result may suffer somewhat with lower bit weight quantization.

  15. Parallel implementation of a hyperspectral data geometry-based estimation of number of endmembers algorithm

    NASA Astrophysics Data System (ADS)

    Bernabé, Sergio; Martin, Gabriel; Botella, Guillermo; Prieto-Matias, Manuel; Plaza, Antonio

    2016-04-01

    In recent years, hyperspectral analysis has been applied in many remote sensing applications. In fact, hyperspectral unmixing has been a challenging task in hyperspectral data exploitation. This process consists of three stages: (i) estimation of the number of pure spectral signatures or endmembers, (ii) automatic identification of the estimated endmembers, and (iii) estimation of the fractional abundance of each endmember in each pixel of the scene. However, unmixing algorithms can be computationally very expensive, a fact that compromises their use in applications under real-time constraints. Several techniques have been proposed to solve this problem, but until now most works have focused on the second and third stages. The execution cost of the first stage is usually lower than that of the other stages; indeed, it can be optional if this estimate is known a priori. However, its acceleration on parallel architectures is still an interesting and open problem. In this paper we address this issue, focusing on the GENE algorithm, a promising geometry-based proposal introduced in [1]. We have evaluated our parallel implementation in terms of both accuracy and computational performance through Monte Carlo simulations in real and synthetic data experiments. Performance results on a modern GPU show satisfactory 16x speedup factors, which allow us to expect that this method could meet real-time requirements on a fully operational unmixing chain.

  16. ESPRESSO front end guiding algorithms: from design phase to implementation and validation toward the commissioning

    NASA Astrophysics Data System (ADS)

    Landoni, M.; Riva, M.; Pepe, F.; Aliverti, M.; Cabral, A.; Calderone, G.; Cirami, R.; Cristiani, S.; Di Marcantonio, P.; Genoni, M.; Mégevand, D.; Moschetti, M.; Oggioni, L.; Pariani, G.

    2016-08-01

    In this paper we will review the ESPRESSO guiding algorithm for the Front End subsystem. ESPRESSO, the Echelle Spectrograph for Rocky Exoplanets and Stable Spectroscopic Observations, will be installed on ESO's Very Large Telescope (VLT). The Front End Unit (FEU) is the ESPRESSO subsystem which collects the light coming from the Coudé trains of all four Telescope Units (UTs), provides field and pupil stabilization better than 0.05'' via piezoelectric tip-tilt devices, and injects the beams into the spectrograph fibers. The field and pupil stabilization is obtained through a re-imaging system that collects the halo of the light outside the injection fiber and the image of the telescope pupil. In particular, we will focus on the software design of the system, from class diagram to actual implementation. A review of the theoretical mathematical background required to understand the final design is also reported. We will show the performance of the algorithm on the actual Front End by adopting a telescope simulator and exploring various scientific requirements.

  17. Implementation of the Kinetic Plasma Code with Locally Recursive non-Locally Asynchronous Algorithms

    NASA Astrophysics Data System (ADS)

    Perepelkina, A. Yu; Levchenko, V. D.; Goryachev, I. A.

    2014-05-01

    Numerical simulation is presently considered impractical for several relevant plasma kinetics problems due to limitations of computer hardware, even with the use of supercomputers. To overcome the existing limitations, it is suggested to develop algorithms which effectively utilize the hierarchy of the computer memory subsystem by optimizing the dependency graph traversal rules. The ideas for general cases of numerical simulation, and the implementation of such algorithms in a particle-in-cell code, are discussed in the paper. This approach enables the simulation of problems previously inaccessible to modeling and the execution of series of numerical experiments in reasonable time. The latter is demonstrated on a multiscale problem of the development of filamentation instability in laser interaction with overdense plasma. One variant of the simulation, with parameters typical of simulations on supercomputers, is performed with the use of one cluster node. The series of such experiments revealed the dependency of energy loss on incoming laser pulse amplitude to be nonmonotonic, reaching over 4%, an interesting result for research on the fast ignition concept.

  18. Design and implementation of a hybrid MPI-CUDA model for the Smith-Waterman algorithm.

    PubMed

    Khaled, Heba; Faheem, Hossam El Deen Mostafa; El Gohary, Rania

    2015-01-01

    This paper provides a novel hybrid model for solving the multiple pair-wise sequence alignment problem, combining the message passing interface (MPI) and CUDA, the parallel computing platform and programming model invented by NVIDIA. The proposed model targets homogeneous cluster nodes equipped with similar Graphical Processing Unit (GPU) cards. The model consists of the Master Node Dispatcher (MND) and the Worker GPU Nodes (WGN). The MND distributes the workload among the cluster working nodes and then aggregates the results. The WGN performs the multiple pair-wise sequence alignments using the Smith-Waterman algorithm. We also propose a modified implementation of the Smith-Waterman algorithm based on computing the alignment matrices row-wise. The experimental results demonstrate a considerable reduction in the running time as the number of working GPU nodes increases. The proposed model achieved a performance of about 12 giga cell updates per second when tested against the SWISS-PROT protein knowledge base running on four nodes.
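
    A scalar reference for the Smith-Waterman recurrence, computed row by row as in the authors' modified implementation, is sketched below with a linear gap penalty; the scoring constants are illustrative, and the GPU parallelization is not reproduced.

```python
import numpy as np

MATCH, MISMATCH, GAP = 2, -1, -2               # illustrative scoring constants

def smith_waterman_score(a: str, b: str) -> int:
    """Best local-alignment score, keeping only one previous matrix row."""
    prev = np.zeros(len(b) + 1, dtype=np.int32)
    best = 0
    for i in range(1, len(a) + 1):
        cur = np.zeros_like(prev)
        for j in range(1, len(b) + 1):
            s = MATCH if a[i - 1] == b[j - 1] else MISMATCH
            cur[j] = max(0,
                         prev[j - 1] + s,      # diagonal: (mis)match
                         prev[j] + GAP,        # gap in b
                         cur[j - 1] + GAP)     # gap in a
            best = max(best, int(cur[j]))
        prev = cur                             # stream the matrix row by row
    return best

print(smith_waterman_score("AAAA", "AAA"))     # 6: local alignment 'AAA'
```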

  19. Design and hardware-in-loop implementation of collision avoidance algorithms for heavy commercial road vehicles

    NASA Astrophysics Data System (ADS)

    Rajaram, Vignesh; Subramanian, Shankar C.

    2016-07-01

    An important aspect from the perspective of operational safety of heavy road vehicles is the detection and avoidance of collisions, particularly at high speeds. The development of a collision avoidance system is the overall focus of the research presented in this paper. The collision avoidance algorithm was developed using a sliding mode controller (SMC) and compared to one developed using linear full state feedback in terms of performance and controller effort. Important dynamic characteristics such as load transfer during braking, tyre-road interaction, dynamic brake force distribution and pneumatic brake system response were considered. The effect of aerodynamic drag on the controller performance was also studied. The developed control algorithms have been implemented on a Hardware-in-Loop experimental set-up equipped with the vehicle dynamic simulation software, IPG/TruckMaker®. The evaluation has been performed for realistic traffic scenarios with different loading and road conditions. The Hardware-in-Loop experimental results showed that the SMC and full state feedback controller were able to prevent the collision. However, when the discrepancies in the form of parametric variations were included, the SMC provided better results in terms of reduced stopping distance and lower controller effort compared to the full state feedback controller.

  20. A Weighted Spatial-Spectral Kernel RX Algorithm and Efficient Implementation on GPUs.

    PubMed

    Zhao, Chunhui; Li, Jiawei; Meng, Meiling; Yao, Xifeng

    2017-02-23

    The kernel RX (KRX) detector proposed by Kwon and Nasrabadi exploits a kernel function to obtain better detection performance. However, it still has two limits that can be improved. On the one hand, reasonable integration of spatial-spectral information can be used to further improve its detection accuracy. On the other hand, parallel computing can be used to reduce the processing time of available KRX detectors. Accordingly, this paper presents a novel weighted spatial-spectral kernel RX (WSSKRX) detector and its parallel implementation on graphics processing units (GPUs). The WSSKRX utilizes the spatial neighborhood resources to reconstruct the testing pixels by introducing a spectral factor and a spatial window, thereby effectively reducing the interference of background noise. Then, the kernel function is redesigned as a mapping trick in a KRX detector to implement the anomaly detection. In addition, a powerful architecture based on the GPU technique is designed to accelerate WSSKRX. To substantiate the performance of the proposed algorithm, experiments are conducted on both synthetic and real data.
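
    For context, the unweighted, non-kernel baseline on which KRX and WSSKRX build is the global RX detector: the Mahalanobis distance of each pixel from the background statistics. A sketch follows; the spectral factor, spatial window and kernel mapping of WSSKRX are not reproduced here.

```python
import numpy as np

def rx_detector(cube: np.ndarray) -> np.ndarray:
    """Global RX: Mahalanobis distance of each pixel from the scene statistics."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    Xc = X - X.mean(axis=0)
    cov = (Xc.T @ Xc) / (X.shape[0] - 1)
    cov_inv = np.linalg.pinv(cov)            # pinv guards near-singular covariances
    scores = np.einsum('ij,jk,ik->i', Xc, cov_inv, Xc)
    return scores.reshape(rows, cols)

rng = np.random.default_rng(2)
cube = rng.normal(size=(32, 32, 10))
cube[5, 7] += 6.0                            # implant one anomalous pixel
print(np.unravel_index(np.argmax(rx_detector(cube)), (32, 32)))   # (5, 7)
```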

  1. A Weighted Spatial-Spectral Kernel RX Algorithm and Efficient Implementation on GPUs

    PubMed Central

    Zhao, Chunhui; Li, Jiawei; Meng, Meiling; Yao, Xifeng

    2017-01-01

    The kernel RX (KRX) detector proposed by Kwon and Nasrabadi exploits a kernel function to obtain better detection performance. However, it still has two limits that can be improved. On the one hand, reasonable integration of spatial-spectral information can be used to further improve its detection accuracy. On the other hand, parallel computing can be used to reduce the processing time of available KRX detectors. Accordingly, this paper presents a novel weighted spatial-spectral kernel RX (WSSKRX) detector and its parallel implementation on graphics processing units (GPUs). The WSSKRX utilizes the spatial neighborhood resources to reconstruct the testing pixels by introducing a spectral factor and a spatial window, thereby effectively reducing the interference of background noise. Then, the kernel function is redesigned as a mapping trick in a KRX detector to implement the anomaly detection. In addition, a powerful architecture based on the GPU technique is designed to accelerate WSSKRX. To substantiate the performance of the proposed algorithm, experiments are conducted on both synthetic and real data. PMID:28241511

  2. Real-time implementation of camera positioning algorithm based on FPGA & SOPC

    NASA Astrophysics Data System (ADS)

    Yang, Mingcao; Qiu, Yuehong

    2014-09-01

    In recent years, with the development of positioning algorithms and FPGAs, real-time, fast and accurate camera positioning implemented on an FPGA has become possible. Through an in-depth study of embedded hardware and dual-camera positioning systems, this thesis sets up an infrared optical positioning system based on an FPGA and an SOPC system, which enables real-time positioning of marker points in space. The completed work includes: (1) a CMOS sensor, driven by FPGA hardware, extracts the pixels of the three target points; visible-light LEDs are used here as the target points of the instrument. (2) Prior to the extraction of the feature point coordinates, the image is filtered (median filtering is used here) so that the physical properties of the platform do not affect the system. (3) Marker point coordinates are extracted by the FPGA hardware circuit, using a new iterative threshold selection method for image segmentation. The segmented binary image is then labeled, and the coordinates of the feature points are calculated through the center-of-gravity method. (4) Direct linear transformation (DLT) and epipolar-constraint methods are applied to the three-dimensional reconstruction of space coordinates from the planar-array CMOS system, using an SOPC system-on-a-chip and taking advantage of its dual-core computing system to run matching and coordinate operations separately, thus increasing processing speed.
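
    Steps (3) and (4) rest on an iterative threshold followed by a center-of-gravity computation. The sketch below uses the standard isodata threshold update, which is assumed here to be close in spirit to the thesis's method (the abstract does not spell the rule out).

```python
import numpy as np

def isodata_threshold(img: np.ndarray, eps: float = 0.5) -> float:
    """Iterative threshold: midpoint of the two class means until it settles."""
    t = img.mean()
    while True:
        lo, hi = img[img <= t], img[img > t]
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

def centroid(img: np.ndarray, mask: np.ndarray):
    """Intensity-weighted center of gravity of the masked pixels."""
    ys, xs = np.nonzero(mask)
    w = img[ys, xs].astype(np.float64)
    return ys @ w / w.sum(), xs @ w / w.sum()

img = np.zeros((32, 32))
img[10:13, 20:23] = [[1, 2, 1], [2, 8, 2], [1, 2, 1]]   # one blurred marker
t = isodata_threshold(img)
print(np.round(centroid(img, img > t), 2))              # (11.0, 21.0)
```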

  3. On the implementation of an automated acoustic output optimization algorithm for subharmonic aided pressure estimation

    PubMed Central

    Dave, J. K.; Halldorsdottir, V. G.; Eisenbrey, J. R.; Merton, D. A.; Liu, J. B.; Machado, P.; Zhao, H.; Park, S.; Dianis, S.; Chalek, C. L.; Thomenius, K. E.; Brown, D. B.; Forsberg, F.

    2013-01-01

    Incident acoustic output (IAO) dependent subharmonic signal amplitudes from ultrasound contrast agents can be categorized into occurrence, growth or saturation stages. Subharmonic aided pressure estimation (SHAPE) is a technique that utilizes growth stage subharmonic signal amplitudes for hydrostatic pressure estimation. In this study, we developed an automated IAO optimization algorithm to identify the IAO level eliciting growth stage subharmonic signals, and also studied the effect of pulse length on SHAPE. This approach may help eliminate the problems of acquiring and analyzing the data offline at all IAO levels, as was done in previous studies, and thus pave the way for real-time clinical pressure monitoring applications. The IAO optimization algorithm was implemented on a Logiq 9 (GE Healthcare, Milwaukee, WI) scanner interfaced with a computer. The optimization algorithm stepped the ultrasound scanner from 0 to 100 % IAO. A logistic equation was fitted with the criterion of minimum least squared error between the fitted and the measured subharmonic amplitudes as a function of the IAO levels, and the optimum IAO level was chosen as the inflection point calculated from the fitted data. The efficacy of the optimum IAO level was investigated for in vivo SHAPE to monitor portal vein (PV) pressures in 5 canines, and was compared with the performance of IAO levels below and above the optimum IAO level, for 4, 8 and 16 transmit cycles. The canines received a continuous infusion of Sonazoid microbubbles (1.5 μl/kg/min; GE Healthcare, Oslo, Norway). PV pressures were obtained using a surgically introduced pressure catheter (Millar Instruments, Inc., Houston, TX) and were recorded before and after increasing PV pressures. The experiments showed that optimum IAO levels for SHAPE in the canines ranged from 6 to 40 %. The best correlation between changes in PV pressures and in subharmonic amplitudes (r = -0.76; p = 0
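
    The optimization step described above amounts to fitting a logistic curve to subharmonic amplitude versus IAO level and reading off the fitted inflection point. A sketch follows, with synthetic data standing in for the measured amplitudes; the clinical acquisition chain is not modeled.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, lo, hi, k, x0):
    """Four-parameter logistic; x0 is its inflection point."""
    return lo + (hi - lo) / (1.0 + np.exp(-k * (x - x0)))

iao = np.linspace(0.0, 100.0, 21)                    # swept % IAO levels
rng = np.random.default_rng(3)
measured = logistic(iao, 5.0, 35.0, 0.15, 42.0) + rng.normal(0.0, 0.5, iao.size)

popt, _ = curve_fit(logistic, iao, measured,
                    p0=[measured.min(), measured.max(), 0.1, 50.0])
print(f"optimum IAO ~ {popt[3]:.1f} %")              # fitted inflection point, ~42
```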

  4. Experimental realization of a four-photon seven-qubit graph state for one-way quantum computation.

    PubMed

    Lee, Sang Min; Park, Hee Su; Cho, Jaeyoon; Kang, Yoonshik; Lee, Jae Yong; Kim, Heonoh; Lee, Dong-Hoon; Choi, Sang-Kyung

    2012-03-26

    We propose and demonstrate the scaling up of photonic graph states through path qubit fusion. Two path qubits from separate two-photon four-qubit states are fused to generate a two-dimensional seven-qubit graph state composed of polarization and path qubits. Genuine seven-qubit entanglement is verified by evaluating the witness operator. Six qubits from the graph state are used to demonstrate the Deutsch-Jozsa algorithm for general two-bit functions with a success probability greater than 90%.

  5. Implementation and comparative analysis of the optimisations produced by evolutionary algorithms for the parameter extraction of PSP MOSFET model

    NASA Astrophysics Data System (ADS)

    Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.

    2016-05-01

    The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC and particle swarm optimisation (PSO), to extract the parameters of a metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using the Pennsylvania surface potential (PSP) model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. The study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms; some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method based on bird flocking activities. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and the basic ABC algorithm for the parameter extraction of the MOSFET model; the implementation of the ABC algorithm is also shown to be simpler than that of the PSO algorithm.
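
    A generic form of the PSO variant compared in the study is sketched below, minimizing an illustrative two-parameter least-squares objective; the real objective (error between measured data and the PSP model) and its parameter set are not reproduced.

```python
import numpy as np

def pso(objective, lo, hi, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO minimizing objective over the box [lo, hi]."""
    rng = np.random.default_rng(4)
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()                    # global best
    return g, pbest_f.min()

# Illustrative stand-in objective: fit i = a * (v - vt)^2 to synthetic data.
vg = np.linspace(0.5, 2.0, 40)
i_meas = 1e-4 * (vg - 0.45) ** 2
err = lambda p: np.sum((p[0] * (vg - p[1]) ** 2 - i_meas) ** 2)
best, best_f = pso(err, lo=np.array([0.0, 0.0]), hi=np.array([1e-3, 1.0]))
print(np.round(best, 5))                    # ~[1e-4, 0.45]
```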

  6. Design and implementation of three-dimension texture mapping algorithm for panoramic system based on smart platform

    NASA Astrophysics Data System (ADS)

    Liu, Zhi; Zhou, Baotong; Zhang, Changnian

    2017-03-01

    The vehicle-mounted panoramic system is an important safety aid for driving. However, traditional systems render only a fixed top-down perspective view of a limited field of view, which may pose a potential safety hazard. In this paper, a texture mapping algorithm for a 3D vehicle-mounted panoramic system is introduced, and an implementation of the algorithm utilizing the OpenGL ES library on the Android smart platform is presented. Initial experimental results show that the proposed algorithm renders a good 3D panorama and has the ability to change the view point freely.

  7. Design and implementation of modern control algorithms for unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Hafez, Ahmed Taimour

    Recently, Unmanned Aerial Vehicles (UAVs) have attracted a great deal of attention in academic, civilian and military communities as prospective solutions to a wide variety of applications. The use of cooperative UAVs has received growing interest in the last decade and this provides an opportunity for new operational paradigms. As applications of UAVs continue to grow in complexity, the trend of using multiple cooperative UAVs to perform these applications rises in order to increase the overall effectiveness and robustness. There is a need for generating suitable control techniques that allow for the real-time implementation of control algorithms for different missions and tactics executed by a group of cooperative UAVs. In this thesis, we investigate possible control patterns and associated algorithms for controlling a group of autonomous UAVs in real-time to perform various tactics. This research proposes new control approaches to solve the dynamic encirclement, tactic switching and formation problems for a group of cooperative UAVs in simulation and real-time. Firstly, a combination of Feedback Linearization (FL) and decentralized Linear Model Predictive Control (LMPC) is used to solve the dynamic encirclement problem. Secondly, a combination of decentralized LMPC and fuzzy logic control is used to solve the problem of tactic switching for a group of cooperative UAVs. Finally, a decentralized Learning Based Model Predictive Control (LBMPC) is used to solve the problem of formation for a group of cooperative UAVs in simulation. We show through simulations and validate through experiments that the proposed control policies succeed to control a group of cooperative UAVs to achieve the desired requirements and control objectives for different tactics. These proposed control policies provide reliable and effective control techniques for multiple cooperative UAV systems.

  8. On resampling algorithms for the Meteosat Third Generation rectification: feasibility study for an operational implementation

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Rebeca; Just, Dieter

    2014-10-01

    The Meteosat Third Generation (MTG) Programme is the next generation of European geostationary meteorological systems. The first MTG satellite, which is scheduled for launch at the end of 2018/early 2019, will host two imaging instruments: the Flexible Combined Imager (FCI) and the Lightning Imager. The FCI will continue the operation of the SEVIRI imager on the current Meteosat Second Generation (MSG) satellites, but with improved spatial, temporal and spectral resolution, not dissimilar to GOES-R (of NASA/NOAA). The transition from a spinner to a 3-axis stabilised platform, a 2-axis tapered scan pattern with overlaps between adjacent scan swaths, and the more stringent geometric, radiometric and timeliness requirements make the rectification process for MTG FCI more challenging than for MSG SEVIRI. The effect of non-uniform sampling in the image rectification process was analysed in an earlier paper. The use of classical interpolation methods, such as truncated Shannon interpolation or cubic convolution interpolation, was shown to cause significant errors when applied to non-uniform samples. Moreover, cubic splines and Lagrange interpolation were selected as candidate resampling algorithms for the FCI rectification that can cope with irregularities in the sampling acquisition process. This paper extends the study to the two-dimensional case, focusing on practical 2D interpolation methods and their feasibility for an operational implementation. Candidate kernels are described and assessed with respect to MTG requirements. The operational constraints of the Level 1 processor have been considered in developing an early image rectification prototype, including the impact of the potential curvature of the FCI scan swaths. The implementation follows a swath-based approach, uses parallel processing to speed up computation time, and allows the selection of a number of resampling functions. Due to the tight time constraints of the FCI Level 1 processing chain, focus is both on ...
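
    As an illustration of one of the candidate kernels named above, here is a minimal one-dimensional Lagrange interpolation sketch in Python that copes with non-uniformly spaced samples. The node positions are invented, and the operational FCI resampler would apply such a kernel locally (and as a 2D tensor product) rather than globally.

        import numpy as np

        def lagrange_interp(x_nodes, y_nodes, x):
            """Evaluate the Lagrange interpolating polynomial through
            (x_nodes, y_nodes) at points x; nodes may be non-uniformly spaced."""
            x = np.asarray(x, dtype=float)
            result = np.zeros_like(x)
            for i, xi in enumerate(x_nodes):
                # Basis polynomial L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
                li = np.ones_like(x)
                for j, xj in enumerate(x_nodes):
                    if j != i:
                        li *= (x - xj) / (xi - xj)
                result += y_nodes[i] * li
            return result

        # Irregular sample positions, as produced by a non-uniform scan pattern.
        xn = np.array([0.0, 0.9, 2.1, 3.0])
        yn = np.sin(xn)
        print(lagrange_interp(xn, yn, np.array([1.5])))  # compare with np.sin(1.5)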

  9. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards

    NASA Astrophysics Data System (ADS)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G.

    2011-07-01

    We describe and evaluate a fast implementation of a classical block-matching motion estimation algorithm for multiple graphical processing units (GPUs) using the compute unified device architecture computing engine. The implemented block-matching algorithm uses the summed absolute difference error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation, we compared the execution time of GPU and CPU implementations for images of various sizes, using integer and noninteger search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 for an integer and 1000 for a noninteger search grid. The additional speedup for a noninteger search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data-splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition, we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full-grid-search CPU-based motion estimation methods, namely the implementation of the pyramidal Lucas-Kanade optical flow algorithm in OpenCV and the simplified unsymmetrical multi-hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed modest improvement even though the computational complexity of the FS GPU implementation is substantially higher than that of the non-FS CPU implementations. We also demonstrated that for an image sequence of 720 × 480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames per second using two NVIDIA C1060 Tesla GPU cards.
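
    The core of the evaluated method is easy to state in a few lines. Below is a minimal CPU (NumPy) sketch of full-grid-search block matching with the SAD criterion; the GPU kernel decomposition, interpolation for noninteger grids, and multi-card data splitting discussed in the record are omitted.

        import numpy as np

        def full_search_sad(ref_block, search_area):
            """Full-grid-search block matching: slide ref_block over search_area
            and return the displacement minimizing the summed absolute difference."""
            bh, bw = ref_block.shape
            sh, sw = search_area.shape
            best, best_dxdy = np.inf, (0, 0)
            for dy in range(sh - bh + 1):
                for dx in range(sw - bw + 1):
                    cand = search_area[dy:dy + bh, dx:dx + bw]
                    sad = np.abs(cand.astype(np.int32)
                                 - ref_block.astype(np.int32)).sum()
                    if sad < best:
                        best, best_dxdy = sad, (dx, dy)
            return best_dxdy, best

        rng = np.random.default_rng(0)
        frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
        block = frame[20:36, 24:40]              # 16x16 block taken from the frame
        print(full_search_sad(block, frame))     # recovers displacement (24, 20)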

  10. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    PubMed

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block-matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block-matching algorithm (BMA) uses the summed absolute difference (SAD) error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation we compared the execution time of GPU and CPU implementations for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 for an integer and 1000 for a non-integer search grid. The additional speedup for a non-integer search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data-splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full-grid-search CPU-based motion estimation methods, namely the implementation of the Pyramidal Lucas Kanade Optical flow algorithm in OpenCV and the Simplified Unsymmetrical multi-Hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed modest improvement even though the computational complexity of the FS GPU implementation is substantially higher than that of the non-FS CPU implementations. We also demonstrated that for an image sequence of 720 × 480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames per second using two NVIDIA C1060 Tesla GPU cards.

  11. An Implementation of the Berlekamp-Massey Linear Feedback Shift-Register Synthesis Algorithm in the C Programming Language

    SciTech Connect

    CAMPBELL, PHILIP L.

    1999-08-01

    This report presents an implementation of the Berlekamp-Massey linear feedback shift-register (LFSR) synthesis algorithm in the C programming language. Two pseudo-code versions of the algorithm are given, the operation of LFSRs is explained, a C version of the pseudo-code is presented, and the output of the code when run on two input samples is shown.
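
    For orientation, here is a compact Python transcription of Berlekamp-Massey synthesis over GF(2). The report itself provides pseudo-code and a C version, so this sketch is an independent rendering of the standard algorithm, not the report's code.

        def berlekamp_massey(bits):
            """Berlekamp-Massey LFSR synthesis over GF(2).
            Returns (L, c): the shortest LFSR length L reproducing the
            sequence and the connection polynomial coefficients c[0..L]."""
            n = len(bits)
            c, b = [0] * n, [0] * n   # current and previous connection polynomials
            c[0] = b[0] = 1
            L, m = 0, -1              # LFSR length, index of last length change
            for i in range(n):
                # Discrepancy: bit predicted by the current LFSR vs actual bit.
                d = bits[i]
                for j in range(1, L + 1):
                    d ^= c[j] & bits[i - j]
                if d:                 # adjust the connection polynomial
                    t = c[:]
                    for j in range(n - i + m):
                        c[i - m + j] ^= b[j]
                    if 2 * L <= i:
                        L, m, b = i + 1 - L, i, t
            return L, c[:L + 1]

        # Demo: bits from a length-4 LFSR with recurrence s[k] = s[k-1] ^ s[k-4].
        s = [1, 0, 0, 1]
        for k in range(4, 15):
            s.append(s[k - 1] ^ s[k - 4])
        print(berlekamp_massey(s))    # expect L = 4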

  12. Massively parallel algorithm and implementation of RI-MP2 energy calculation for peta-scale many-core supercomputers.

    PubMed

    Katouda, Michio; Naruse, Akira; Hirano, Yukihiko; Nakajima, Takahito

    2016-11-15

    A new parallel algorithm and its implementation for the RI-MP2 energy calculation utilizing peta-flop-class many-core supercomputers are presented. Two improvements over the previous algorithm (J. Chem. Theory Comput. 2013, 9, 5373) have been made: (1) a dual-level hierarchical parallelization scheme that enables the use of more than 10,000 Message Passing Interface (MPI) processes, and (2) a new data communication scheme that reduces network communication overhead. A multi-node and multi-GPU implementation of the present algorithm is presented for calculations on a central processing unit (CPU)/graphics processing unit (GPU) hybrid supercomputer. Benchmark results of the new algorithm and its implementation using the K computer (CPU clustering system) and TSUBAME 2.5 (CPU/GPU hybrid system) demonstrate high efficiency. A peak performance of 3.1 PFLOPS is attained using 80,199 nodes of the K computer. The peak performance of the multi-node and multi-GPU implementation is 514 TFLOPS using 1349 nodes and 4047 GPUs of TSUBAME 2.5. © 2016 Wiley Periodicals, Inc.

  13. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation

    PubMed Central

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, a fast-recognition mode (FM) and an accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the humanoid robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min due to the strict recognition constraints. PMID:27579033
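
    A minimal sketch of the feature-extraction idea, assuming an idealized Gaussian P300 template and a synthetic single-trial epoch: the correlation between epoch and template is computed at a series of time shifts, and this series is what would feed the ANN input nodes.

        import numpy as np

        def time_shift_correlations(epoch, template, shifts):
            """Correlate a single-trial epoch with a P300 template at a series
            of time shifts; the resulting series would serve as ANN inputs."""
            return np.array([np.corrcoef(epoch, np.roll(template, s))[0, 1]
                             for s in shifts])

        fs = 250                                     # assumed sampling rate (Hz)
        t = np.arange(0, 0.8, 1 / fs)
        template = np.exp(-((t - 0.30) ** 2) / (2 * 0.05 ** 2))  # ideal P300 at 300 ms
        rng = np.random.default_rng(0)
        epoch = np.roll(template, 10) + 0.3 * rng.normal(size=t.size)  # peak ~40 ms late
        shifts = range(-20, 21, 5)
        feats = time_shift_correlations(epoch, template, shifts)
        print(max(zip(feats, shifts)))               # highest correlation near shift +10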

  14. Nonlinear estimation algorithm and its optical implementation for target tracking in clutter environment

    NASA Astrophysics Data System (ADS)

    Chun, Joohwan; Kailath, Thomas; Son, Jung-Young

    2000-03-01

    Systems such as infrared search and track (IRST) systems, forward-looking infrared (FLIR) systems, sonars, and 2-D radars consist of two functional blocks: a detection unit and a tracker. The detection unit, which has matched filters followed by a threshold device, generates a set of multiple two-dimensional points, or "detects", at every sampling time. For a radar or sonar, each generated detect has polar coordinates, range and azimuth, while an IRST or FLIR produces detects in Cartesian coordinates. In practice, the detection unit always has a non-zero false alarm rate, and therefore the set of detects usually contains clutter points as well as the target. In this paper, we present a new target tracking algorithm for clutter environments applicable to a wide range of tracking systems. More specifically, the two-dimensional tracking problem in a clutter environment is solved in the discrete-time Bayes-optimal (nonlinear and non-Gaussian) estimation framework. The proposed method recursively finds the entire probability density functions of the target position and velocity. With our approach, the nonlinear estimation problem is converted into simpler linear convolution operations, which can efficiently be implemented with optical devices such as lenses, CCDs (charge-coupled devices), SLMs (spatial light modulators) and films.
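
    A minimal sketch of the recursion described above, on a 1D grid: the Bayes prediction step is a linear convolution of the posterior with the process-noise kernel (the operation the authors map onto optical devices; here np.convolve stands in), and the update multiplies by a likelihood with a clutter floor and a false-alarm peak. All densities and kernels are invented for illustration.

        import numpy as np

        def bayes_grid_step(posterior, motion_kernel, likelihood):
            """One recursion of a grid-based Bayes filter:
            prediction = posterior convolved with the process-noise kernel,
            update     = pointwise multiplication by the likelihood."""
            pred = np.convolve(posterior, motion_kernel, mode="same")
            post = pred * likelihood
            return post / post.sum()

        x = np.linspace(-10, 10, 201)
        prior = np.exp(-0.5 * (x + 2.0) ** 2)            # belief centred near -2
        prior /= prior.sum()
        kern = np.exp(-0.5 * (np.linspace(-2, 2, 41) / 0.5) ** 2)
        kern /= kern.sum()                               # process-noise kernel
        # Likelihood with clutter: a detect near -1.5 plus a false alarm at +4.
        like = 0.1 + np.exp(-0.5 * ((x + 1.5) / 0.4) ** 2) \
                   + np.exp(-0.5 * ((x - 4.0) / 0.4) ** 2)
        post = bayes_grid_step(prior, kern, like)
        print(x[np.argmax(post)])                        # mode stays near the target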

  15. Implementation of combined SVM-algorithm and computer-aided perception feedback for pulmonary nodule detection

    NASA Astrophysics Data System (ADS)

    Pietrzyk, Mariusz W.; Rannou, Didier; Brennan, Patrick C.

    2012-02-01

    This pilot study examines the effect of a novel decision-support system in medical image interpretation. The system is based on combining image spatial-frequency properties and eye-tracking data in order to recognize over- and under-calling errors. Thus, before it can be implemented as a detection-aid scheme, training is required during which an SVM-based algorithm learns to recognize FPs among all reported outcomes and FNs among all unreported regions with prolonged dwell. Eight radiologists inspected 50 PA chest radiographs with the specific task of identifying lung nodules. Twenty-five cases contained CT-proven subtle malignant lesions (5-20 mm), but prevalence was not known by the subjects, who took part in two sequential reading sessions, the second both without and with the support-system feedback. MCMR ROC DBM and JAFROC analyses were conducted and demonstrated significantly higher scores following feedback, with p values of 0.04 and 0.03, respectively, highlighting significant improvements in radiologist performance once feedback was used. This positive effect on radiologists' performance might have important implications for future CAD-system development.

  16. Dealing with change in process choreographies: Design and implementation of propagation algorithms.

    PubMed

    Fdhila, Walid; Indiono, Conrad; Rinderle-Ma, Stefanie; Reichert, Manfred

    2015-04-01

    Enabling process changes constitutes a major challenge for any process-aware information system. This not only holds for processes running within a single enterprise, but also for collaborative scenarios involving distributed and autonomous partners. In particular, if one partner adapts its private process, the change might affect the processes of the other partners as well. Accordingly, it might have to be propagated to concerned partners in a transitive way. A fundamental challenge in this context is to find ways of propagating the changes in a decentralized manner. Existing approaches are limited with respect to the change operations considered as well as their dependency on a particular process specification language. This paper presents a generic change propagation approach that is based on the Refined Process Structure Tree, i.e., the approach is independent of a specific process specification language. Further, it considers a comprehensive set of change patterns. For all these change patterns, it is shown that the provided change propagation algorithms preserve consistency and compatibility of the process choreography. Finally, a proof-of-concept prototype of a change propagation framework for process choreographies is presented. Overall, comprehensive change support in process choreographies will foster the implementation and operational support of agile collaborative process scenarios.

  17. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation.

    PubMed

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, a fast-recognition mode (FM) and an accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the humanoid robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min due to the strict recognition constraints.

  18. Progress implementing a model-based iterative reconstruction algorithm for ultrasound imaging of thick concrete

    NASA Astrophysics Data System (ADS)

    Almansouri, Hani; Johnson, Christi; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2017-02-01

    All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.
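
    A toy rendering of the MBIR recipe described above (a forward model and a prior model combined into one cost function, then optimized), using a generic linear forward model, a quadratic smoothness prior and plain gradient descent. The actual system uses an ultrasonic forward model with a direct-arrival term, which is not reproduced here.

        import numpy as np

        def mbir(A, y, beta=0.1, n_iter=500):
            """Toy MBIR: minimise 0.5*||y - Ax||^2 + 0.5*beta*||Dx||^2,
            i.e. a Gaussian measurement model plus a quadratic smoothness
            prior, by gradient descent on the combined cost function."""
            n = A.shape[1]
            D = np.eye(n) - np.eye(n, k=1)        # first-difference operator
            H = A.T @ A + beta * D.T @ D          # Hessian of the quadratic cost
            step = 1.0 / np.linalg.eigvalsh(H).max()
            x = np.zeros(n)
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y) + beta * (D.T @ (D @ x))
                x -= step * grad
            return x

        rng = np.random.default_rng(0)
        A = rng.random((40, 20))                  # toy forward model
        x_true = np.convolve(rng.random(20), np.ones(5) / 5, mode="same")
        y = A @ x_true + 0.01 * rng.normal(size=40)
        print(np.round(mbir(A, y), 2))            # smooth estimate close to x_true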

  19. Progress Implementing a Model-Based Iterative Reconstruction Algorithm for Ultrasound Imaging of Thick Concrete

    SciTech Connect

    Almansouri, Hani; Johnson, Christi R; Clayton, Dwight A; Polsky, Yarom; Bouman, Charlie; Santos-Villalobos, Hector J

    2017-01-01

    All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.

  20. Computationally Efficient Implementation of a Novel Algorithm for the General Unified Threshold Model of Survival (GUTS)

    PubMed Central

    Albert, Carlo; Vogel, Sören

    2016-01-01

    The General Unified Threshold model of Survival (GUTS) provides a consistent mathematical framework for survival analysis. However, the calibration of GUTS models is computationally challenging. We present a novel algorithm and its fast implementation in our R package, GUTS, that help to overcome these challenges. We show a step-by-step application example consisting of model calibration and uncertainty estimation as well as making probabilistic predictions and validating the model with new data. Using self-defined wrapper functions, we show how to produce informative text printouts and plots without effort, for the inexperienced as well as the advanced user. The complete ready-to-run script is available as supplemental material. We expect that our software facilitates novel re-analysis of existing survival data as well as asking new research questions in a wide range of sciences. In particular the ability to quickly quantify stressor thresholds in conjunction with dynamic compensating processes, and their uncertainty, is an improvement that complements current survival analysis methods. PMID:27340823

  1. Implementation of a conjugate gradient algorithm for thermal diffusivity identification in a moving boundaries system

    NASA Astrophysics Data System (ADS)

    Perez, L.; Autrique, L.; Gillet, M.

    2008-11-01

    The aim of this paper is to investigate the identification of the thermal diffusivity of a multilayered material dedicated to fire protection. In a military framework, fire protection needs to meet specific requirements, and operational protective systems must be constantly improved in order to keep up with the development of new weapons. In the specific domain of passive fire protection, intumescent coatings can be an effective solution on the battlefield. Intumescent materials have the ability to swell up when they are heated, building a thick multilayered coating which provides efficient thermal insulation to the underlying material. Because of the thermal aggressions (fire or explosion) that trigger the intumescent phenomena, high temperatures are involved, which prevents linearization of the mathematical model describing the evolution of the system state. A previous sensitivity analysis has shown that the thermal diffusivity of the multilayered intumescent coating is a key parameter for validating the predictive numerical tool and therefore for thermal protection optimisation. A conjugate gradient method is implemented in order to minimise the quadratic cost function related to the error between predicted and measured temperatures. This regularisation algorithm is well adapted to a large number of unknown parameters.
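
    A minimal sketch of the inverse-problem setup, with SciPy's nonlinear conjugate gradient minimizing the quadratic cost between predicted and measured temperatures. The two-exponential "forward model" below is an invented stand-in for the real nonlinear heat-transfer model of the coating.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical forward model: a response that depends on the unknown
        # diffusivity-like parameters p of two layers (illustration only).
        times = np.linspace(0.1, 10.0, 40)

        def forward_model(p, t=times):
            a1, a2 = p
            return np.exp(-a1 * t) + 0.5 * np.exp(-a2 * t)

        p_true = np.array([0.30, 1.20])
        t_meas = forward_model(p_true) \
                 + 0.002 * np.random.default_rng(0).normal(size=times.size)

        def cost(p):
            # Quadratic cost: squared error between predicted and measured data.
            r = forward_model(p) - t_meas
            return 0.5 * np.dot(r, r)

        res = minimize(cost, x0=np.array([0.1, 0.5]), method="CG")
        print(res.x)   # should approach p_true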

  2. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones

    PubMed Central

    Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon

    2016-01-01

    The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angles provided in the position-related output of Android’s LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%–60%, thereby reducing the existing error of 3–4 m to just 1 m. The proposed algorithm enables the position error to be directly corrected via software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly beneficial in terms of portability and cost savings. PMID:27322284
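
    A sketch of one common formulation of the correction projection, under illustrative geometry: line-of-sight unit vectors are built from elevation/azimuth, stacked (with a clock column) into an observation matrix, and the per-satellite pseudorange corrections are projected into a position-domain correction by least squares. The sign conventions and the ECEF transformation used in the paper are omitted, and the satellite data below are made up.

        import numpy as np

        def los_unit_vectors(elev_deg, az_deg):
            """Local (ENU) line-of-sight unit vectors from elevation/azimuth."""
            el, az = np.radians(elev_deg), np.radians(az_deg)
            e = np.cos(el) * np.sin(az)
            n = np.cos(el) * np.cos(az)
            u = np.sin(el)
            return np.column_stack([e, n, u])

        def position_domain_correction(elev_deg, az_deg, prc):
            """Project per-satellite pseudorange corrections (PRC) into a
            position-domain correction dx via least squares: prc ~ G @ dx,
            with G rows built from the LOS vectors plus a clock column."""
            los = los_unit_vectors(elev_deg, az_deg)
            G = np.hstack([-los, np.ones((los.shape[0], 1))])
            dx, *_ = np.linalg.lstsq(G, prc, rcond=None)
            return dx[:3]   # ENU correction; dx[3] absorbs the clock term

        # Illustrative satellite geometry and corrections (not real data).
        elev = np.array([60, 35, 20, 45, 70])
        az = np.array([10, 120, 230, 300, 45])
        prc = np.array([1.8, 2.3, 2.9, 2.1, 1.6])    # metres
        print(position_domain_correction(elev, az, prc))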

  3. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones.

    PubMed

    Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon

    2016-06-18

    The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angles provided in the position-related output of Android's LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%-60%, thereby reducing the existing error of 3-4 m to just 1 m. The proposed algorithm enables the position error to be directly corrected via software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly beneficial in terms of portability and cost savings.

  4. Technical Report: Algorithm and Implementation for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

    SciTech Connect

    McLoughlin, Kevin

    2016-01-11

    This report describes the design and implementation of an algorithm for estimating relative microbial abundances, together with confidence limits, using data from metagenomic DNA sequencing. For the background behind this project and a detailed discussion of our modeling approach for metagenomic data, we refer the reader to our earlier technical report, dated March 4, 2014. Briefly, we described a fully Bayesian generative model for paired-end sequence read data, incorporating the effects of the relative abundances, the distribution of sequence fragment lengths, fragment position bias, sequencing errors and variations between the sampled genomes and the nearest reference genomes. A distinctive feature of our modeling approach is the use of a Chinese restaurant process (CRP) to describe the selection of genomes to be sampled, and thus the relative abundances. The CRP component is desirable for fitting abundances to reads that may map ambiguously to multiple targets, because it naturally leads to sparse solutions that select the best representative from each set of nearly equivalent genomes.
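
    A toy sampler for the Chinese restaurant process mentioned above, showing how the prior concentrates reads onto a few "tables" (genomes) and thus induces sparse abundance estimates; the concentration parameter is arbitrary.

        import numpy as np

        def crp_table_counts(n_customers, alpha, seed=0):
            """Sample table occupancies from a Chinese restaurant process:
            each read ("customer") joins an existing genome ("table") with
            probability proportional to its count, or opens a new table
            with probability proportional to alpha."""
            rng = np.random.default_rng(seed)
            counts = []
            for _ in range(n_customers):
                probs = np.array(counts + [alpha], dtype=float)
                probs /= probs.sum()
                table = rng.choice(len(probs), p=probs)
                if table == len(counts):
                    counts.append(1)              # open a new table
                else:
                    counts[table] += 1
            return sorted(counts, reverse=True)

        print(crp_table_counts(1000, alpha=2.0))  # a few large tables, sparse tail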

  5. Implementation and evaluation of an expectation maximization reconstruction algorithm for gamma emission breast tomosynthesis

    PubMed Central

    Gong, Zongyi; Klanian, Kelly; Patel, Tushita; Sullivan, Olivia; Williams, Mark B.

    2012-01-01

    Purpose: We are developing a dual modality tomosynthesis breast scanner in which x-ray transmission tomosynthesis and gamma emission tomosynthesis are performed sequentially with the breast in a common configuration. In both modalities, projection data are obtained over an angular range of less than 180° from one side of the mildly compressed breast, resulting in incomplete and asymmetrical sampling. The objective of this work is to implement and evaluate a maximum likelihood expectation maximization (MLEM) reconstruction algorithm for gamma emission breast tomosynthesis (GEBT). Methods: A combination of Monte Carlo simulations and phantom experiments was used to test the MLEM algorithm for GEBT. The algorithm utilizes prior information obtained from the x-ray breast tomosynthesis scan to partially compensate for the incomplete angular sampling and to perform attenuation correction (AC) and resolution recovery (RR). System spatial resolution, image artifacts, lesion contrast, and signal-to-noise ratio (SNR) were measured as image quality figures of merit. To test the robustness of the reconstruction algorithm and to assess the relative impacts of correction techniques with changing angular range, simulations and experiments were both performed using acquisition angular ranges of 45°, 90° and 135°. For comparison, a single projection containing the same total number of counts as the full GEBT scan was also obtained to simulate planar breast scintigraphy. Results: The in-plane spatial resolution of the reconstructed GEBT images is independent of source position within the reconstructed volume and independent of acquisition angular range. For 45° acquisitions, spatial resolution in the depth dimension (the direction of breast compression) is degraded with increasing source depth (increasing distance from the collimator surface). Increasing the acquisition angular range from 45° to 135° both greatly reduces this depth dependence and improves the average depth ...
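
    For reference, a generic MLEM iteration in NumPy with a toy system matrix and Poisson counts; the GEBT version additionally folds the x-ray-derived attenuation correction and resolution recovery into the system matrix, which is not modeled here.

        import numpy as np

        def mlem(A, y, n_iter=50):
            """Generic MLEM update: x <- x / (A^T 1) * A^T (y / (A x)).
            A is the system matrix (detector bins x voxels), y the counts."""
            x = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0])          # sensitivity image
            for _ in range(n_iter):
                proj = A @ x
                proj[proj == 0] = 1e-12               # guard against divide-by-zero
                x *= (A.T @ (y / proj)) / sens
            return x

        rng = np.random.default_rng(0)
        A = rng.random((30, 10))                      # toy system matrix
        x_true = np.zeros(10)
        x_true[3], x_true[7] = 5.0, 2.0
        y = rng.poisson(A @ x_true)                   # Poisson-count projections
        print(np.round(mlem(A, y), 2))                # activity at voxels 3 and 7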

  6. FPGA implementation of Santos-Victor optical flow algorithm for real-time image processing: a useful attempt

    NASA Astrophysics Data System (ADS)

    Cobos Arribas, Pedro; Monasterio Huelin Macia, Felix

    2003-04-01

    An FPGA-based hardware implementation of the Santos-Victor optical flow algorithm, useful in robot guidance applications, is described in this paper. The system used contains an ALTERA FPGA (20K100), an interface with a digital camera, three VRAM memories to hold the input data, and some output memories (a VRAM and an EDO) to hold the results. The system had been used previously to develop and test other vision algorithms, such as image compression and optical flow calculation with differential and correlation methods. The designed system allows the digital camera, or the FPGA output (the results of the algorithms), to be connected to a PC through its FireWire or USB port. The problems encountered on this occasion have motivated the adoption of a different hardware structure for certain vision algorithms with special requirements that need very code-intensive processing.

  7. Get Your Atoms in Order--An Open-Source Implementation of a Novel and Robust Molecular Canonicalization Algorithm.

    PubMed

    Schneider, Nadine; Sayle, Roger A; Landrum, Gregory A

    2015-10-26

    Finding a canonical ordering of the atoms in a molecule is a prerequisite for generating a unique representation of the molecule. The canonicalization of a molecule is usually accomplished by applying some sort of graph relaxation algorithm, the most common of which is the Morgan algorithm. There are known issues with that algorithm that lead to noncanonical atom orderings as well as problems when it is applied to large molecules like proteins. Furthermore, each cheminformatics toolkit or software provides its own version of a canonical ordering, most based on unpublished algorithms, which also complicates the generation of a universal unique identifier for molecules. We present an alternative canonicalization approach that uses a standard stable-sorting algorithm instead of a Morgan-like index. Two new invariants that allow canonical ordering of molecules with dependent chirality as well as those with highly symmetrical cyclic graphs have been developed. The new approach proved to be robust and fast when tested on the 1.45 million compounds of the ChEMBL 20 data set in different scenarios like random renumbering of input atoms or SMILES round tripping. Our new algorithm is able to generate a canonical order of the atoms of protein molecules within a few milliseconds. The novel algorithm is implemented in the open-source cheminformatics toolkit RDKit. With this paper, we provide a reference Python implementation of the algorithm that could easily be integrated in any cheminformatics toolkit. This provides a first step toward a common standard for canonical atom ordering to generate a universal unique identifier for molecules other than InChI.
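
    Since the algorithm ships in the open-source RDKit toolkit, a short usage sketch (assuming RDKit is installed): canonical atom ranks are invariant under renumbering of the input atoms, so canonical SMILES round-trips agree.

        from rdkit import Chem

        # Canonical atom ranks for a molecule (benzoic acid as an example).
        mol = Chem.MolFromSmiles("c1ccccc1C(=O)O")
        print(list(Chem.CanonicalRankAtoms(mol)))

        # Renumbering the atoms does not change the canonical SMILES.
        order = list(range(mol.GetNumAtoms()))[::-1]
        renumbered = Chem.RenumberAtoms(mol, order)
        print(Chem.MolToSmiles(mol) == Chem.MolToSmiles(renumbered))   # True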

  8. Chlorophyll fluorescence: implementation in the full physics RemoTeC algorithm

    NASA Astrophysics Data System (ADS)

    Hahne, Philipp; Frankenberg, Christian; Hasekamp, Otto; Landgraf, Jochen; Butz, André

    2014-05-01

    Several operating and future satellite missions are dedicated to enhancing our understanding of the carbon cycle. They infer the atmospheric concentrations of carbon dioxide and methane from shortwave infrared absorption spectra of sunlight backscattered from Earth's atmosphere and surface. Exhibiting high spatial and temporal resolution, the inferred gas concentration databases provide valuable information for inverse modelling of source and sink processes at the Earth's surface. However, the inversion of sources and sinks requires highly accurate total column CO2 (XCO2) and CH4 (XCH4) measurements, which remains a challenge. Recently, Frankenberg et al. (2012) showed that, besides XCO2 and XCH4, chlorophyll fluorescence can be retrieved from sounders such as GOSAT by exploiting Fraunhofer lines in the vicinity of the O2 A-band. This has two implications: a) chlorophyll fluorescence, itself a proxy for photosynthetic activity, yields new information on carbon cycle processes, and b) neglect of the fluorescence signal can induce errors in the retrieved greenhouse gas concentrations. Our RemoTeC full physics algorithm iteratively retrieves the target gas concentrations XCO2 and XCH4 along with atmospheric scattering properties and other auxiliary parameters. The radiative transfer model (RTM) LINTRAN provides RemoTeC with the single and multiple scattered intensity field and its analytically calculated derivatives. Here, we report on the implementation of a fluorescence light source at the lower boundary of our RTM. Processing three years of GOSAT data, we evaluate the performance of the refined retrieval method. To this end, we compare different retrieval configurations, using the s- and p-polarization detectors independently and combined, and validate against independent data sources.

  9. Optical pattern recognition architecture implementing the mean-square error correlation algorithm

    SciTech Connect

    Molley, P.A.

    1991-10-22

    This patent describes an optical architecture implementing the mean-square error correlation algorithm, $\mathrm{MSE} = \sum (I - R)^2$, for discriminating the presence of a reference image $R$ in an input image scene $I$ by computing the mean-square error between a time-varying reference image signal $s_1(t)$ and a time-varying input image signal $s_2(t)$. The architecture includes a laser diode light source which is temporally modulated by a double-sideband suppressed-carrier source modulation signal $I_1(t)$ having the form $I_1(t) = A_1\left[1 + \sqrt{2}\, m_1 s_1(t) \cos(2\pi f_0 t)\right]$, and the modulated light output from the laser diode source is diffracted by an acousto-optic deflector. The resultant intensity of the +1 diffracted order from the acousto-optic device is given by $I_2(t) = A_2\left[1 + 2 m_2^2 s_2^2(t) - 2\sqrt{2}\, m_2 s_2(t) \cos(2\pi f_0 t)\right]$. The time integration of the two signals $I_1(t)$ and $I_2(t)$ on the CCD detector plane produces the result $R(\tau)$ of the mean-square error, having the form $R(\tau) = A_1 A_2 \left\{ T + 2 m_2^2 \int s_2^2(t-\tau)\, dt - 2 m_1 m_2 \cos(2\pi f_0 \tau) \int s_1(t)\, s_2(t-\tau)\, dt \right\}$.

  10. Parallel Implementation of the Wideband DOA Algorithm on the IBM Cell BE Processor

    DTIC Science & Technology

    2010-05-01

    The Multiple Signal Classification (MUSIC) algorithm is a powerful technique for determining the Direction of Arrival (DOA) of signals ... Cell Broadband Engine Processor (Cell BE). The process of adapting the serial MUSIC algorithm to the Cell BE is analyzed in terms of parallelism and ... using the Multiple Signal Classification (MUSIC) algorithm [4]. Excerpted processing steps: computation of the focus matrix; computation of the number of sources; separation of signal ...
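
    As background for the excerpt, a narrowband MUSIC pseudospectrum sketch for a uniform linear array; the report's wideband variant adds the focusing-matrix step listed above, which is not reproduced here. Array geometry, source angles and noise level are invented.

        import numpy as np
        from scipy.signal import find_peaks

        def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 721)):
            """Narrowband MUSIC pseudospectrum for a uniform linear array.
            X: snapshot matrix (sensors x samples); d: spacing in wavelengths."""
            m = X.shape[0]
            R = X @ X.conj().T / X.shape[1]          # sample covariance
            w, v = np.linalg.eigh(R)                 # eigenvalues in ascending order
            En = v[:, :m - n_sources]                # noise subspace
            p = np.empty(angles.size)
            for i, th in enumerate(np.radians(angles)):
                a = np.exp(-2j * np.pi * d * np.arange(m) * np.sin(th))
                p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
            return angles, p

        rng = np.random.default_rng(0)
        m, n = 8, 200
        doas = np.radians([-20.0, 30.0])
        A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(doas)))
        S = rng.normal(size=(2, n)) + 1j * rng.normal(size=(2, n))
        X = A @ S + 0.1 * (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n)))
        ang, p = music_spectrum(X, n_sources=2)
        peaks, _ = find_peaks(p)
        print(np.sort(ang[peaks[np.argsort(p[peaks])[-2:]]]))   # near [-20, 30]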

  11. Implementation of FFT Algorithm using DSP TMS320F28335 for Shunt Active Power Filter

    NASA Astrophysics Data System (ADS)

    Patel, Pinkal Jashvantbhai; Patel, Rajesh M.; Patel, Vinod

    2016-07-01

    This work presents simulation, analysis and experimental verification of a Fast Fourier Transform (FFT) algorithm for a shunt active power filter based on a three-level inverter. Different types of filters can be used for the elimination of harmonics in the power system. In this work, an FFT algorithm for reference current generation is discussed. The FFT control algorithm is verified using PSIM simulation results with a DLL block and C code. Simulation results are compared with experimental results for the FFT algorithm using the DSP TMS320F28335 for the shunt active power filter application.
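
    A minimal sketch of FFT-based reference current generation, assuming a sampling window of an integer number of fundamental cycles: the fundamental bin is isolated and subtracted from the distorted load current, leaving the harmonic content that the shunt filter would be commanded to cancel. Waveform and rates are invented.

        import numpy as np

        f0, fs, cycles = 50.0, 6400.0, 2            # fundamental (Hz), sampling, window
        t = np.arange(0, cycles / f0, 1 / fs)       # integer number of cycles
        # Distorted load current: fundamental plus 5th and 7th harmonics.
        i_load = (10.0 * np.sin(2 * np.pi * f0 * t)
                  + 2.0 * np.sin(2 * np.pi * 5 * f0 * t)
                  + 1.0 * np.sin(2 * np.pi * 7 * f0 * t))

        spec = np.fft.rfft(i_load)
        freqs = np.fft.rfftfreq(t.size, 1 / fs)
        k = np.argmin(np.abs(freqs - f0))           # bin of the fundamental
        fund_only = np.zeros_like(spec)
        fund_only[k] = spec[k]                      # keep only the fundamental
        i_fund = np.fft.irfft(fund_only, n=t.size)
        i_ref = i_load - i_fund                     # harmonic content to cancel
        err = i_ref - (2.0 * np.sin(2 * np.pi * 5 * f0 * t)
                       + 1.0 * np.sin(2 * np.pi * 7 * f0 * t))
        print(np.max(np.abs(err)))                  # ~1e-13: clean extraction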

  12. Implementation and analysis of list mode algorithm using tubes of response on a dedicated brain and breast PET

    NASA Astrophysics Data System (ADS)

    Moliner, L.; Correcher, C.; González, A. J.; Conde, P.; Hernández, L.; Orero, A.; Rodríguez-Álvarez, M. J.; Sánchez, F.; Soriano, A.; Vidal, L. F.; Benlloch, J. M.

    2013-02-01

    In this work we present an innovative algorithm for the reconstruction of PET images based on the List-Mode (LM) technique which improves their spatial resolution compared to results obtained with current MLEM algorithms. This study is part of a larger project aiming to improve diagnosis in early Alzheimer's disease stages by means of a newly developed hybrid PET-MR insert. At present, Alzheimer's is the most relevant neurodegenerative disease, and early diagnosis is the best way to apply an effective treatment. The PET device will consist of several monolithic LYSO crystals coupled to SiPM detectors. Monolithic crystals can reduce scanner costs and have the advantage of enabling the implementation of very small virtual pixels in their geometry. This is especially useful for LM reconstruction algorithms, since they do not need a pre-calculated system matrix. We have developed an LM algorithm which has been initially tested with a large-aperture (186 mm) breast PET system. Instead of using the common lines of response, the algorithm incorporates a novel calculation of tubes of response. The new approach improves the volumetric spatial resolution by about a factor of 2 at the border of the field of view when compared with the traditionally used MLEM algorithm. Moreover, it has also been shown to decrease the image noise, thus increasing the image quality.

  13. The implementation of contour-based object orientation estimation algorithm in FPGA-based on-board vision system

    NASA Astrophysics Data System (ADS)

    Alpatov, Boris; Babayan, Pavel; Ershov, Maksim; Strotov, Valery

    2016-10-01

    This paper describes the implementation of an orientation estimation algorithm in an FPGA-based vision system. An approach to estimate the orientation of objects lacking axial symmetry is proposed. The suggested algorithm is intended to estimate the orientation of a specific known 3D object based on the object's 3D model. The proposed orientation estimation algorithm consists of two stages: learning and estimation. The learning stage is devoted to exploring the studied object. Using the 3D model, we gather a set of training images by capturing the 3D model from viewpoints evenly distributed on a sphere. The distribution of points on the sphere follows the geosphere principle. The gathered training image set is used for calculating descriptors, which are then used in the estimation stage of the algorithm. The estimation stage focuses on the matching process between an observed image descriptor and the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy in all case studies. The real-time performance of the algorithm in the FPGA-based vision system was demonstrated.

  14. Final Report for Award #DE-SC3956 Separating Algorithm and Implementation via programming Model Injection (SAIMI)

    SciTech Connect

    Strout, Michelle

    2015-08-15

    Programming parallel machines is fraught with difficulties: the obfuscation of algorithms due to implementation details such as communication and synchronization, the need for transparency between language constructs and performance, the difficulty of performing program analysis to enable automatic parallelization techniques, and the existence of important "dusty deck" codes. The SAIMI project developed abstractions that enable the orthogonal specification of algorithms and implementation details within the context of existing DOE applications. The main idea is to enable the injection of small programming models such as expressions involving transcendental functions, polyhedral iteration spaces with sparse constraints, and task graphs into full programs through the use of pragmas. These smaller, more restricted programming models enable the orthogonal specification of many implementation details such as how to map the computation onto parallel processors, how to schedule the computation, and how to allocate storage for the computation. At the same time, these small programming models enable the expression of the most computationally intense and communication-heavy portions of many scientific simulations. The ability to orthogonally manipulate the implementation of such computations will significantly ease performance programming efforts and expose transformation possibilities and parameters to automated approaches such as autotuning. At Colorado State University, the SAIMI project was supported through DOE grant DE-SC3956 from April 2010 through August 2015. The SAIMI project has contributed a number of important results to programming abstractions that enable the orthogonal specification of implementation details in scientific codes. This final report summarizes the research that was funded by the SAIMI project.

  15. A novel implementation algorithm of asymptotic homogenization for predicting the effective coefficient of thermal expansion of periodic composite materials

    NASA Astrophysics Data System (ADS)

    Zhang, Yongcun; Shang, Shipeng; Liu, Shutian

    2017-01-01

    Asymptotic homogenization (AH) is a general method for predicting the effective coefficient of thermal expansion (CTE) of periodic composites. It has a rigorous mathematical foundation and can give an accurate solution if the macrostructure is large enough to comprise an infinite number of unit cells. In this paper, a novel implementation algorithm of asymptotic homogenization (NIAH) is developed to calculate the effective CTE of periodic composite materials. Compared with the previous implementation of AH, there are two obvious advantages. One is its implementation as simple as representative volume element (RVE). The new algorithm can be executed easily using commercial finite element analysis (FEA) software as a black box. The detailed process of the new implementation of AH has been provided. The other is that NIAH can simultaneously use more than one element type to discretize a unit cell, which can save much computational cost in predicting the CTE of a complex structure. Several examples are carried out to demonstrate the effectiveness of the new implementation. This work is expected to greatly promote the widespread use of AH in predicting the CTE of periodic composite materials.

  16. An optical model for translucent volume rendering and its implementation using the preintegrated shear-warp algorithm.

    PubMed

    Li, Bin; Tian, Lianfang; Ou, Shanxing

    2010-01-01

    In order to efficiently and effectively reconstruct 3D medical images and clearly display the detailed information of inner structures and the hidden interfaces between different media, an Improved Volume Rendering Optical Model (IVROM) for medical translucent volume rendering and its implementation using the preintegrated Shear-Warp volume rendering algorithm are proposed in this paper, which can readily be applied on a commodity PC. Based on the classical absorption and emission model, the effects of volumetric shadows and direct and indirect scattering are also considered in the proposed IVROM model. Moreover, the implementation of the Improved Translucent Volume Rendering Method (ITVRM), integrating the IVROM model, Shear-Warp and the preintegrated volume rendering algorithm, is described, in which the aliasing and staircase effects resulting from undersampling in Shear-Warp are avoided by the preintegrated volume rendering technique. This study demonstrates the superiority of the proposed method.

  17. Design and Implementation of High-Speed Input-Queued Switches Based on a Fair Scheduling Algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Qingsheng; Zhao, Hua-An

    To increase both the capacity and the processing speed of input-queued (IQ) switches, we proposed a fair scalable scheduling architecture (FSSA). By employing an FSSA comprised of several cascaded sub-schedulers, large-scale high-performance switches or routers can be realized without the capacity limitation of a monolithic device. In this paper, we present a fair scheduling algorithm named FSSA_DI based on an improved FSSA in which a distributed iteration scheme is employed; the scheduler performance can be improved and the processing time reduced as well. Simulation results show that FSSA_DI achieves better performance on average delay and throughput under heavy loads compared to other existing algorithms. Moreover, a practical 64 × 64 FSSA using the FSSA_DI algorithm is implemented with four Xilinx Virtex-4 FPGAs. Measurement results show that the data rate of our solution can reach 800 Mbps, and the tradeoff between performance and hardware complexity has been resolved effectively.

  18. On implementation of EM-type algorithms in the stochastic models for a matrix computing on GPU

    SciTech Connect

    Gorshenin, Andrey K.

    2015-03-10

    The paper discusses the main ideas of an implementation of EM-type algorithms for computing on graphics processors and their application to probabilistic models based on Cox processes. An example of GPU-adapted MATLAB source code for finite normal mixtures with the expectation-maximization matrix formulas is given. The testing of computational efficiency for GPU vs. CPU is illustrated for different sample sizes.
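
    A small vectorized EM sketch for a 1D finite normal mixture, written as dense matrix operations of the kind that port directly to GPU array libraries (the paper uses GPU-adapted MATLAB; this NumPy rendering is an illustration only).

        import numpy as np

        def em_normal_mixture(x, k=2, n_iter=100, seed=0):
            """EM for a 1D finite normal mixture, expressed as dense matrix
            operations (the formulation that maps well onto GPU arrays)."""
            rng = np.random.default_rng(seed)
            n = x.size
            w = np.full(k, 1.0 / k)                  # mixture weights
            mu = rng.choice(x, k, replace=False)     # initial means
            var = np.full(k, x.var())
            for _ in range(n_iter):
                # E-step: responsibilities as an (n x k) matrix in one shot.
                diff = x[:, None] - mu[None, :]
                pdf = np.exp(-0.5 * diff**2 / var) / np.sqrt(2 * np.pi * var)
                r = w * pdf
                r /= r.sum(axis=1, keepdims=True)
                # M-step: closed-form weight, mean and variance updates.
                nk = r.sum(axis=0)
                w = nk / n
                mu = (r * x[:, None]).sum(axis=0) / nk
                var = (r * (x[:, None] - mu)**2).sum(axis=0) / nk
            return w, mu, var

        rng = np.random.default_rng(1)
        x = np.concatenate([rng.normal(-2, 0.5, 600), rng.normal(3, 1.0, 400)])
        print(em_normal_mixture(x))   # weights ~(0.6, 0.4), means ~(-2, 3)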

  19. Implementation of ternary Shor’s algorithm based on vibrational states of an ion in anharmonic potential

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Chen, Shu-Ming; Zhang, Jian; Wu, Chun-Wang; Wu, Wei; Chen, Ping-Xing

    2015-03-01

    It is widely believed that Shor’s factoring algorithm provides a driving force to boost quantum computing research. However, a serious obstacle to its binary implementation is the large number of quantum gates. Non-binary quantum computing is an efficient way to reduce the required number of elemental gates. Here, we propose optimization schemes for the implementation of Shor’s algorithm and take a ternary version for factorizing 21 as an example. The optimized factorization is achieved by a two-qutrit quantum circuit, which consists of only two single-qutrit gates and one ternary controlled-NOT gate. This two-qutrit quantum circuit is then encoded into the nine lower vibrational states of an ion trapped in a weakly anharmonic potential. Optimal control theory (OCT) is employed to derive the manipulation electric field for transferring the encoded states. The ternary Shor’s algorithm can be implemented in one single step. Numerical simulation results show that the accuracy of the state transformations is about 0.9919. Project supported by the National Natural Science Foundation of China (Grant No. 61205108) and the High Performance Computing (HPC) Foundation of National University of Defense Technology, China.

  20. Optical pattern recognition architecture implementing the mean-square error correlation algorithm

    DOEpatents

    Molley, Perry A.

    1991-01-01

    An optical architecture implementing the mean-square error correlation algorithm, $\mathrm{MSE} = \sum (I - R)^2$, for discriminating the presence of a reference image $R$ in an input image scene $I$ by computing the mean-square error between a time-varying reference image signal $s_1(t)$ and a time-varying input image signal $s_2(t)$ includes a laser diode light source which is temporally modulated by a double-sideband suppressed-carrier source modulation signal $I_1(t)$ having the form $I_1(t) = A_1[1 + \sqrt{2}\, m_1 s_1(t) \cos(2\pi f_0 t)]$, and the modulated light output from the laser diode source is diffracted by an acousto-optic deflector. The resultant intensity of the +1 diffracted order from the acousto-optic device is given by $I_2(t) = A_2[1 + 2 m_2^2 s_2^2(t) - 2\sqrt{2}\, m_2 s_2(t) \cos(2\pi f_0 t)]$. The time integration of the two signals $I_1(t)$ and $I_2(t)$ on the CCD detector plane produces the result $R(\tau)$ of the mean-square error having the form $R(\tau) = A_1 A_2 \{ T + 2 m_2^2 \int s_2^2(t-\tau)\, dt - 2 m_1 m_2 \cos(2\pi f_0 \tau) \int s_1(t)\, s_2(t-\tau)\, dt \}$, where $s_1(t)$ is the signal input to the diode modulation source; $s_2(t)$ is the signal input to the AOD modulation source; $A_1$ is the light intensity; $A_2$ is the diffraction efficiency; $m_1$ and $m_2$ are constants that determine the signal-to-bias ratio; $f_0$ is the frequency offset between the oscillator at $f_c$ and the modulation at $f_c + f_0$; and $a_0$ and $a_1$ are constants chosen to bias the diode source and the acousto-optic deflector into their respective linear operating regions so that the diode source exhibits a linear intensity characteristic and the AOD exhibits a linear amplitude characteristic.

  1. Extended Adaptive Biasing Force Algorithm. An On-the-Fly Implementation for Accurate Free-Energy Calculations.

    PubMed

    Fu, Haohao; Shao, Xueguang; Chipot, Christophe; Cai, Wensheng

    2016-08-09

    Proper use of the adaptive biasing force (ABF) algorithm in free-energy calculations needs certain prerequisites to be met, namely, that the Jacobian for the metric transformation and its first derivative be available and the coarse variables be independent and fully decoupled from any holonomic constraint or geometric restraint, thereby limiting singularly the field of application of the approach. The extended ABF (eABF) algorithm circumvents these intrinsic limitations by applying the time-dependent bias onto a fictitious particle coupled to the coarse variable of interest by means of a stiff spring. However, with the current implementation of eABF in the popular molecular dynamics engine NAMD, a trajectory-based post-treatment is necessary to derive the underlying free-energy change. Usually, such a posthoc analysis leads to a decrease in the reliability of the free-energy estimates due to the inevitable loss of information, as well as to a drop in efficiency, which stems from substantial read-write accesses to file systems. We have developed a user-friendly, on-the-fly code for performing eABF simulations within NAMD. In the present contribution, this code is probed in eight illustrative examples. The performance of the algorithm is compared with traditional ABF, on the one hand, and the original eABF implementation combined with a posthoc analysis, on the other hand. Our results indicate that the on-the-fly eABF algorithm (i) supplies the correct free-energy landscape in those critical cases where the coarse variables at play are coupled to either each other or to geometric restraints or holonomic constraints, (ii) greatly improves the reliability of the free-energy change, compared to the outcome of a posthoc analysis, and (iii) represents a negligible additional computational effort compared to regular ABF. Moreover, in the proposed implementation, guidelines for choosing two parameters of the eABF algorithm, namely the stiffness of the spring and the mass ...

  2. Study on algorithm and real-time implementation of infrared image processing based on FPGA

    NASA Astrophysics Data System (ADS)

    Pang, Yulin; Ding, Ruijun; Liu, Shanshan; Chen, Zhe

    2010-10-01

    With the fast development of Infrared Focal Plane Array (IRFPA) detectors, high-quality real-time image processing becomes more important in infrared imaging systems. Facing the demand for better visual effect and good performance, we find the FPGA an ideal hardware choice for realizing image processing algorithms, as it fully takes advantage of high speed, high reliability and the capability of processing a great amount of data in parallel. In this paper, a new dynamic linear extension algorithm is introduced, which automatically finds the proper extension range. This image enhancement algorithm is designed in Verilog HDL and realized on an FPGA. It works at higher speed than serial processing devices such as CPUs and DSPs. Experiments show that this hardware unit for dynamic linear extension enhances the visual effect of infrared images effectively.

  3. Multi-Core Parallel Implementation of Data Filtering Algorithm for Multi-Beam Bathymetry Data

    NASA Astrophysics Data System (ADS)

    Liu, Tianyang; Xu, Weiming; Yin, Xiaodong; Zhao, Xiliang

    In order to improve the processing speed of multi-beam bathymetry data, we propose a parallel filtering algorithm based on multi-threading technology. The algorithm consists of two parts. The first is a parallel data re-ordering step, in which the survey area is divided into a regular grid and the discrete bathymetry data are arranged into the grid cells in parallel. The second part is the parallel filtering step, which involves dividing the grid into blocks and executing the filtering process in each block in parallel. In the experiment, the speedup of the proposed algorithm reaches about 3.67 on an 8-core computer. The result shows the method can improve computing efficiency significantly compared to the traditional algorithm.
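
    A sketch of the two-stage pattern under stated assumptions: soundings are binned into a regular grid, then each block is filtered in a worker pool (processes here; threads would follow the same structure). The median/deviation test is a stand-in for the paper's filter, and all data are synthetic.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def filter_block(args):
            """Filter one grid block: flag soundings deviating strongly from
            the block median depth (a stand-in for the actual filter)."""
            idx, depths = args
            if depths.size < 3:
                return idx, np.ones(depths.size, dtype=bool)
            med = np.median(depths)
            keep = np.abs(depths - med) < 3 * (np.std(depths) + 1e-9)
            return idx, keep

        def parallel_filter(xy, depth, cell=100.0, workers=4):
            # Stage 1: re-order the discrete soundings into a regular grid.
            keys = (xy // cell).astype(int)
            blocks = {}
            for i, k in enumerate(map(tuple, keys)):
                blocks.setdefault(k, []).append(i)
            # Stage 2: filter the blocks in parallel.
            tasks = [(np.array(ix), depth[np.array(ix)]) for ix in blocks.values()]
            keep = np.ones(depth.size, dtype=bool)
            with ProcessPoolExecutor(max_workers=workers) as ex:
                for idx, ok in ex.map(filter_block, tasks):
                    keep[idx] = ok
            return keep

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            xy = rng.uniform(0, 1000, size=(20000, 2))
            depth = 50 + 0.01 * xy[:, 0] + rng.normal(0, 0.2, 20000)
            depth[::500] += 20                      # inject outliers
            print(parallel_filter(xy, depth).sum(), "soundings kept")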

  4. User's manual for a fuel-conservative descent planning algorithm implemented on a small programmable calculator

    SciTech Connect

    Vicroy, D.D.

    1984-01-01

    A simplified flight management descent algorithm was developed and programmed on a small programmable calculator. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance, with consideration given to gross weight, wind, and nonstandard temperature effects. An explanation and examples of how the algorithm is used, as well as a detailed flow chart and a listing of the algorithm, are included.

  5. A Hardware-Implementation-Friendly Pulse-Coupled Neural Network Algorithm for Analog Image-Feature-Generation Circuits

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Shibata, Tadashi

    2007-04-01

    Pulse-coupled neural networks (PCNNs) are biologically inspired algorithms that have been shown to be highly effective for image feature generation. However, conventional PCNNs are software-oriented algorithms that are too complicated to implement as very-large-scale integration (VLSI) hardware. To employ PCNNs in image-feature-generation VLSIs, a hardware-implementation-friendly PCNN is proposed here. By introducing the concepts of exponentially decaying output and a one-branch dendritic tree, the new PCNN eliminates the large number of convolution operators and floating-point multipliers in conventional PCNNs without compromising its performance at image feature generation. As an analog VLSI implementation of the new PCNN, an image-feature-generation circuit is proposed. By employing floating-gate metal-oxide-semiconductor (MOS) technology, the circuit achieves a full voltage-mode implementation of the PCNN in a compact structure. Inheriting the merits of the PCNN, the circuit is capable of generating rotation-independent and translation-independent features for input patterns, which has been verified by SPICE simulation.

  6. Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas

    2010-01-01

    This letter assesses the performance on Landsat-7 images of a modified version of a cloud-masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form in order to maintain greater versatility and ease of use. To evaluate the masking algorithm, we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assessment (ACCA) algorithm, which includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA, both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes and a root-mean-square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).
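
    A toy reflectance-only cloud test in the spirit of the thermal-free approach described above; the brightness and "whiteness" thresholds are illustrative guesses, not the tuned values of the algorithm.

        import numpy as np

        def simple_cloud_mask(blue, red, nir):
            """Flag pixels that are bright and spectrally flat ("white"),
            two classic reflectance-only cloud signatures."""
            bright = blue > 0.18
            whiteness = np.abs(nir - red) / (nir + red + 1e-9) < 0.3
            return bright & whiteness

        rng = np.random.default_rng(0)
        blue = rng.uniform(0.02, 0.35, (4, 4))
        red = rng.uniform(0.02, 0.45, (4, 4))
        nir = rng.uniform(0.05, 0.50, (4, 4))
        mask = simple_cloud_mask(blue, red, nir)
        print(mask.mean(), "cloud fraction")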

  7. Commodity cluster and hardware-based massively parallel implementations of hyperspectral imaging algorithms

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Chang, Chein-I.; Plaza, Javier; Valencia, David

    2006-05-01

    The incorporation of hyperspectral sensors aboard airborne/satellite platforms is currently producing a nearly continual stream of multidimensional image data, and this high data volume has quickly introduced new processing challenges. The price paid for the wealth of spatial and spectral information available from hyperspectral sensors is the enormous amount of data that they generate. In several applications, however, the desired information must be calculated quickly enough for practical use. High computing performance of algorithm analysis is particularly important in homeland defense and security applications, in which swift decisions often involve detection of (sub-pixel) military targets (including hostile weaponry, camouflage, concealment, and decoys) or chemical/biological agents. In order to speed up the computational performance of hyperspectral imaging algorithms, this paper develops several fast parallel data processing techniques covering four classes of algorithms: (1) unsupervised classification, (2) spectral unmixing, (3) automatic target recognition, and (4) onboard data compression. A massively parallel Beowulf cluster (Thunderhead) at NASA's Goddard Space Flight Center in Maryland is used to measure the parallel performance of the proposed algorithms. In order to explore the viability of developing onboard, real-time hyperspectral data compression algorithms, a Xilinx Virtex-II field-programmable gate array (FPGA) is also used in experiments. Our quantitative and comparative assessment of parallel techniques and strategies may help image analysts select parallel hyperspectral algorithms for specific applications.

  8. TH-E-BRE-07: Development of Dose Calculation Error Predictors for a Widely Implemented Clinical Algorithm

    SciTech Connect

    Egan, A; Laub, W

    2014-06-15

    Purpose: Several shortcomings of the current implementation of the analytic anisotropic algorithm (AAA) may lead to dose calculation errors in highly modulated treatments delivered to highly heterogeneous geometries. Here we introduce a set of dosimetric error predictors that can be applied to a clinical treatment plan and patient geometry in order to identify high-risk plans. Once a problematic plan is identified, the treatment can be recalculated with a more accurate algorithm in order to better assess its viability. Methods: Here we focus on three distinct sources of dosimetric error in the AAA algorithm. First, due to a combination of discrepancies in small-field beam modeling as well as volume-averaging effects, dose calculated through small MLC apertures can be underestimated, while that behind small MLC blocks can be overestimated. Second, due to the rectilinear scaling of the Monte Carlo generated pencil-beam kernel, energy is not properly transported through heterogeneities near, but not impeding, the central axis of the beamlet. And third, AAA overestimates dose in regions of very low density (< 0.2 g/cm³). We have developed an algorithm to detect the location and magnitude of each scenario within the patient geometry, namely the field-size index (FSI), the heterogeneous scatter index (HSI), and the low-density index (LDI), respectively. Results: The error indices successfully identify deviations between AAA and Monte Carlo dose distributions in simple phantom geometries. The algorithms are currently implemented in the MATLAB computing environment and are able to run on a typical RapidArc head-and-neck geometry in less than an hour. Conclusion: Because these error indices successfully identify each type of error in contrived cases, with sufficient benchmarking this method can be developed into a clinical tool that may help estimate AAA dose calculation errors and indicate when it might be advisable to use Monte Carlo calculations.

  9. A comparison of native GPU computing versus OpenACC for implementing flow-routing algorithms in hydrological applications

    NASA Astrophysics Data System (ADS)

    Rueda, Antonio J.; Noguera, José M.; Luque, Adrián

    2016-02-01

    In recent years GPU computing has gained wide acceptance as a simple, low-cost solution for speeding up computationally expensive processing in many scientific and engineering applications. However, in most cases accelerating a traditional CPU implementation on a GPU is a non-trivial task that requires a thorough refactoring of the code and specific optimizations that depend on the architecture of the device. OpenACC is a promising technology that aims at reducing the effort required to accelerate C/C++/Fortran code on an attached multicore device: the CPU code only has to be augmented with a few compiler directives that identify the regions to be accelerated and the way in which data have to be moved between the CPU and GPU. Its potential benefits are multiple: better code readability, less development time, lower risk of errors, and less dependency on the underlying architecture and the future evolution of GPU technology. Our aim in this work is to evaluate the pros and cons of using OpenACC against native GPU implementations in computationally expensive hydrological applications, using the classic D8 algorithm of O'Callaghan and Mark for river network extraction as a case study. We implemented the flow accumulation step of this algorithm on the CPU, using OpenACC and two different CUDA versions, comparing the length and complexity of the code and its performance with different datasets. We find that although OpenACC cannot match the performance of an optimized CUDA implementation (about 3.5× slower on average), it provides a significant performance improvement over a CPU implementation (2-6×) with a far simpler code and much less implementation effort.
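
    For reference, the flow accumulation step that the study parallelizes can be sketched serially in a few lines. The following Python version of D8 accumulation processes cells from highest to lowest elevation so a single pass suffices; it is a didactic sketch, not the paper's CUDA or OpenACC code.

```python
import numpy as np

# Minimal serial sketch of D8 flow accumulation: each cell drains to its
# steepest downslope neighbor, and accumulation counts the upstream cells.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
           (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_flow_accumulation(dem):
    rows, cols = dem.shape
    acc = np.ones_like(dem, dtype=np.int64)      # every cell contributes 1
    order = np.argsort(dem, axis=None)[::-1]     # highest elevation first
    for idx in order:
        r, c = divmod(int(idx), cols)
        best, target = 0.0, None
        for dr, dc in OFFSETS:                   # steepest-descent neighbor
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                if drop > best:
                    best, target = drop, (rr, cc)
        if target:                               # pass flow downstream
            acc[target] += acc[r, c]
    return acc

dem = np.add.outer(np.arange(5.0, 0.0, -1.0), np.arange(5.0, 0.0, -1.0))
print(d8_flow_accumulation(dem))                 # flow gathers at the low corner
```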

  10. Implementation of intensity ratio change and line-of-sight rate change algorithms for imaging infrared trackers

    NASA Astrophysics Data System (ADS)

    Viau, C. R.

    2012-06-01

    The use of the intensity-change and line-of-sight (LOS) change concepts has previously been documented in the open literature as techniques used by non-imaging infrared (IR) seekers to reject expendable IR countermeasures (IRCM). The purpose of this project was to implement IR counter-countermeasure (IRCCM) algorithms based on target intensity and kinematic behavior for a generic imaging IR (IIR) seeker model, with the underlying goal of obtaining a better understanding of how expendable IRCM can be used to defeat the latest generation of seekers. The report describes the Intensity Ratio Change (IRC) and LOS Rate Change (LRC) discrimination techniques. The algorithms and the seeker model are implemented in a physics-based simulation product called Tactical Engagement Simulation Software (TESS™). TESS is developed in the MATLAB®/Simulink® environment and is a suite of RF/IR missile software simulators used to evaluate and analyze the effectiveness of countermeasures against various classes of guided threats. The investigation evaluates the algorithms and tests their robustness by presenting the results of batch simulation runs of surface-to-air (SAM) and air-to-air (AAM) IIR missiles engaging a non-maneuvering target platform equipped with expendable IRCM as self-protection. The report discusses how varying critical parameters such as track memory time, ratio thresholds, and hold time can influence the outcome of an engagement.
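
    The idea behind the IRC discriminator can be sketched very simply: a flare separating from the target produces a sudden jump in apparent radiant intensity, which a frame-to-frame ratio test can flag. The threshold and logic below are illustrative assumptions, not TESS parameters.

```python
# Minimal sketch of an intensity-ratio-change style discriminator: flag
# frames where the tracked object's intensity jumps by more than a factor
# `ratio_threshold` relative to the previous frame (a likely flare event).
def irc_flags(intensities, ratio_threshold=1.6):
    """intensities: per-frame radiant intensity of the tracked object.
    Returns a list of frame indices flagged as countermeasure events."""
    flags = []
    for k in range(1, len(intensities)):
        ratio = intensities[k] / max(intensities[k - 1], 1e-9)
        if ratio > ratio_threshold:        # sudden brightening: likely flare
            flags.append(k)
    return flags

track = [1.0, 1.1, 1.0, 1.2, 4.8, 5.0, 4.6]   # flare ejected at frame 4
print(irc_flags(track))                        # -> [4]
```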

  11. Design and implementation of a vision-based hovering and feature tracking algorithm for a quadrotor

    NASA Astrophysics Data System (ADS)

    Lee, Y. H.; Chahl, J. S.

    2016-10-01

    This paper demonstrates an approach to the vision-based control of unmanned quadrotors for hovering and object tracking. The algorithm used the Speeded-Up Robust Features (SURF) algorithm to detect objects. The pose of the object in the image was then calculated in order to pass the pose information to the flight controller. Finally, the flight controller steered the quadrotor to approach the object based on the calculated pose data. The above processes were run using the standard onboard resources found in the 3DR Solo quadrotor in an embedded computing environment. The results showed that the algorithm behaved well during its missions, tracking and hovering, although there were significant latencies due to the low CPU performance of the onboard image processing system.

  12. Implementation of a combined algorithm designed to increase the reliability of information systems: simulation modeling

    NASA Astrophysics Data System (ADS)

    Popov, A.; Zolotarev, V.; Bychkov, S.

    2016-11-01

    This paper examines the results of experimental studies of a previously presented combined algorithm designed to increase the reliability of information systems. Data illustrating the organization and conduct of the studies are provided. As part of the study, the experimental data from simulation modeling were compared with data from the functioning of a real information system. The hypothesis of the homogeneity of the logical structure of information systems was formulated, making it possible to reconfigure the presented algorithm, more specifically, to transform it into a model for the analysis and prediction of arbitrary information systems. The results presented can be used for further research in this direction. The ability to predict the functioning of information systems can be used for strategic and economic planning, and the algorithm can serve as a means of providing information security.

  13. Efficient implementation and application of the artificial bee colony algorithm to low-dimensional optimization problems

    NASA Astrophysics Data System (ADS)

    von Rudorff, Guido Falk; Wehmeyer, Christoph; Sebastiani, Daniel

    2014-06-01

    We adapt a swarm-intelligence-based optimization method (the artificial bee colony algorithm, ABC) to enhance its parallel scaling properties and to improve the escaping behavior from deep local minima. Specifically, we apply the approach to the geometry optimization of Lennard-Jones clusters. We illustrate the performance and the scaling properties of the parallelization scheme for several system sizes (5-20 particles). Our main findings are specific recommendations for ranges of the parameters of the ABC algorithm which yield maximal performance for Lennard-Jones clusters and Morse clusters. The suggested parameter ranges for these different interaction potentials turn out to be very similar; thus, we believe that our reported values are fairly general for the ABC algorithm applied to chemical optimization problems.
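
    The employed/onlooker/scout structure of ABC is compact enough to sketch. The following minimal Python version searches for low-energy 5-atom Lennard-Jones geometries; the colony size, scout limit, and search box are illustrative assumptions, not the tuned parameter ranges reported in the study.

```python
import numpy as np

# Minimal sketch of artificial bee colony (ABC) search for low-energy
# Lennard-Jones cluster geometries. All parameter values are illustrative.
def lj_energy(x):
    """Total Lennard-Jones energy of a cluster; x has shape (n_atoms, 3)."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    r = d[np.triu_indices(len(x), k=1)]
    return np.sum(4.0 * (r**-12 - r**-6))

def abc_minimize(n_atoms=5, n_food=20, limit=30, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    food = rng.uniform(-1.5, 1.5, (n_food, n_atoms, 3))   # candidate solutions
    cost = np.array([lj_energy(f) for f in food])
    trials = np.zeros(n_food, dtype=int)                  # stagnation counters

    def try_neighbor(i):
        j = rng.integers(n_food)
        phi = rng.uniform(-1, 1, food[i].shape)
        cand = food[i] + phi * (food[i] - food[j])        # local perturbation
        c = lj_energy(cand)
        if c < cost[i]:
            food[i], cost[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                           # employed bees
            try_neighbor(i)
        fit = 1.0 / (1.0 + cost - cost.min())             # onlooker weights
        for i in rng.choice(n_food, n_food, p=fit / fit.sum()):
            try_neighbor(i)                               # onlooker bees
        worn = int(np.argmax(trials))                     # scout bee: restart
        if trials[worn] > limit:                          # exhausted source
            food[worn] = rng.uniform(-1.5, 1.5, (n_atoms, 3))
            cost[worn] = lj_energy(food[worn])
            trials[worn] = 0

    best = int(np.argmin(cost))
    return food[best], cost[best]

coords, energy = abc_minimize()
print(f"best LJ-5 energy found: {energy:.4f}")  # known global minimum: -9.1039
```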

  14. Implementation of Future Climate Satellite Cloud Algorithms: Case of the GCOM-C/SGLI

    NASA Astrophysics Data System (ADS)

    Dim, J. R.; Murakami, H.; Nakajima, T. Y.; Takamura, T.

    2012-12-01

    The Global Change Observation Mission-Climate/Second Generation GLobal Imager (GCOM-C/SGLI) is a future Earth observation satellite to be launched in 2015. Its major objective is the monitoring of long-term climate changes, a major factor of which is the impact of clouds. A new cloud algorithm adapted to the spectral characteristics of the GCOM-C/SGLI and the products derived from it are currently being tested. The tests consist of evaluating the performance of the cloud optical thickness (COT) and the cloud particle effective radius (CLER) retrievals against simulation data and against equivalent products derived from a compatible satellite instrument, the Terra/MODerate resolution Imaging Spectroradiometer (Terra/MODIS). In addition to these tests, the sensitivity of the products derived from this algorithm to external and internal cloud-related parameters is analyzed. The base map of the initial data input for this algorithm is made of geometrically corrected radiances of the Advanced Earth Observation Satellite II/GLobal Imager (ADEOS-II/GLI) and the GCOM-C/SGLI simulated radiances. The results of these performance tests, based on temporally matched products, show that the GCOM-C/SGLI algorithm performs relatively well for averagely overcast scenes, with an agreement rate of ±20% with the satellite simulation products and the Terra/MODIS COT and CLER. A negative bias is, however, frequently observed, with the GCOM-C/SGLI retrieved parameters showing higher values at high COT levels. The algorithm also seems less responsive to thin clouds and clouds with small particles, mainly over land, compared with Terra/MODIS data and the satellite simulation products. Sensitivities to varying ground albedo, cloud phase, cloud structure, and cloud location are analyzed to understand the influence of these parameters on the results obtained. Possible consequences of these influences on long-term climate records and the bases for improving the present algorithm under various cloud-type conditions are discussed.

  15. Transport implementation of the Bernstein-Vazirani algorithm with ion qubits

    NASA Astrophysics Data System (ADS)

    Fallek, S. D.; Herold, C. D.; McMahon, B. J.; Maller, K. M.; Brown, K. R.; Amini, J. M.

    2016-08-01

    Using trapped ion quantum bits in a scalable microfabricated surface trap, we perform the Bernstein-Vazirani algorithm. Our architecture takes advantage of the ion transport capabilities of such a trap. The algorithm is demonstrated using two- and three-ion chains. For three ions, an improvement is achieved compared to a classical system using the same number of oracle queries. For two ions and one query, we correctly determine an unknown bit string with probability 97.6(8)%. For three ions, we succeed with probability 80.9(3)%.
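
    The algorithm's logic is easy to verify numerically. Below is a minimal state-vector simulation in Python of the textbook Bernstein-Vazirani procedure (a classical simulation of the circuit, not the trapped-ion transport implementation): one phase-oracle query between two Hadamard layers reveals the hidden bit string deterministically.

```python
import numpy as np

# Minimal state-vector sketch of Bernstein-Vazirani for an n-bit hidden
# string s: H^n, phase oracle x -> (-1)^(s.x), H^n, measure. One oracle
# query reveals s exactly; classically n queries are needed.
def hadamard_all(state, n):
    """Apply a Hadamard to every qubit of an n-qubit state vector."""
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    state = state.reshape([2] * n)
    for q in range(n):
        state = np.tensordot(h, state, axes=([1], [q]))
        state = np.moveaxis(state, 0, q)
    return state.reshape(-1)

def bernstein_vazirani(s_bits):
    n = len(s_bits)
    state = np.zeros(2**n)
    state[0] = 1.0                                # |0...0>
    state = hadamard_all(state, n)                # uniform superposition
    s_int = int("".join(map(str, s_bits)), 2)
    parity = np.array([bin(x & s_int).count("1") & 1 for x in range(2**n)])
    state *= (-1.0) ** parity                     # phase oracle (one query)
    state = hadamard_all(state, n)                # interference
    outcome = int(np.argmax(np.abs(state) ** 2))  # deterministic: |s>
    return [int(b) for b in format(outcome, f"0{n}b")]

print(bernstein_vazirani([1, 0, 1]))  # -> [1, 0, 1]
```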

  16. F100 Multivariable Control Synthesis Program. Computer Implementation of the F100 Multivariable Control Algorithm

    NASA Technical Reports Server (NTRS)

    Soeder, J. F.

    1983-01-01

    As turbofan engines become more complex, the development of controls necessitates the use of multivariable control techniques. A control developed for the F100-PW-100(3) turbofan engine by using linear quadratic regulator theory and other modern multivariable control synthesis techniques is described. The assembly language implementation of this control on an SEL 810B minicomputer is described, and this implementation was then evaluated by using a real-time hybrid simulation of the engine. The control software was modified to run with a real engine. These modifications, in the form of sensor and actuator failure checks and control executive sequencing, are discussed. Finally, recommendations for control software implementations are presented.
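
    As a sketch of the LQR synthesis step underlying such a design, the snippet below computes an optimal state-feedback gain for a small illustrative linear model; the A, B, Q, and R matrices are placeholders, not the F100 engine model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Minimal LQR sketch on an illustrative 2-state linear plant (placeholder
# matrices, not the F100 model). u = -K x minimizes integral of x'Qx + u'Ru.
A = np.array([[-1.0, 0.5],
              [ 0.0, -2.0]])     # hypothetical plant dynamics
B = np.array([[0.0],
              [1.0]])            # hypothetical actuator input
Q = np.diag([10.0, 1.0])         # state weighting
R = np.array([[0.1]])            # control-effort weighting

P = solve_continuous_are(A, B, Q, R)   # solves A'P + PA - PBR^-1B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)        # optimal gain K = R^-1 B' P

print("LQR gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```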

  17. Speckle reduction in medical ultrasound: a novel scatterer density weighted nonlinear diffusion algorithm implemented as a neural-network filter.

    PubMed

    Badawi, Ahmed M; Rushdi, Muhammad A

    2006-01-01

    This paper proposes a novel algorithm for speckle reduction in medical ultrasound imaging that preserves edges, with the added advantages of adaptive noise filtering and speed. We propose a nonlinear image diffusion algorithm that incorporates two local parameters of image quality, namely scatterer density and texture-based contrast, in addition to gradient, to weight the nonlinear diffusion process. The scatterer density is proposed to replace traditional measures of the quality of the ultrasound diffusion process such as MSE, RMSE, SNR, and PSNR. The diffusion filter was then implemented using a backpropagation neural network for fast parallel processing of volumetric images. The experimental results show that weighting the image diffusion with these parameters produces better noise reduction and better edge-detection quality at reasonable computational cost. The proposed filter can be used as a preprocessing step before applying ultrasound segmentation or active-contour-model processes.
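
    The gradient-weighted backbone of such a diffusion filter can be sketched compactly. Below is a minimal Perona-Malik-style nonlinear diffusion in Python; the paper's additional scatterer-density and texture-contrast weights are omitted, and the conduction constant and iteration count are assumptions.

```python
import numpy as np

# Minimal sketch of gradient-weighted nonlinear (Perona-Malik-style)
# diffusion: smooth flat regions while edges, where gradients are large,
# diffuse little. Kappa and the iteration count are illustrative.
def nonlinear_diffusion(img, n_iter=20, kappa=30.0, dt=0.2):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # one-sided differences to the four neighbors
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        # edge-stopping conduction: small where gradients are large
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

noisy = np.random.rayleigh(1.0, (64, 64))  # speckle-like toy input
smoothed = nonlinear_diffusion(noisy)
```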

  18. Signal Processing Algorithms Implementing the “Smart Sensor” Concept to Improve Continuous Glucose Monitoring in Diabetes

    PubMed Central

    Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2013-01-01

    Glucose readings provided by current continuous glucose monitoring (CGM) devices still suffer from accuracy and precision issues. In April 2013, we proposed a new conceptual architecture to deal with these problems and render CGM sensors algorithmically smarter; it consists of three modules for denoising, enhancement, and prediction placed in cascade after a commercial CGM sensor. The architecture was assessed on a data set collected from 24 type 1 diabetes patients in four clinical centers of the AP@home Consortium (a European project of the 7th Framework Programme funded by the European Commission). This article, as a companion to our prior publication, illustrates the technical details of the algorithms and of the implementation issues. PMID:24124959

  19. Parallel implementation and evaluation of motion estimation system algorithms on a distributed memory multiprocessor using knowledge based mappings

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Several techniques for static and dynamic load balancing in vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when they are produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same or have similar computational characteristics. The techniques are evaluated by applying them to a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo matching of images at one time instant; (3) time matching of images from different time instants; (4) stereo matching to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant, and that the overhead of using them is minimal.

  20. How to implement a quantum algorithm on a large number of qubits by controlling one central qubit

    NASA Astrophysics Data System (ADS)

    Zagoskin, Alexander; Ashhab, Sahel; Johansson, J. R.; Nori, Franco

    2010-03-01

    It is desirable to minimize the number of control parameters needed to perform a quantum algorithm. We show that, under certain conditions, an entire quantum algorithm can be efficiently implemented by controlling a single central qubit in a quantum computer. We also show that the different system parameters do not need to be designed accurately during fabrication. They can be determined through the response of the central qubit to external driving. Our proposal is well suited for hybrid architectures that combine microscopic and macroscopic qubits. More details can be found in: A.M. Zagoskin, S. Ashhab, J.R. Johansson, F. Nori, Quantum two-level systems in Josephson junctions as naturally formed qubits, Phys. Rev. Lett. 97, 077001 (2006); and S. Ashhab, J.R. Johansson, F. Nori, Rabi oscillations in a qubit coupled to a quantum two-level system, New J. Phys. 8, 103 (2006).

  1. Implementation of fractional-order electromagnetic potential through a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Jesus, Isabel S.; Machado, J. A. Tenreiro

    2009-05-01

    Several phenomena present in electrical systems have motivated the development of comprehensive models based on the theory of fractional calculus (FC). Bearing these ideas in mind, in this work the FC concepts are applied to define and evaluate an electrical potential of fractional order, based on a genetic algorithm optimization scheme. The feasibility and the convergence of the proposed method are evaluated.

  2. A Concurrent Implementation of the Cascade-Correlation Algorithm, Using the Time Warp Operating System

    NASA Technical Reports Server (NTRS)

    Springer, P.

    1993-01-01

    This paper discusses the way in which the Cascade-Correlation algorithm was parallelized so that it could be run using the Time Warp Operating System (TWOS). TWOS is a special-purpose operating system designed to run parallel discrete-event simulations with maximum efficiency on parallel or distributed computers.

  3. Path Tracking for Unmanned Ground Vehicle Navigation: Implementation and Adaptation of the Pure Pursuit Algorithm

    DTIC Science & Technology

    2005-12-01

    This report describes the implementation and adaptation of the pure pursuit algorithm for path tracking in unmanned ground vehicle navigation, where paths are defined by a user for a patrol mission. To increase the vehicle's abilities, other behaviours such as obstacle avoidance, path planning, waypoint following, and leader/follower (allowing a follower vehicle to track a leader) augment the basic navigation behaviour, which provides goal directedness in concert with an obstacle avoidance algorithm.
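
    The pure pursuit steering law itself is short enough to sketch. In the vehicle frame, the commanded curvature toward a look-ahead point at distance L_d with lateral offset y_L is kappa = 2*y_L/L_d^2; the Python below is a minimal illustration with assumed parameter values, not the report's implementation.

```python
import math

# Minimal sketch of the classic pure pursuit steering law: chase a
# look-ahead point on the path with curvature kappa = 2*y_L / L_d^2.
def pure_pursuit_curvature(pose, path, lookahead=2.0):
    """pose = (x, y, heading); path = list of (x, y) waypoints."""
    x, y, th = pose
    # first path point at least `lookahead` away becomes the goal
    goal = next(((px, py) for px, py in path
                 if math.hypot(px - x, py - y) >= lookahead), path[-1])
    # transform the goal into the vehicle frame
    dx, dy = goal[0] - x, goal[1] - y
    y_l = -math.sin(th) * dx + math.cos(th) * dy   # lateral offset
    return 2.0 * y_l / (lookahead ** 2)            # curvature (1/m)

path = [(i * 0.5, 1.0) for i in range(40)]         # straight line at y = 1
kappa = pure_pursuit_curvature((0.0, 0.0, 0.0), path)
print(f"commanded curvature: {kappa:.3f} 1/m (steer left toward the path)")
```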

  4. Passive microwave remote sensing of rainfall with SSM/I: Algorithm development and implementation

    NASA Technical Reports Server (NTRS)

    Ferriday, James G.; Avery, Susan K.

    1994-01-01

    A physically based algorithm sensitive to emission and scattering is used to estimate rainfall using the Special Sensor Microwave/Imager (SSM/I). The algorithm is derived from radiative transfer calculations through an atmospheric cloud model specifying vertical distributions of ice and liquid hydrometeors as a function of rain rate. The algorithm is structured in two parts: SSM/I brightness temperatures are screened to detect rainfall and are then used in rain-rate calculation. The screening process distinguishes between nonraining background conditions and emission and scattering associated with hydrometeors. Thermometric temperature and polarization thresholds determined from the radiative transfer calculations are used to detect rain, whereas the rain-rate calculation is based on a linear function fit to a linear combination of channels. Separate calculations for ocean and land account for different background conditions. The rain-rate calculation is constructed to respond to both emission and scattering, to reduce extraneous atmospheric and surface effects, and to correct for beam filling. The resulting SSM/I rain-rate estimates are compared to three precipitation radars as well as to a dynamically simulated rainfall event. Global estimates from the SSM/I algorithm are also compared to continental and shipboard measurements over a 4-month period. The algorithm is found to accurately describe both localized instantaneous rainfall events and global monthly patterns over both land and ocean. Over land the 4-month mean difference between SSM/I and the Global Precipitation Climatology Center continental rain gauge database is less than 10%. Over the ocean, the mean difference between SSM/I and the Legates and Willmott global shipboard rain gauge climatology is less than 20%.

  5. Non-quantum implementation of quantum computation algorithm using a spatial coding technique

    NASA Astrophysics Data System (ADS)

    Tate, N.; Ogura, Y.; Tanida, J.

    2005-07-01

    Non-quantum implementation of quantum information processing is studied. A spatial coding technique, an effective digital optical computing technique, is utilized to implement quantum teleportation efficiently. In the coding, quantum information is represented by the intensity and the phase of elemental cells. Correct operation is confirmed within the proposed scheme, which indicates the effectiveness of the proposed approach and motivates further investigation.

  6. A real time, FEM based optimal control algorithm and its implementation using parallel processing hardware (transputers) in a microprocessor environment

    NASA Technical Reports Server (NTRS)

    Patten, William Neff

    1989-01-01

    There is an evident need for a means of establishing reliable, implementable controls for systems that are plagued by nonlinear and/or uncertain model dynamics. The development of a generic controller design tool for tough-to-control systems is reported. The method utilizes a moving-grid, time infinite element based solution of the necessary conditions that describe an optimal controller for a system. The technique produces a discrete feedback controller. Real-time laboratory experiments are now being conducted to demonstrate the viability of the method. The resulting algorithm is being implemented in a microprocessor environment, where critical computational tasks are accomplished using low-cost, on-board multiprocessors (INMOS T800 Transputers) and parallel processing. Progress to date validates the methodology presented. Applications of the technique to the control of highly flexible robotic appendages are suggested.

  7. Algorithm and implementation of muon trigger and data transmission system for barrel-endcap overlap region of the CMS detector

    NASA Astrophysics Data System (ADS)

    Zabolotny, W. M.; Byszuk, A.

    2016-03-01

    The CMS experiment Level-1 trigger system is undergoing an upgrade. In the barrel-endcap transition region, it is necessary to merge data from three types of muon detectors: RPC, DT, and CSC. The Overlap Muon Track Finder (OMTF) uses a novel approach to concentrate and process those data in a uniform manner to identify muons and their transverse momentum. The paper presents the algorithm and the FPGA firmware implementation of the OMTF and its data transmission system in CMS. It is foreseen that the OMTF will be subject to significant changes resulting from optimization studies carried out with the aid of physics simulations; therefore, a special, high-level, parameterized HDL implementation is necessary.

  8. Implementation of Finite Volume based Navier Stokes Algorithm Within General Purpose Flow Network Code

    NASA Technical Reports Server (NTRS)

    Schallhorn, Paul; Majumdar, Alok

    2012-01-01

    This paper describes a finite-volume-based numerical algorithm that allows multi-dimensional computation of fluid flow within a system-level network flow analysis. There are several thermo-fluid engineering problems where higher-fidelity solutions are needed that are beyond the capacity of system-level codes. The proposed algorithm allows NASA's Generalized Fluid System Simulation Program (GFSSP) to perform multi-dimensional flow calculation within the framework of GFSSP's typical system-level flow network consisting of fluid nodes and branches. The paper presents several classical two-dimensional fluid dynamics problems that have been solved by GFSSP's multi-dimensional flow solver. The numerical solutions are compared with the analytical and benchmark solutions of Poiseuille flow, Couette flow, and flow in a driven cavity.

  9. Implemented Wavelet Packet Tree based Denoising Algorithm in Bus Signals of a Wearable Sensor Array

    NASA Astrophysics Data System (ADS)

    Schimmack, M.; Nguyen, S.; Mercorelli, P.

    2015-11-01

    This paper introduces a thermosensing embedded system with a sensor bus that uses wavelets for noise location and denoising. Following the filter-bank principle, the measured signal is separated into two bands, low and high frequency, and the proposed algorithm identifies the defined noise in these two bands. With the Wavelet Packet Transform, a form of the Discrete Wavelet Transform, it is able to decompose and reconstruct bus input signals of a sensor network. Using a seminorm, the noise of a sequence can be detected and located, so that the wavelet basis can be rearranged. This in particular allows for the elimination of the incoherent parts that make up the unavoidable measurement noise of bus signals. The proposed method was built on wavelet algorithms from the WaveLab 850 library of Stanford University (USA). This work gives an insight into the workings of wavelet transformation.
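
    A minimal sketch of wavelet-packet denoising in this spirit, using PyWavelets in place of WaveLab, decomposes the signal into a packet tree, soft-thresholds the leaf coefficients, and reconstructs; the wavelet, tree depth, and threshold rule below are illustrative assumptions.

```python
import numpy as np
import pywt

# Minimal sketch of wavelet-packet denoising: decompose a noisy bus signal
# into a packet tree, soft-threshold the leaves, reconstruct.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)
signal = clean + 0.3 * rng.standard_normal(t.size)   # noisy bus signal

wp = pywt.WaveletPacket(data=signal, wavelet="db4",
                        mode="symmetric", maxlevel=4)
for node in wp.get_level(4, order="natural"):
    sigma = np.median(np.abs(node.data)) / 0.6745    # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(node.data.size))
    node.data = pywt.threshold(node.data, thr, mode="soft")

denoised = wp.reconstruct(update=True)[: signal.size]
print("residual RMS:", np.sqrt(np.mean((denoised - clean) ** 2)))
```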

  10. Analogue algorithm for parallel factorization of an exponential number of large integers: II—optical implementation

    NASA Astrophysics Data System (ADS)

    Tamma, Vincenzo

    2016-12-01

    We report a detailed analysis of the optical realization of the analogue algorithm described in the first paper of this series (Tamma in Quantum Inf Process 11128:1190, 2015) for the simultaneous factorization of an exponential number of integers. Such an analogue procedure, which scales exponentially in the context of first-order interference, opens up the horizon to polynomial scaling by exploiting multi-particle quantum interference.

  11. Implementation of a Landing Footprint Algorithm for the HTV-2 and Trajectory Simulations

    NASA Technical Reports Server (NTRS)

    Clark, Casie M.

    2012-01-01

    This presentation details work performed during the Fall 2011 term in the Research Controls and Dynamics Branch at NASA Dryden Flight Research Center. Included is a study on a possible landing footprint algorithm, with direct application to the HTV-2. Also discussed is work in support of the MIPCC effort, which includes optimal trajectory solutions for the F-15A Streak Eagle aircraft and theoretical performance of an F-15A with a MIPCC propulsion system.

  12. Image processing algorithm design and implementation for real-time autonomous inspection of mixed waste

    SciTech Connect

    Schalkoff, R.J.; Shaaban, K.M.; Carver, A.E.

    1996-12-31

    The ARIES #1 (Autonomous Robotic Inspection Experimental System) vision system is used to acquire drum surface images under controlled conditions and subsequently perform autonomous visual inspection leading to a classification as "acceptable" or "suspect". Specific topics described include the vision system design methodology, algorithmic structure, hardware processing structure, and image acquisition hardware. Most of these capabilities were demonstrated at the ARIES Phase II demonstration held on Nov. 30, 1995. Finally, Phase III efforts are briefly addressed.

  13. Cantor network, control algorithm, two-dimensional compact structure and its optical implementation

    NASA Astrophysics Data System (ADS)

    Wang, Ning; Liu, Liren; Yin, Yaozu

    1995-12-01

    A compact integrating-module technique for packaging an optical multistage Cantor network with a polarization multiplexing technique is suggested. The modules have a unique configuration: a solid-state combination of a polarization rotator, double birefringent slabs, and a 2 × 2 switch array. The design and fabrication of an eight-channel optical nonblocking Cantor network are demonstrated, and a fast-setup control algorithm is developed. The network systems are easy to assemble and insensitive to environmental disturbance.

  14. Implementation of Interaction Algorithm to Non-Matching Discrete Interfaces Between Structure and Fluid Mesh

    NASA Technical Reports Server (NTRS)

    Chen, Shu-Po

    1999-01-01

    This paper presents software for handling non-conforming fluid-structure interfaces in aeroelastic simulation. It reviews the interpolation and integration algorithm and highlights the flexibility and the user-friendly features that allow the user to select existing structure and fluid packages, such as NASTRAN and CFL3D, to perform the simulation. The presented software is validated by computing the High Speed Civil Transport model.

  15. Design and implementation of an algorithm for creating templates for the purpose of iris biometric authentication through the analysis of textures implemented on a FPGA

    NASA Astrophysics Data System (ADS)

    Giacometto, F. J.; Vilardy, J. M.; Torres, C. O.; Mattos, L.

    2011-01-01

    Problems related to security in access control are currently being addressed with applications that rely on characteristics unique to each individual, namely biometric features, and working with biometric images such as those of the iris has become important worldwide. This paper presents an FPGA implementation of an algorithm for creating templates for biometric authentication from ocular features, the object of study being the texture pattern of the iris, which is unique to each individual. The authentication is based on processes such as edge-extraction methods, segmentation following the principles of John Daugman and Libor Masek, and normalization to obtain the templates necessary for searching for matches in a database, from which the expected authentication results are obtained.

  16. Real-time implementation of a traction control algorithm on a scaled roller rig

    NASA Astrophysics Data System (ADS)

    Bosso, N.; Zampieri, N.

    2013-04-01

    Traction control is a very important aspect of railway vehicle dynamics. Its optimisation improves the performance of a locomotive by working close to the limit of adhesion. On the other hand, if the adhesion limit is exceeded, the wheels are subject to heavy wear and there is a significant risk of vibrations arising in the traction system. Similar considerations apply to braking. The development and optimisation of a traction/braking control algorithm is a complex activity, because it is usually performed on a real vehicle on the track, where many uncertainties are present due to environmental conditions and vehicle characteristics. This work shows the use of a scaled roller rig to develop and optimise a traction control algorithm on a single wheelset. Measurements performed on the wheelset are used to estimate the optimal adhesion forces by means of a wheel/rail contact algorithm executed in real time, which allows the optimal adhesion force to be applied.
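
    A single control cycle of a slip-based traction controller can be sketched as follows: estimate the longitudinal creep, compare it with the creep value at which adhesion peaks, and correct the torque command accordingly. All numbers below (optimal creep, gain, torque limit) are illustrative assumptions, not values from the roller-rig study.

```python
# Minimal sketch of one slip-based traction control step: drive the wheelset
# toward the creep value where adhesion peaks, cutting torque when slip
# runs away. All constants are illustrative.
def traction_control_step(v_vehicle, v_wheel, torque_cmd,
                          creep_opt=0.08, k_p=4000.0, t_max=2500.0):
    """Return an adjusted traction torque (N*m) for one control cycle."""
    v_ref = max(v_vehicle, 0.1)                # avoid division by zero
    creep = (v_wheel - v_vehicle) / v_ref      # longitudinal creep (slip)
    error = creep_opt - creep                  # positive: can push harder
    torque = torque_cmd + k_p * error          # proportional correction
    return min(max(torque, 0.0), t_max)        # saturate to actuator limits

# wheel spinning up (creep 0.15 > optimum 0.08): torque is reduced
print(traction_control_step(v_vehicle=10.0, v_wheel=11.5, torque_cmd=2000.0))
```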

  17. Combining algorithms to predict bacterial protein sub-cellular location: Parallel versus concurrent implementations.

    PubMed

    Taylor, Paul D; Attwood, Teresa K; Flower, Darren R

    2006-12-06

    We describe a novel and potentially important tool for candidate subunit vaccine selection through in silico reverse-vaccinology. A set of Bayesian networks able to make individual predictions for specific subcellular locations is implemented in three pipelines with different architectures: a parallel implementation with a confidence level-based decision engine and two serial implementations with a hierarchical decision structure, one initially rooted by prediction between membrane types and another rooted by soluble versus membrane prediction. The parallel pipeline outperformed the serial pipeline, but took twice as long to execute. The soluble-rooted serial pipeline outperformed the membrane-rooted predictor. Assessment using genomic test sets was more equivocal, as many more predictions are made by the parallel pipeline, yet the serial pipeline identifies 22 more of the 74 proteins of known location.

  18. An implementable digital adaptive flight controller designed using stabilized single stage algorithms

    NASA Technical Reports Server (NTRS)

    Kaufman, H.; Alag, G.

    1975-01-01

    Simple mechanical linkages have not solved the many control problems associated with high-performance aircraft maneuvering throughout a wide flight envelope. One procedure for retaining uniform handling qualities over such an envelope is to implement a digital adaptive controller. Toward such an implementation, an explicit adaptive controller that makes direct use of on-line parameter identification has been developed and applied to both linearized and nonlinear equations of motion for a typical fighter aircraft. This controller is composed of an on-line weighted-least-squares parameter identifier, a Kalman state filter, and a model-following control law designed using single-stage performance indices. Simulation experiments with realistic measurement noise indicate that the proposed adaptive system has the potential for on-board implementation.

  19. Implementation

    EPA Pesticide Factsheets

    Describes elements of the set of activities that ensure control strategies are put into effect and that air quality goals and standards are fulfilled, along with permitting programs and additional resources related to implementation under the Clean Air Act.

  20. Photonic implementation of a neuronal algorithm applicable towards angle of arrival detection and localization.

    PubMed

    Toole, Ryan; Fok, Mable P

    2015-06-15

    A photonic system exemplifying the neurobiological learning algorithm, spike timing dependent plasticity (STDP), is experimentally demonstrated using the cooperative effects of cross gain modulation and nonlinear polarization rotation within an SOA. Furthermore, an STDP-based photonic approach towards the measurement of the angle of arrival (AOA) of a microwave signal is developed, and a three-dimensional AOA localization scheme is explored. Measurement accuracies on the order of tens of centimeters, rivaling that of complex positioning systems that utilize a large distribution of measuring units, are achieved for larger distances and with a simpler setup using just three STDP-based AOA units.

  1. Simulation of Propellant Loading System: Senior Design Implemented in a Computer Algorithm

    NASA Technical Reports Server (NTRS)

    Bandyopadhyay, Alak

    2010-01-01

    Propellant loading from the storage tank to the external tank is one of the very important and time-consuming pre-launch ground operations for the launch vehicle. The propellant loading system is a complex integrated system involving many physical components, such as the storage tank filled with cryogenic fluid at very low temperature, the long pipeline connecting the storage tank with the external tank, the external tank along with the flare stack, and vent systems for releasing the excess fuel. Some of the parameters important for design purposes are the predicted pre-chill time, loading time, amount of fuel lost, and maximum pressure rise. The physics involved in the mathematical modeling is quite complex: the process is unsteady, there is phase change as some of the fuel passes from the liquid to the gas state, and there is conjugate heat transfer in the pipe walls as well as between the solid and fluid regions. The simulation is likewise tedious and time consuming. Overall, this is a complex system, and the objective of the work is students' involvement in the parametric study and optimization of the numerical modeling toward the design of such a system. The students first have to become familiar with the physical process, the related mathematics, and the numerical algorithm. The work involves exploring (i) improved algorithms to make the transient simulation computationally efficient (reduced CPU time) and (ii) parametric studies to evaluate design parameters by changing the operational conditions.

  2. Analysis and Implementation of Graph Clustering for Digital News Using Star Clustering Algorithm

    NASA Astrophysics Data System (ADS)

    Ahdi, A. B.; SW, K. R.; Herdiani, A.

    2017-01-01

    Since the Web 2.0 notion emerged and came to be used extensively by many services on the Internet, we have seen an unprecedented proliferation of digital news. This news is very rich in content and in links to other news and sources, but it lacks category information, so users cannot easily group the news items they read. Digital news items are naturally linked data, because every item has relations to other items and resources, and the most appropriate model for linked data is the graph model, thanks to its flexibility in describing relations and its easy-to-understand visualization. To handle the grouping issue, we use a graph clustering approach. Among the many graph clustering algorithms available, such as MST clustering, Chameleon, Markov clustering, and star clustering, we choose star clustering because it is easier to understand, more accurate, efficient, and guarantees the quality of the resulting clusters. In this research, we investigate the accuracy of the clustering results by comparing them with expert judgement. We obtained a fairly high accuracy of 80.98%, and for cluster quality a promising result of 62.87%.
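
    The star clustering step itself is straightforward to sketch: threshold pairwise document similarities into a graph, then repeatedly take the highest-degree uncovered vertex as a star center and claim its uncovered neighbors as satellites. The vectors and similarity threshold below are toy assumptions.

```python
import numpy as np

# Minimal sketch of star clustering on a document similarity graph:
# threshold cosine similarities, then repeatedly pick the densest uncovered
# vertex as a star center and claim its uncovered neighbors.
def star_clustering(vectors, sigma=0.5):
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sim = v @ v.T                                  # cosine similarities
    n = len(v)
    adj = (sim >= sigma) & ~np.eye(n, dtype=bool)  # thresholded graph
    covered = np.zeros(n, dtype=bool)
    clusters = []
    while not covered.all():
        degree = np.where(covered, -1, adj.sum(axis=1))
        center = int(np.argmax(degree))            # densest uncovered vertex
        members = [center] + [int(j) for j in np.flatnonzero(adj[center])
                              if not covered[j]]
        covered[members] = True
        clusters.append(members)
    return clusters

docs = np.array([[1, 0, 0], [0.9, 0.1, 0], [0, 1, 0.2], [0, 0.9, 0.1]])
print(star_clustering(docs))   # -> [[0, 1], [2, 3]]
```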

  3. Implementation of a procalcitonin-guided algorithm for antibiotic therapy in the burn intensive care unit

    PubMed Central

    Lavrentieva, A.; Kontou, P.; Soulountsi, V.; Kioumis, J.; Chrysou, O.; Bitzani, M.

    2015-01-01

    Summary The purpose of this study was to examine the hypothesis that an algorithm based on serial measurements of procalcitonin (PCT) allows reduction in the duration of antibiotic therapy compared with empirical rules, and does not result in more adverse outcomes in burn patients with infectious complications. All burn patients requiring antibiotic therapy based on confirmed or highly suspected bacterial infections were eligible. Patients were assigned to either a procalcitonin-guided (study group) or a standard (control group) antibiotic regimen. The following variables were analyzed and compared in both groups: duration of antibiotic treatment, mortality rate, percentage of patients with relapse or superinfection, maximum SOFA score (days 1-28), length of ICU and hospital stay. A total of 46 Burn ICU patients receiving antibiotic therapy were enrolled in this study. In 24 patients antibiotic therapy was guided by daily procalcitonin and clinical assessment. PCT guidance resulted in a smaller antibiotic exposure (10.1±4 vs. 15.3±8 days, p=0.034) without negative effects on clinical outcome characteristics such as mortality rate, percentage of patients with relapse or superinfection, maximum SOFA score, length of ICU and hospital stay. The findings thus show that use of a procalcitonin-guided algorithm for antibiotic therapy in the burn intensive care unit may contribute to the reduction of antibiotic exposure without compromising clinical outcome parameters. PMID:27279801

  4. Implementation of Monte Carlo Tree Search (MCTS) Algorithm in COMBATXXI using JDAFS

    DTIC Science & Technology

    2014-07-31

    Approved for public release; distribution is unlimited. This work implements the Monte Carlo Tree Search (MCTS) algorithm in COMBATXXI using JDAFS, building on work completed in FY13. The TRADOC Analysis Center - Methods and Research Office (TRAC-MRO) sponsored this iteration in an attempt to test the feasibility of implementing MCTS in COMBATXXI.
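
    MCTS follows a standard four-phase loop (selection, expansion, rollout, backpropagation). The sketch below is a minimal UCT-style version in Python against an abstract game interface; the toy CountdownGame stands in for the combat simulation, since the COMBATXXI/JDAFS integration details are not reproduced here.

```python
import math, random

# Minimal UCT-flavored sketch of Monte Carlo Tree Search against an
# abstract game interface; CountdownGame is a hypothetical stand-in.
class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def uct_select(node, c=1.4):
    return max(node.children, key=lambda ch: ch.value / (ch.visits + 1e-9)
               + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)))

def mcts(root_state, game, n_iter=1000):
    root = Node(root_state)
    for _ in range(n_iter):
        node = root
        while node.children:                       # 1. selection
            node = uct_select(node)
        if not game.is_terminal(node.state):       # 2. expansion
            node.children = [Node(game.apply(node.state, a), node)
                             for a in game.actions(node.state)]
            node = random.choice(node.children)
        state = node.state                         # 3. rollout
        while not game.is_terminal(state):
            state = game.apply(state, random.choice(game.actions(state)))
        reward = game.reward(state)
        while node:                                # 4. backpropagation
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits)  # most-visited child

class CountdownGame:
    """Toy stand-in: start at N, subtract 1 or 2; reaching exactly 0 scores 1."""
    def actions(self, s): return [1, 2]
    def apply(self, s, a): return s - a
    def is_terminal(self, s): return s <= 0
    def reward(self, s): return 1.0 if s == 0 else 0.0

best = mcts(5, CountdownGame(), n_iter=500)
print("most promising first move leaves state:", best.state)
```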

  5. Detection of convective initiation using Meteosat SEVIRI: implementation in and verification with the tracking and nowcasting algorithm Cb-TRAM

    NASA Astrophysics Data System (ADS)

    Merk, D.; Zinner, T.

    2013-08-01

    In this paper a new detection scheme for convective initiation (CI) under day and night conditions is presented. The new algorithm combines the strengths of two existing methods for detecting CI with geostationary satellite data. It uses the channels of the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG). For the new algorithm, five infrared (IR) criteria from the Satellite Convection Analysis and Tracking algorithm (SATCAST) and one high-resolution visible channel (HRV) criterion from Cb-TRAM were adapted. This set of criteria aims to identify the typical development of quickly developing convective cells in an early stage. The different criteria include time trends of the 10.8 μm IR channel and IR channel differences, as well as their time trends. To provide the trend fields, an optical-flow-based method is used: the pyramidal matching algorithm, which is part of Cb-TRAM. The new detection scheme is implemented in Cb-TRAM and is verified for seven days comprising different weather situations in central Europe. Skill scores are provided in contrast with the original early-stage detection scheme of Cb-TRAM. From the comparison against detections of later thunderstorm stages, which are also provided by Cb-TRAM, we find a decrease in false prior warnings (false alarm ratio) from 91 to 81%, an increase of the critical success index from 7.4 to 12.7%, and a decrease of the bias from 320 to 146% for normal scan mode. Similar trends are found for rapid scan mode. Most obvious is the decline of false alarms found for the synoptic class of "cold air" masses.

  7. Long-term power generation expansion planning with short-term demand response: Model, algorithms, implementation, and electricity policies

    NASA Astrophysics Data System (ADS)

    Lohmann, Timo

    Electric sector models are powerful tools that guide policy makers and stakeholders. Long-term power generation expansion planning models are a prominent example and determine a capacity expansion for an existing power system over a long planning horizon. With the changes in the power industry away from monopolies and regulation, the focus of these models has shifted to competing electric companies maximizing their profit in a deregulated electricity market. In recent years, consumers have started to participate in demand response programs, actively influencing electricity load and price in the power system. We introduce a model that features investment and retirement decisions over a long planning horizon of more than 20 years, as well as an hourly representation of day-ahead electricity markets in which sellers of electricity face buyers. This combination makes our model both unique and challenging to solve. Decomposition algorithms, and especially Benders decomposition, can exploit the model structure. We present a novel method that can be seen as an alternative to generalized Benders decomposition and relies on dynamic linear overestimation. We prove its finite convergence and present computational results, demonstrating its superiority over traditional approaches. In certain special cases of our model, all necessary solution values in the decomposition algorithms can be directly calculated and solving mathematical programming problems becomes entirely obsolete. This leads to highly efficient algorithms that drastically outperform their programming problem-based counterparts. Furthermore, we discuss the implementation of all tailored algorithms and the challenges from a modeling software developer's standpoint, providing an insider's look into the modeling language GAMS. Finally, we apply our model to the Texas power system and design two electricity policies motivated by the U.S. Environmental Protection Agency's recently proposed CO2 emissions targets for the

  8. VLSI implementation of a new LMS-based algorithm for noise removal in ECG signal

    NASA Astrophysics Data System (ADS)

    Satheeskumaran, S.; Sabrigiriraj, M.

    2016-06-01

    Least-mean-square (LMS)-based adaptive filters are widely deployed for removing artefacts in the electrocardiogram (ECG) because of their low computational cost, but they possess a high mean square error (MSE) in noisy environments. The transform-domain variable step-size LMS algorithm reduces the MSE at the cost of computational complexity. In this paper, a variable step-size delayed-LMS adaptive filter is used to remove the artefacts from the ECG signal for improved feature extraction. Dedicated digital signal processors provide fast processing, but they are not flexible; by using field-programmable gate arrays, pipelined architectures can be employed to enhance system performance. The pipelined architecture enhances the operational efficiency of the adaptive filter and saves power. The technique provides a high signal-to-noise ratio and low MSE with reduced computational complexity; hence, it is a useful method for monitoring patients with heart-related problems.
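
    A software model of a variable step-size LMS noise canceller of this general kind fits in a few lines (no FPGA pipelining shown). The step-size adaptation rule and constants below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a variable step-size LMS noise canceller: the step size
# grows with the error energy and is capped for stability.
def vss_lms(d, x, n_taps=8, mu0=0.01, alpha=0.97, gamma=1e-3):
    """d: corrupted ECG; x: noise reference; returns the cleaned signal."""
    w = np.zeros(n_taps)
    mu = mu0
    out = np.zeros_like(d)
    for n in range(n_taps, len(d)):
        u = x[n - n_taps:n][::-1]          # reference tap vector
        e = d[n] - w @ u                   # error = ECG estimate
        mu = alpha * mu + gamma * e * e    # grow step when error is large
        mu = min(mu, 0.1)                  # keep adaptation stable
        w += 2 * mu * e * u                # LMS weight update
        out[n] = e
    return out

fs = 360
t = np.arange(5 * fs) / fs
ecg = np.sin(2 * np.pi * 1.2 * t) ** 31          # crude ECG-like peaks
noise = 0.5 * np.sin(2 * np.pi * 50 * t)         # powerline interference
cleaned = vss_lms(ecg + noise, np.sin(2 * np.pi * 50 * t + 0.3))
print("RMS error after convergence:",
      np.sqrt(np.mean((cleaned[2 * fs:] - ecg[2 * fs:]) ** 2)))
```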

  9. Alteration of Box-Jenkins methodology by implementing genetic algorithm method

    NASA Astrophysics Data System (ADS)

    Ismail, Zuhaimy; Maarof, Mohd Zulariffin Md; Fadzli, Mohammad

    2015-02-01

    A time series is a set of values sequentially observed through time. The Box-Jenkins methodology is a systematic method of identifying, fitting, checking, and using integrated autoregressive moving average (ARIMA) time series models for forecasting. The Box-Jenkins method is appropriate for medium-to-long time series (at least 50 observations). When modeling such series, the difficulty lies in choosing the correct order at the model identification stage and in finding the right parameter estimates. This paper presents the development of a genetic algorithm heuristic to solve the identification and estimation problems in Box-Jenkins modeling. Data on international tourist arrivals to Malaysia are used to illustrate the effectiveness of the proposed method. The forecasts generated by the proposed model outperformed a single traditional Box-Jenkins model.

  10. FPGA-based real-time phase measuring profilometry algorithm design and implementation

    NASA Astrophysics Data System (ADS)

    Zhan, Guomin; Tang, Hongwei; Zhong, Kai; Li, Zhongwei; Shi, Yusheng

    2016-11-01

    Phase measuring profilometry (PMP) has been widely used in many fields, such as computer-aided verification (CAV) and flexible manufacturing systems (FMS). High-frame-rate (HFR) real-time vision-based feedback control will be a common demand in the near future. However, the instruction time delay in a computer caused by numerous repetitive operations greatly limits the efficiency of data processing. FPGAs have the advantages of a pipeline architecture and parallel execution, and are well suited to handling the PMP algorithm. In this paper, we design a fully pipelined hardware architecture for PMP whose functions include rectification, phase calculation, phase shifting, and stereo matching. Experiments verified the performance of this method, and the factors that may influence the computation accuracy were analyzed.
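
    The phase-calculation step is the arithmetic heart of PMP. For the common four-step phase-shifting scheme, with fringe images I_k = A + B*cos(phi + k*pi/2), the wrapped phase is phi = atan2(I4 - I2, I1 - I3); four-step shifting is an assumption here, as the paper's pipeline may use a different number of shifts. A minimal Python illustration:

```python
import numpy as np

# Minimal sketch of the phase-calculation step of PMP for four-step
# phase shifting: phi = atan2(I4 - I2, I1 - I3).
def wrapped_phase(i1, i2, i3, i4):
    """Per-pixel wrapped phase in (-pi, pi] from four fringe images."""
    return np.arctan2(i4 - i2, i1 - i3)

# toy fringe images over a synthetic phase ramp
phi_true = np.linspace(-np.pi, np.pi, 256).reshape(1, -1) * np.ones((64, 1))
shots = [100 + 50 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = wrapped_phase(*shots)
print("max phase error:", np.max(np.abs(phi - phi_true)))
```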

  11. Pre-Hardware Optimization of Spacecraft Image Processing Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Petrick, David J.; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Day, John H. (Technical Monitor)

    2002-01-01

    Spacecraft telemetry rates and telemetry product complexity have steadily increased over the last decade, presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Satellite (GOES-8) image data processing and color picture generation application. Although large supercomputer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The proposed solution is based on a personal computer (PC) platform and a synergy of optimized software algorithms and reconfigurable computing (RC) hardware technologies, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs). It has been shown that this approach can provide superior, inexpensive performance for a chosen application on the ground station or on board a spacecraft.

  12. Implementation of Satellite Formation Flight Algorithms Using SPHERES Aboard the International Space Station

    NASA Technical Reports Server (NTRS)

    Mandy, Christophe P.; Sakamoto, Hiraku; Saenz-Otero, Alvar; Miller, David W.

    2007-01-01

    MIT's Space Systems Laboratory developed the Synchronized Position Hold Engage and Reorient Experimental Satellites (SPHERES) as a risk-tolerant spaceborne facility to develop and mature control, estimation, and autonomy algorithms for distributed satellite systems, for applications such as satellite formation flight. The tests performed study interferometric-mission-type formation flight maneuvers in deep space; they consist of having the satellites trace a coordinated trajectory under tight control that would allow simulated apertures to constructively interfere observed light and measure the resulting increase in angular resolution. This paper focuses on formation initialization (establishment of a formation using limited-field-of-view relative sensors), formation coordination (synchronization of the different satellites' motion), and fuel balancing among the different satellites.

  13. FPGA-based genetic algorithm implementation for AC chopper fed induction motor

    NASA Astrophysics Data System (ADS)

    Mahendran, S.; Gnanambal, I.; Maheswari, A.

    2016-12-01

    A genetic algorithm (GA)-based harmonic elimination technique is proposed for designing an AC chopper. The GA is used to calculate optimal firing angles to eliminate lower-order harmonics in the output voltage, with the total harmonic distortion of the output voltage taken as the fitness function. Thus, the ratings of the load need not be known to calculate the switching angles with the proposed technique. For performance assessment of the GA, the Newton-Raphson (NR) method is also applied in this work. Simulation results show that the proposed technique is better in terms of lower computational complexity and quick convergence, and the simulation results were verified with a field-programmable-gate-array-controller-based prototype. The simulation study and experimental investigations show that the proposed GA method is superior to the conventional methods.

  14. Algorithmic complexity for psychology: a user-friendly implementation of the coding theorem method.

    PubMed

    Gauvrit, Nicolas; Singmann, Henrik; Soler-Toscano, Fernando; Zenil, Hector

    2016-03-01

    Kolmogorov-Chaitin complexity has long been believed to be impossible to approximate when it comes to short sequences (e.g. of length 5-50). However, with the newly developed coding theorem method the complexity of strings of length 2-11 can now be numerically estimated. We present the theoretical basis of algorithmic complexity for short strings (ACSS) and describe an R-package providing functions based on ACSS that will cover psychologists' needs and improve upon previous methods in three ways: (1) ACSS is now available not only for binary strings, but for strings based on up to 9 different symbols, (2) ACSS no longer requires time-consuming computing, and (3) a new approach based on ACSS gives access to an estimation of the complexity of strings of any length. Finally, three illustrative examples show how these tools can be applied to psychology.

  15. Error analysis and algorithm implementation for an improved optical-electric tracking device based on MEMS

    NASA Astrophysics Data System (ADS)

    Sun, Hong; Wu, Qian-zhong

    2013-09-01

    In order to improve the precision of an optical-electric tracking device, an improved MEMS-based design is proposed that addresses the tracking error and random drift of the gyroscope sensor. Following the principles of time-series analysis of random sequences, an AR model of the gyro random error is established, and the gyro output signals are repeatedly filtered with a Kalman filter. An ARM microcontroller drives the servo motor under a fuzzy PID full-closed-loop control algorithm, with lead-correction and feed-forward links added to reduce the response lag to angle inputs: feed-forward makes the output follow the input closely, while the lead-compensation link shortens the response to input signals and thereby reduces errors. A wireless video monitoring module and remote monitoring software (Visual Basic 6.0) display the servo motor state in real time: the module gathers video signals and sends them wirelessly to an upper computer, which shows the motor running state in the Visual Basic 6.0 window. The main error sources are analyzed in detail; quantitative analysis of the errors from bandwidth and from the gyro sensor makes the proportion of each error in the total error more intuitive and consequently helps decrease the error of the system. Simulation and experimental results show that the system has good tracking characteristics, which is very valuable for engineering applications.
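
    A minimal sketch of the core idea, an AR(1) model of gyro drift filtered by a scalar Kalman filter, is given below; the AR coefficient and noise variances are illustrative assumptions rather than values from the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      phi, q, r = 0.95, 0.01, 0.25     # illustrative AR(1) coefficient, noise variances

      # Simulate gyro random drift as an AR(1) process, observed with noise
      n = 500
      drift = np.zeros(n)
      for k in range(1, n):
          drift[k] = phi * drift[k - 1] + rng.normal(0, np.sqrt(q))
      z = drift + rng.normal(0, np.sqrt(r), n)   # noisy gyro output

      # Scalar Kalman filter built on the AR(1) state model
      x, P = 0.0, 1.0
      est = np.empty(n)
      for k in range(n):
          x, P = phi * x, phi * P * phi + q       # predict
          K = P / (P + r)                         # Kalman gain
          x, P = x + K * (z[k] - x), (1 - K) * P  # update with the measurement
          est[k] = x

      print("raw RMS error:", np.sqrt(np.mean((z - drift) ** 2)),
            "filtered:", np.sqrt(np.mean((est - drift) ** 2)))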

  16. A new algorithm for determining 3D biplane imaging geometry: theory and implementation

    NASA Astrophysics Data System (ADS)

    Singh, Vikas; Xu, Jinhui; Hoffmann, Kenneth R.; Xu, Guang; Chen, Zhenming; Gopal, Anant

    2005-04-01

    Biplane imaging is a primary method for visual and quantitative assessment of the vasculature. A key problem called the Imaging Geometry Determination problem (IGD for short) in this method is to determine the rotation-matrix R and the translation-vector t which relate the two coordinate systems. In this paper, we propose a new approach, called IG-Sieving, to calculate R and t using corresponding points in the two images. Our technique first generates an initial estimate of R and t from the gantry angles of the imaging system, and then optimizes them by solving an optimal-cell-search problem in a 6-D parametric space (three variables defining R plus the three variables of t). To efficiently find the optimal imaging geometry (IG) in 6-D, our approach divides the high dimensional search domain into a set of lower-dimensional regions, thereby reducing the optimal-cell-search problem to a set of optimization problems in 3D sub-spaces. For each such sub-space, our approach first applies efficient computational geometry techniques to identify "possibly-feasible" IGs, and then uses a criterion we call fall-in-number to sieve out good IGs. We show that in a bounded number of optimization steps, a (possibly infinite) set of near-optimal IGs can be determined. Simulation results indicate that our method can reconstruct 3D points with average 3D center-of-mass errors of about 0.8 cm for input image-data errors as high as 0.1 cm. More importantly, our algorithm provides a novel insight into the geometric structure of the solution-space, which could be exploited to significantly improve the accuracy of other biplane algorithms.

  17. Implementation and optimization of ultrasound signal processing algorithms on mobile GPU

    NASA Astrophysics Data System (ADS)

    Kong, Woo Kyu; Lee, Wooyoul; Kim, Kyu Cheol; Yoo, Yangmo; Song, Tai-Kyong

    2014-03-01

    A general-purpose graphics processing unit (GPGPU) has been used to improve computing power in medical ultrasound imaging systems. Recently, mobile GPUs have become powerful enough to handle 3D games and videos at high frame rates on Full HD or HD displays. This paper proposes a method to implement ultrasound signal processing on a mobile GPU available in a high-end smartphone (Galaxy S4, Samsung Electronics, Seoul, Korea) with programmable shaders on the OpenGL ES 2.0 platform. To maximize the performance of the mobile GPU, the shader design was optimized and the load was shared between the vertex and fragment shaders. The beamformed data were captured from a tissue-mimicking phantom (Model 539 Multipurpose Phantom, ATS Laboratories, Inc., Bridgeport, CT, USA) using a commercial ultrasound imaging system equipped with a research package (Ultrasonix Touch, Ultrasonix, Richmond, BC, Canada). The real-time performance was evaluated by frame rates while varying the range of signal processing blocks. The implementation of ultrasound signal processing on OpenGL ES 2.0 was verified by analyzing the PSNR against a MATLAB gold standard with the same signal path, and the CNR was also analyzed to verify the method. From the evaluations, the proposed mobile GPU-based processing method showed no significant difference from the MATLAB processing (i.e., PSNR>52.51 dB), and comparable CNR results were obtained from both processing methods (i.e., 11.31). The mobile GPU implementation achieved frame rates of 57.6 Hz; the total execution time was 17.4 ms, which was faster than the acquisition time (i.e., 34.4 ms). These results indicate that the mobile GPU-based processing method can support real-time ultrasound B-mode processing on a smartphone.
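
    The shader pipeline itself cannot be reproduced from the abstract, but the signal path it implements (envelope detection followed by log compression) can be sketched in NumPy as a reference; the RF line, sampling parameters and 60 dB dynamic range below are invented for illustration.

      import numpy as np
      from scipy.signal import hilbert

      rng = np.random.default_rng(0)
      # Fake beamformed RF line: a few Gaussian echoes on a 5 MHz carrier
      fs, f0, n = 40e6, 5e6, 4096
      t = np.arange(n) / fs
      rf = sum(a * np.exp(-((t - d) * 2e6) ** 2) * np.sin(2 * np.pi * f0 * t)
               for a, d in [(1.0, 20e-6), (0.4, 55e-6), (0.1, 80e-6)])
      rf += rng.normal(0, 0.005, n)

      env = np.abs(hilbert(rf))                  # envelope detection
      env /= env.max()
      dyn_range = 60.0                           # display dynamic range in dB
      bmode = 20 * np.log10(np.maximum(env, 10 ** (-dyn_range / 20)))  # log compression
      print("B-mode sample range (dB):", bmode.min(), bmode.max())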

  18. Formulation and implementation of a practical algorithm for parameter estimation with process and measurement noise

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A new formulation is proposed for the problem of parameter estimation of dynamic systems with both process and measurement noise. The formulation gives estimates that are maximum likelihood asymptotically in time. The means used to overcome the difficulties encountered by previous formulations are discussed. It is then shown how the proposed formulation can be efficiently implemented in a computer program. A computer program using the proposed formulation is available in a form suitable for routine application. Examples with simulated and real data are given to illustrate that the program works well.

  19. Implementation of ILLIAC 4 algorithms for multispectral image interpretation. [earth resources data

    NASA Technical Reports Server (NTRS)

    Ray, R. M.; Thomas, J. D.; Donovan, W. E.; Swain, P. H.

    1974-01-01

    Research has focused on the design and partial implementation of a comprehensive ILLIAC software system for computer-assisted interpretation of multispectral earth resources data such as that now collected by the Earth Resources Technology Satellite. Research suggests generally that the ILLIAC 4 should be as much as two orders of magnitude more cost effective than serial processing computers for digital interpretation of ERTS imagery via multivariate statistical classification techniques. The potential of the ARPA Network as a mechanism for interfacing geographically-dispersed users to an ILLIAC 4 image processing facility is discussed.

  20. Parallel Implementation of Dispersive Tsunami Wave Modeling with a Nesting Algorithm for the 2011 Tohoku Tsunami

    NASA Astrophysics Data System (ADS)

    Baba, Toshitaka; Takahashi, Narumi; Kaneda, Yoshiyuki; Ando, Kazuto; Matsuoka, Daisuke; Kato, Toshihiro

    2015-12-01

    Because of improvements in offshore tsunami observation technology, dispersion phenomena during tsunami propagation have often been observed in recent tsunamis, for example the 2004 Indian Ocean and 2011 Tohoku tsunamis. The dispersive propagation of tsunamis can be simulated by use of the Boussinesq model, but the model demands many computational resources. However, rapid progress has been made in parallel computing technology. In this study, we investigated a parallelized approach for dispersive tsunami wave modeling. Our new parallel software solves the nonlinear Boussinesq dispersive equations in spherical coordinates. A variable nested algorithm was used to increase spatial resolution in the target region. The software can also be used to predict tsunami inundation on land. We used the dispersive tsunami model to simulate the 2011 Tohoku earthquake on the Supercomputer K. Good agreement was apparent between the dispersive wave model results and the tsunami waveforms observed offshore. The finest bathymetric grid interval was 2/9 arcsec (approx. 5 m) along longitude and latitude lines. Use of this grid simulated tsunami soliton fission near the Sendai coast. Incorporating the three-dimensional shape of buildings and structures led to improved modeling of tsunami inundation.

  1. Implementation of the Canny Edge Detection algorithm for a stereo vision system

    SciTech Connect

    Wang, J.R.; Davis, T.A.; Lee, G.K.

    1996-12-31

    There exist many applications in which three-dimensional information is necessary. For example, in manufacturing systems, parts inspection may require the extraction of three-dimensional information from two-dimensional images through the use of a stereo vision system. In medical applications, one may wish to reconstruct a three-dimensional image of a human organ from two or more transducer images. An important component of three-dimensional reconstruction is edge detection, whereby an image boundary is separated from the background for further processing. In this paper, a modification of the Canny edge detection approach is suggested to extract an image from a cluttered background. The resulting cleaned image can then be sent to the image matching, interpolation and inverse perspective transformation blocks to reconstruct the 3-D scene. A brief discussion of the stereo vision system that has been developed at the Mars Mission Research Center (MMRC) is also presented. Results of a version of the Canny edge detection algorithm show promise as an accurate edge extractor which may be used in the edge-pixel-based binocular stereo vision system.
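
    The paper's modification is not spelled out in the abstract, but the standard Canny baseline it builds on is a few lines with OpenCV; the file name and the 100/200 hysteresis thresholds below are illustrative assumptions.

      import cv2

      # Baseline Canny edge detection as a pre-processing step for stereo matching
      img = cv2.imread("left_view.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
      img = cv2.GaussianBlur(img, (5, 5), 1.4)   # suppress clutter before gradients
      edges = cv2.Canny(img, 100, 200)           # scene-dependent thresholds
      cv2.imwrite("left_edges.png", edges)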

  2. Implementation and validation of atmospheric compensation algorithms for Multispectral Thermal Imager (MTI) pipeline processing

    NASA Astrophysics Data System (ADS)

    Balick, Lee K.; Hirsch, Karen L.; McLachlan, Peter M.; Borel, Christoph C.; Clodius, William B.; Villeneuve, Pierre V.

    2000-11-01

    The Multispectral Thermal Imager (MTI) is a satellite system developed by the DoE. It has 10 spectral bands in the reflectance domain and 5 in the thermal IR. It is pointable and, at nadir, provides 5m IFOV in four visible and short near IR bands and 20m IFOV at longer wavelengths. Several of the bands in the reflectance domain were designed to enable quantitative compensation for aerosol effects and water vapor (daytime). These include 3 bands in and adjacent to the 940nm water vapor feature, a band at 1380nm for cirrus cloud detection and a SWIR band with small atmospheric effects. The concepts and development of these techniques have been described in detail at previous SPIE conferences and in journals. This paper describes the adaptation of these algorithms to the MTI automated processing pipeline (standardized level 2 products) for retrieval of aerosol optical depth (and subsequent compensation of reflectance bands for calibration to reflectance) and the atmospheric water vapor content (thermal IR compensation). Input data sources and flow are described. Validation results are presented. Pre-launch validation was performed using images from the NASA AVIRIS hyperspectral imaging sensor flown in the stratosphere on NASA ER-2 aircraft compared to ground based sun photometer and radiosonde measurements from different sources. These data sets span a range of environmental conditions.

  3. Implementation of a cellular neural network-based segmentation algorithm on the bio-inspired vision system

    NASA Astrophysics Data System (ADS)

    Karabiber, Fethullah; Grassi, Giuseppe; Vecchio, Pietro; Arik, Sabri; Yalcin, M. Erhan

    2011-01-01

    Based on the cellular neural network (CNN) paradigm, the bio-inspired (bi-i) cellular vision system is a computing platform consisting of state-of-the-art sensing, cellular sensing-processing and digital signal processing. This paper presents the implementation of a novel CNN-based segmentation algorithm onto the bi-i system. The experimental results, carried out for different benchmark video sequences, highlight the feasibility of the approach, which provides a frame rate of about 26 frame/sec. Comparisons with existing CNN-based methods show that, even though these methods are from two to six times faster than the proposed one, the conceived approach is more accurate and, consequently, represents a satisfying trade-off between real-time requirements and accuracy.

  4. Efficient Hardware Implementation of the Horn-Schunck Algorithm for High-Resolution Real-Time Dense Optical Flow Sensor

    PubMed Central

    Komorkiewicz, Mateusz; Kryjak, Tomasz; Gorgon, Marek

    2014-01-01

    This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This modification makes it possible to achieve a data throughput of 175 Mpixels/s and to process a Full HD video stream (1,920 × 1,080 @ 60 fps). The structure of the optical flow module, the pre- and post-filtering blocks and a flow reliability computation unit are described in detail. Three versions of the optical flow module, with different numerical precision, working frequency and result accuracy, are proposed. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the Middlebury University optical flow dataset and achieves state-of-the-art results among hardware implementations of single-scale methods. The designed fixed-point architecture achieves a performance of 418 GOPS with a power efficiency of 34 GOPS/W; the proposed floating-point module achieves 103 GFLOPS with a power efficiency of 24 GFLOPS/W. Moreover, a 100-times speedup compared to a modern CPU with SIMD support is reported. A complete working vision system realized on a Xilinx VC707 evaluation board is also presented; it is able to compute optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results prove that FPGA devices are an ideal platform for embedded vision systems. PMID:24526303
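
    For reference, the classical Horn-Schunck iteration that the pipeline realises can be written in a few lines of NumPy; this is a software sketch of the textbook algorithm, not the paper's FPGA architecture, and alpha and the iteration count are illustrative.

      import numpy as np
      from scipy.ndimage import convolve

      def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
          # Classical Horn-Schunck optical flow between two grayscale frames
          I1, I2 = I1.astype(float), I2.astype(float)
          kx = np.array([[-1, 1], [-1, 1]]) * 0.25
          ky = np.array([[-1, -1], [1, 1]]) * 0.25
          kt = np.ones((2, 2)) * 0.25
          Ix = convolve(I1, kx) + convolve(I2, kx)   # spatial gradients
          Iy = convolve(I1, ky) + convolve(I2, ky)
          It = convolve(I2, kt) - convolve(I1, kt)   # temporal gradient
          avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0
          u = np.zeros_like(I1)
          v = np.zeros_like(I1)
          for _ in range(n_iter):
              ub, vb = convolve(u, avg), convolve(v, avg)   # neighbourhood means
              common = (Ix * ub + Iy * vb + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
              u, v = ub - Ix * common, vb - Iy * common     # Horn-Schunck update
          return u, v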

  5. Implementation of the Hungarian Algorithm to Account for Ligand Symmetry and Similarity in Structure-Based Design

    PubMed Central

    2015-01-01

    False negative docking outcomes for highly symmetric molecules are a barrier to the accurate evaluation of docking programs, scoring functions, and protocols. This work describes an implementation of a symmetry-corrected root-mean-square deviation (RMSD) method into the program DOCK based on the Hungarian algorithm for solving the minimum assignment problem, which dynamically assigns atom correspondence in molecules with symmetry. The algorithm adds only a trivial amount of computation time to the RMSD calculations and is shown to increase the reported overall docking success rate by approximately 5% when tested over 1043 receptor–ligand systems. For some families of protein systems the results are even more dramatic, with success rate increases up to 16.7%. Several additional applications of the method are also presented including as a pairwise similarity metric to compare molecules during de novo design, as a scoring function to rank-order virtual screening results, and for the analysis of trajectories from molecular dynamics simulation. The new method, including source code, is available to registered users of DOCK6 (http://dock.compbio.ucsf.edu). PMID:24410429
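
    The core idea, re-assigning atom correspondence by solving a minimum assignment problem before computing RMSD, can be sketched with SciPy's Hungarian solver; this illustrates the concept only and is not DOCK's source code.

      import numpy as np
      from scipy.optimize import linear_sum_assignment
      from scipy.spatial.distance import cdist

      def symm_rmsd(ref, pose, elements):
          # Optimally re-assign correspondence within each element type,
          # then compute RMSD over the re-matched atoms.
          total, n = 0.0, len(ref)
          for e in set(elements):
              idx = [i for i, el in enumerate(elements) if el == e]
              cost = cdist(ref[idx], pose[idx]) ** 2   # squared-distance costs
              r, c = linear_sum_assignment(cost)       # Hungarian assignment
              total += cost[r, c].sum()
          return np.sqrt(total / n)

      # Two identical atoms swapped: naive RMSD is 1.0, corrected RMSD is 0.0
      ref = np.array([[0.0, 0, 0], [1.0, 0, 0]])
      pose = np.array([[1.0, 0, 0], [0.0, 0, 0]])
      print(symm_rmsd(ref, pose, ["C", "C"]))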

  6. Mathematical analysis and algorithms for efficiently and accurately implementing stochastic simulations of short-term synaptic depression and facilitation.

    PubMed

    McDonnell, Mark D; Mohan, Ashutosh; Stricker, Christian

    2013-01-01

    The release of neurotransmitter vesicles after arrival of a pre-synaptic action potential (AP) at cortical synapses is known to be a stochastic process, as is the availability of vesicles for release. These processes are known to also depend on the recent history of AP arrivals, and this can be described in terms of time-varying probabilities of vesicle release. Mathematical models of such synaptic dynamics frequently are based only on the mean number of vesicles released by each pre-synaptic AP, since if it is assumed there are sufficiently many vesicle sites, then variance is small. However, it has been shown recently that variance across sites can be significant for neuron and network dynamics, and this suggests the potential importance of studying short-term plasticity using simulations that do generate trial-to-trial variability. Therefore, in this paper we study several well-known conceptual models for stochastic availability and release. We state explicitly the random variables that these models describe and propose efficient algorithms for accurately implementing stochastic simulations of these random variables in software or hardware. Our results are complemented by mathematical analysis and statement of pseudo-code algorithms.
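
    In the spirit of the conceptual models the paper studies, a canonical stochastic depression model draws the number of released vesicles binomially and refills empty sites with an exponential recovery; the site count, release probability and recovery time below are illustrative assumptions, and the pseudo-code in the paper may differ.

      import numpy as np

      rng = np.random.default_rng(2)
      N_SITES, P_REL, TAU_REC = 10, 0.5, 0.5    # illustrative parameters
      ap_times = np.arange(0.0, 2.0, 0.05)      # 20 Hz presynaptic spike train

      occupied = N_SITES                        # vesicle sites currently filled
      last_t = 0.0
      for t in ap_times:
          # each empty site refills independently with prob 1 - exp(-dt/tau)
          p_fill = 1.0 - np.exp(-(t - last_t) / TAU_REC)
          occupied += rng.binomial(N_SITES - occupied, p_fill)
          released = rng.binomial(occupied, P_REL)   # trial-to-trial variability
          occupied -= released
          last_t = t
          print(f"t={t:.2f}s released={released} occupied={occupied}")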

  7. Implementation and testing of a real-time 3-component phase picking program for Earthworm using the CECM algorithm

    NASA Astrophysics Data System (ADS)

    Baker, B. I.; Friberg, P. A.

    2014-12-01

    Modern seismic networks typically deploy three-component (3C) sensors, but still fail to utilize all of the information available in the seismograms when performing automated phase picking for real-time event location. In most cases a variation on a short-term-over-long-term average threshold detector is used for picking, and an association program is then used to assign phase types to the picks. However, the 3C waveforms from an earthquake contain an abundance of information related to the P and S phases in both their polarization and energy partitioning. An approach that has been overlooked and has demonstrated encouraging results is the Component Energy Comparison Method (CECM) of Nagano et al., published in Geophysics in 1989. CECM is well suited to real-time use because the calculation is not computationally intensive. Furthermore, the CECM method has fewer tuning variables (3) than traditional pickers in Earthworm such as the Rex Allen algorithm (N=18) or even the Anthony Lomax Filter Picker module (N=5). In addition to computing the CECM detector, we study the detector sensitivity by rotating the signal into principal components, as well as estimating the P-phase onset from a curvature function describing the CECM rather than from the CECM itself. We present our results implementing this algorithm in a real-time module for Earthworm and show the improved phase picks compared to the traditional single-component pickers in Earthworm.
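
    As a generic illustration of comparing energy partitioning across the three components (not Nagano et al.'s exact CECM formulation, whose details the abstract does not give), a sliding-window vertical-to-horizontal energy ratio can be computed as follows; the window length and the interpretation of the ratio are assumptions.

      import numpy as np

      def energy_ratio(z, n, e, win=50):
          # Short-window energy on each component via cumulative sums; the
          # vertical-to-horizontal ratio tends to rise near a steeply
          # incident P arrival and fall near the S arrival.
          def w_energy(x):
              c = np.cumsum(x.astype(float) ** 2)
              return c[win:] - c[:-win]
          ez, eh = w_energy(z), w_energy(n) + w_energy(e)
          return ez / (eh + 1e-12)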

  8. Implementation of the Hungarian algorithm to account for ligand symmetry and similarity in structure-based design.

    PubMed

    Allen, William J; Rizzo, Robert C

    2014-02-24

    False negative docking outcomes for highly symmetric molecules are a barrier to the accurate evaluation of docking programs, scoring functions, and protocols. This work describes an implementation of a symmetry-corrected root-mean-square deviation (RMSD) method into the program DOCK based on the Hungarian algorithm for solving the minimum assignment problem, which dynamically assigns atom correspondence in molecules with symmetry. The algorithm adds only a trivial amount of computation time to the RMSD calculations and is shown to increase the reported overall docking success rate by approximately 5% when tested over 1043 receptor-ligand systems. For some families of protein systems the results are even more dramatic, with success rate increases up to 16.7%. Several additional applications of the method are also presented including as a pairwise similarity metric to compare molecules during de novo design, as a scoring function to rank-order virtual screening results, and for the analysis of trajectories from molecular dynamics simulation. The new method, including source code, is available to registered users of DOCK6 ( http://dock.compbio.ucsf.edu ).

  9. An implementation of the TFQMR - algorithm on a distributed memory machine

    SciTech Connect

    Buecker, M.

    1994-12-31

    A fundamental task of numerical computing is to solve linear systems Ax = b, which can be labeled as (1). Such systems are essential parts of many scientific problems; e.g., finite element methods contain linear systems that approximate partial differential equations. Linear systems can be solved by direct as well as iterative methods. Using a direct method means factorizing the coefficient matrix; a typical and well-known direct method is Gaussian elimination. Direct methods work well as long as the systems remain small. Unfortunately, there is a need for the solution of large linear systems, and when systems get large, direct methods result in enormous computing time because of their complexity. As an example, take (1) with an N x N matrix A. Then the solution of (1) can be computed by Gaussian elimination requiring O(N^3) operations. Additionally, an implementation of a direct method has to take care of the high storage requirement. In contrast to direct methods, iterative methods use successive approximations to obtain more accurate solutions to a linear system at each step. The iteration process generates a sequence of iterates x_n converging to the solution x of (1). The iteration process ends if either x_n fulfills a chosen convergence criterion or breakdowns, i.e. divisions by zero, occur. Possible breakdowns can be avoided by look-ahead techniques.
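
    SciPy ships a TFQMR solver (scipy.sparse.linalg.tfqmr, available in SciPy 1.8 and later), so the iterative approach can be tried on a sparse system in a few lines; this uses the library routine, not the distributed-memory implementation the paper describes, and the 1-D Laplacian test matrix is an invented example.

      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import tfqmr   # SciPy >= 1.8

      n = 1000
      # Tridiagonal 1-D Laplacian as a stand-in for a FEM system matrix
      A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
      b = np.ones(n)

      x, info = tfqmr(A, b)   # info == 0 means the iteration converged
      print(info, np.linalg.norm(A @ x - b))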

  10. Optical implementation of cipher block chaining mode algorithm using phase-shifting digital holography

    NASA Astrophysics Data System (ADS)

    Jeon, Seok-Hee; Gil, Sang-Keun

    2016-12-01

    We propose an optical design of cipher block chaining (CBC) encryption mode using digital holography, which is implemented by the two-step quadrature phase-shifting digital holographic encryption technique using orthogonal polarization. A block of plain text is encrypted with the encryption key by applying the two-step phase-shifting digital holographic method; then, it is changed into cipher text blocks which are digital holograms. Optically, these digital holograms with the encrypted information are Fourier transform holograms and are recorded onto charge-coupled devices with 256 quantization gray levels. This means that the proposed optical CBC encryption is a scheme that has an analog-type of pseudorandom pattern information in the cipher text, while the conventional electronic CBC encryption is a kind of bitwise block message encryption processed by digital bits. Also, the proposed method enables the cryptosystem to have higher security strength and faster processing than the conventional electronic method because of the large two-dimensional (2-D) array key space and parallel processing. The results of computer simulations verify that the proposed optical CBC encryption design is very effective in CBC mode due to fast and secure optical encryption of 2-D data and shows the feasibility for the CBC encryption mode.

  11. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    NASA Astrophysics Data System (ADS)

    Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.
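
    The underlying iteration is a Richardson-Lucy-type deconvolution; a plain (non-ordered-subset) 2-D version is sketched below as a reference for the kind of update the paper accelerates. The flat initial estimate and iteration count are assumptions.

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(observed, psf, n_iter=25):
          # Plain Richardson-Lucy deconvolution of a 2-D image; the paper
          # speeds up an iteration of this kind using ordered subsets.
          est = np.full_like(observed, observed.mean())
          psf_flip = psf[::-1, ::-1]
          for _ in range(n_iter):
              blurred = fftconvolve(est, psf, mode="same")
              ratio = observed / np.maximum(blurred, 1e-12)
              est *= fftconvolve(ratio, psf_flip, mode="same")
          return est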

  12. Automated infrasound signal detection algorithms implemented in MatSeis - Infra Tool.

    SciTech Connect

    Hart, Darren

    2004-07-01

    MatSeis's infrasound analysis tool, Infra Tool, uses frequency-slowness processing to deconstruct array data into three outputs per processing step: correlation, azimuth and slowness. Until now, infrasound signal detection was accomplished manually by an experienced analyst trained to recognize patterns in the signal-processing outputs. Our goal was to automate the process of infrasound signal detection. The critical aspect of infrasound signal detection is to identify consecutive processing steps where the azimuth is constant (flat) while the time-lag correlation of the windowed waveform is above the background value; these two conditions describe the arrival of a correlated set of wavefronts at an array. The Hough Transform and Inverse Slope methods are used to determine the representative slope for a specified number of azimuth data points. The representative slope is then used in conjunction with the associated correlation value and azimuth data variance to determine if and when an infrasound signal was detected. A format for an infrasound signal detection output file is also proposed. The detection output file lists the processed array element names, followed by detection characteristics for each method. Each detection is supplied with a listing of frequency-slowness processing characteristics: human time (YYYY/MM/DD HH:MM:SS.SSS), epochal time, correlation, fstat, azimuth (deg) and trace velocity (km/s). As an example, a ground-truth event was processed using the four-element DLIAR infrasound array located in New Mexico. The event is known as the Watusi chemical explosion, which occurred on 2002/09/28 at 21:25:17 with an explosive yield of 38,000 lb TNT equivalent. Knowing the source and array location, the array-to-event distance was computed to be approximately 890 km. This test determined the station-to-event azimuth (281.8 and 282.1 degrees) to within 1.6 and 1.4 degrees for the Inverse Slope and Hough Transform detection algorithms, respectively.

  13. GPUDePiCt: A Parallel Implementation of a Clustering Algorithm for Computing Degenerate Primers on Graphics Processing Units.

    PubMed

    Cickovski, Trevor; Flor, Tiffany; Irving-Sachs, Galen; Novikov, Philip; Parda, James; Narasimhan, Giri

    2015-01-01

    In order to make multiple copies of a target sequence in the laboratory, the technique of Polymerase Chain Reaction (PCR) requires the design of "primers", which are short fragments of nucleotides complementary to the flanking regions of the target sequence. If the same primer is to amplify multiple closely related target sequences, then it is necessary to make the primers "degenerate", which would allow it to hybridize to target sequences with a limited amount of variability that may have been caused by mutations. However, the PCR technique can only allow a limited amount of degeneracy, and therefore the design of degenerate primers requires the identification of reasonably well-conserved regions in the input sequences. We take an existing algorithm for designing degenerate primers that is based on clustering and parallelize it in a web-accessible software package GPUDePiCt, using a shared memory model and the computing power of Graphics Processing Units (GPUs). We test our implementation on large sets of aligned sequences from the human genome and show a multi-fold speedup for clustering using our hybrid GPU/CPU implementation over a pure CPU approach for these sequences, which consist of more than 7,500 nucleotides. We also demonstrate that this speedup is consistent over larger numbers and longer lengths of aligned sequences.

  14. Conical intersections in solution: formulation, algorithm, and implementation with combined quantum mechanics/molecular mechanics method.

    PubMed

    Cui, Ganglong; Yang, Weitao

    2011-05-28

    The significance of conical intersections in the photophysics, photochemistry, and photodissociation of polyatomic molecules in the gas phase has been demonstrated by numerous experimental and theoretical studies. Optimization of conical intersections of small- and medium-size molecules in the gas phase has become a routine process, as it has been implemented in many electronic structure packages. However, optimization of conical intersections of small- and medium-size molecules in solution or in macromolecules remains inefficient, even poorly defined, due to the large number of degrees of freedom and the costly evaluations of gradient difference and nonadiabatic coupling vectors. In this work, based on the sequential quantum mechanics and molecular mechanics (QM/MM) and QM/MM-minimum free energy path methods, we have designed two conical intersection optimization methods for small- and medium-size molecules in solution or macromolecules. The first is sequential QM conical intersection optimization and MM minimization, for potential energy surfaces; the second is sequential QM conical intersection optimization and MM sampling, for potential of mean force surfaces, i.e., free energy surfaces. In these methods, the region where the electronic structure changes remarkably is placed in the QM subsystem, while the rest of the system is placed in the MM subsystem; thus, the dimensionalities of the gradient difference and nonadiabatic coupling vectors are decreased due to the relatively small QM subsystem. Furthermore, in comparison with a concurrent optimization scheme, sequential QM conical intersection optimization and MM minimization or sampling reduces the number of evaluations of the gradient difference and nonadiabatic coupling vectors, because these vectors need to be calculated only when the QM subsystem moves, independent of the MM minimization or sampling. Taken together, the costly evaluations of gradient difference and nonadiabatic coupling vectors in solution or macromolecules are substantially reduced.

  15. Implementation of a parallel algorithm for thermo-chemical nonequilibrium flow simulations

    NASA Astrophysics Data System (ADS)

    Wong, C. C.; Blottner, F. G.; Payne, J. L.; Soetrisno, M.

    1995-01-01

    Massively parallel (MP) computing is considered to be the future direction of high-performance computing. When engineers apply this new MP computing technology to solve large-scale problems, one major question is the maximum problem size that an MP computer can handle. To determine the maximum size, it is important to address the code scalability issue. Scalability refers to whether the code can provide an increase in performance proportional to an increase in problem size: if the size of the problem increases, then by utilizing more computer nodes, the ideal elapsed time to simulate the problem should not increase much. Hence one important task in the development of MP computing technology is to ensure scalability; a scalable code is an efficient code. In order to obtain good scaled performance, it is necessary to first optimize the code for single-node performance before proceeding to a large-scale simulation with a large number of computer nodes. This paper discusses the implementation of a massively parallel computing strategy and the process of optimization to improve the scaled performance. Specifically, we look at domain decomposition, resource management in the code, communication overhead, and problem mapping. By incorporating these improvements and adopting an efficient MP computing strategy, efficiencies of about 85% and 96% have been achieved using 64 nodes on MP computers for perfect gas and chemically reactive gas problems, respectively. A comparison of the performance between MP computers and a vectorized computer, such as the Cray Y-MP, is also presented.

  16. Thermo-mechanical Modelling of Pebble Beds in Fusion Blankets and its Implementation by a Return-Mapping Algorithm

    SciTech Connect

    Gan, Yixiang; Kamlah, Marc

    2008-07-01

    In this investigation, a thermo-mechanical model of pebble beds is adopted and developed based on experiments by Dr. Reimann at Forschungszentrum Karlsruhe (FZK). The framework of the present material model is composed of a non-linear elastic law, the Drucker-Prager-Cap theory, and a modified creep law. Furthermore, the volumetric-inelastic-strain-dependent thermal conductivity of beryllium pebble beds is taken into account and full thermo-mechanical coupling is considered. Investigation showed that the Drucker-Prager-Cap model implemented in ABAQUS cannot fulfill the requirements of both the prediction of large creep strains and the hardening behaviour caused by creep, which are of importance with respect to the application of pebble beds in fusion blankets. Therefore, UMAT (user-defined material mechanical behaviour) and UMATHT (user-defined material thermal behaviour) routines are used to re-implement the present thermo-mechanical model in ABAQUS. An elastic-predictor radial return mapping algorithm is used to solve the non-associated plasticity iteratively, and a proper tangent stiffness matrix is obtained for cost-efficiency in the calculation. An explicit creep mechanism is adopted for the prediction of time-dependent behaviour in order to represent large creep strains at high temperature. Finally, the thermo-mechanical interactions are implemented in a UMATHT routine for the coupled analysis. The oedometric compression tests and creep tests of pebble beds at different temperatures are simulated with the help of the present UMAT and UMATHT routines, and a comparison between simulation and experiment is made. (authors)
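
    The elastic-predictor/return-mapping idea is easiest to see in its textbook 1-D von Mises form with linear isotropic hardening, sketched below; the paper applies the same algorithm class to the far richer Drucker-Prager-Cap model with creep, and the material constants here are illustrative.

      def radial_return_1d(eps, state, E=200e3, H=2e3, sigma_y0=250.0):
          # One strain-driven step of elastic-predictor / return-mapping
          # plasticity. state = (plastic strain, accumulated plastic strain).
          eps_p, alpha = state
          sigma_trial = E * (eps - eps_p)                 # elastic predictor
          f = abs(sigma_trial) - (sigma_y0 + H * alpha)   # trial yield function
          if f <= 0.0:
              return sigma_trial, (eps_p, alpha), E       # elastic step
          dgamma = f / (E + H)                            # plastic corrector
          sign = 1.0 if sigma_trial > 0 else -1.0
          sigma = sigma_trial - E * dgamma * sign         # return to yield surface
          state = (eps_p + dgamma * sign, alpha + dgamma)
          return sigma, state, E * H / (E + H)            # consistent tangent

      # Drive one strain increment past yield (sigma_trial = 400 > 250)
      print(radial_return_1d(0.002, (0.0, 0.0)))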

  17. Implementation of algorithms to discriminate chemical/biological airbursts from high explosive airbursts utilizing acoustic signatures

    NASA Astrophysics Data System (ADS)

    Hohil, Myron E.; Desai, Sachi; Morcos, Amir

    2006-05-01

    (PAWSS) Limited Objective Experiment (LOE) conducted by Joint Project Manager for Nuclear Biological Contamination Avoidance (JPM NBC CA) and a matrixed team from Edgewood Chemical and Biological Center (ECBC) at ranges exceeding 3 km. The details of the field-test experiment and real-time implementation/integration of the standalone acoustic sensor system are discussed herein.

  18. Implementation of algorithms to discriminate between chemical/biological airbursts and high explosive airbursts

    NASA Astrophysics Data System (ADS)

    Hohil, Myron E.; Desai, Sachi; Morcos, Amir

    2006-09-01

    (PAWSS) Limited Objective Experiment (LOE) conducted by Joint Project Manager for Nuclear Biological Contamination Avoidance (JPM NBC CA) and a matrixed team from Edgewood Chemical and Biological Center (ECBC) at ranges exceeding 3 km. The details of the field-test experiment and real-time implementation/integration of the stand-alone acoustic sensor system are discussed herein.

  19. An implementation of the Levenberg-Marquardt algorithm for simultaneous-energy-gradient fitting using two-layer feed-forward neural networks

    NASA Astrophysics Data System (ADS)

    Nguyen-Truong, Hieu T.; Le, Hung M.

    2015-06-01

    We present in this study a new and robust algorithm for feed-forward neural network (NN) fitting. This method is developed for application in potential energy surface (PES) construction, in which simultaneous energy-gradient fitting is implemented using the well-established Levenberg-Marquardt (LM) algorithm. Three fitting examples are demonstrated: the vibrational PES of H2O and the reactive PESs of O3 and ClOOCl. In the three test cases, our new LM implementation has been shown to work very efficiently. Besides increasing fitting accuracy, it also offers two other advantages: fewer training iterations and fewer data points are required for fitting.
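
    The key device, stacking energy residuals and gradient residuals into a single LM problem, can be sketched with SciPy on a toy 1-D potential; the tiny tanh "network", the gradient weight of 0.1 and the finite-difference model gradient are stand-ins, not the authors' formulation.

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(3)
      x = np.linspace(-1.5, 1.5, 40)
      E_ref = x ** 4 - x ** 2            # toy 1-D double-well potential
      G_ref = 4 * x ** 3 - 2 * x         # its analytic gradient

      def model(p, x):
          # minimal one-hidden-layer network: 3 tanh units
          w1, b1, w2 = p[:3], p[3:6], p[6:9]
          return np.tanh(np.outer(x, w1) + b1) @ w2

      def grad_model(p, x, h=1e-5):
          # model gradient wrt x by central finite difference
          return (model(p, x + h) - model(p, x - h)) / (2 * h)

      def residuals(p):
          # stacking energy and (weighted) gradient errors fits both at once
          return np.concatenate([model(p, x) - E_ref,
                                 0.1 * (grad_model(p, x) - G_ref)])

      fit = least_squares(residuals, rng.normal(0, 0.5, 9), method="lm")
      print("final cost:", fit.cost)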

  20. Neural network and fuzzy logic based secondary cells charging algorithm development and the controller architecture for implementation

    NASA Astrophysics Data System (ADS)

    Ullah, Muhammed Zafar

    Neural networks and fuzzy logic are two key technologies that have recently received growing attention in solving real-world, nonlinear, time-variant problems. Because of their learning and/or reasoning capabilities, these techniques do not need a mathematical model of the system, which may be difficult, if not impossible, to obtain for complex systems. One of the major problems in the portable-device and electric-vehicle world is secondary-cell charging, which shows nonlinear characteristics. Portable electronic equipment, such as notebook computers, cordless and cellular telephones and cordless electric lawn tools, uses batteries in increasing numbers, and consumers demand fast charging times, increased battery lifetime and fuel-gauge capabilities. All of these demands require that the state of charge within a battery be known. Charging secondary cells fast is a problem that is difficult to solve using conventional techniques. Charge control is important in fast charging, preventing overcharging and improving battery life. This research work provides a quick and reliable approach to charger design using neural-fuzzy technology, which learns the exact battery charging characteristics. Neural-fuzzy technology is an intelligent combination of a neural net with fuzzy logic that learns system behavior from system input-output data rather than from mathematical modeling. The primary objective of this research is to improve the secondary-cell charging algorithm and achieve faster charging times based on neural network and fuzzy logic techniques. A new controller architecture is also developed for implementing the charging algorithm for the secondary battery.

  1. Implementation of a New DTSTEP Algorithm for use in RELAP5-3D and PVMEXEC Completion Report

    SciTech Connect

    Dr. George L Mesina

    2010-12-01

    The PVM coupling methodology for decomposing a complex model into domains, onto which individual programs may be applied, has proven effective for solving many multi-physics problems. There have been, from the outset, some detailed and/or long-running models that cause the process to fail. This project addressed the PVM coupling issues surrounding the DTSTEP subroutines in RELAP5-3D and PVMEXEC. Some 25 errors are listed in Tables 1 and 18 and in Section 11. These arise from deficiencies in the floating-point calculation and testing of time steps, cumulative time, and time targets. An algorithmic replacement of floating-point control of these items with integer-based timestepping was developed and implemented. The result of the first phase, undertaken by the INL, was that all but three of these issues were resolved. Moreover, two conceptual errors in DTSTEP that were not related to PVM coupling were discovered and solved. The final, and most difficult, three PVM Bettis user problems were solved during the Bettis phase of development and debugging. In the 8 months since the conclusion of the project, no further DTSTEP-related PVM coupling errors have been reported.
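
    The failure mode being fixed, and the remedy, can be shown in a few lines: floating-point accumulation of time steps drifts and misses exact targets, while counting integer ticks does not. This is a toy demonstration of the principle, not RELAP5-3D code, and the 1-microsecond tick is an invented resolution.

      # Floating-point accumulation: 0.1 s has no exact binary representation,
      # so the running time drifts and an exact 1.0 s target test fails.
      t, dt = 0.0, 0.1
      for _ in range(10):
          t += dt
      print(t == 1.0, t)                      # False 0.9999999999999999

      # Integer-based timestepping counts fixed ticks and hits targets exactly.
      TICK = 1e-6                             # illustrative tick resolution
      t_ticks, dt_ticks = 0, round(0.1 / TICK)
      for _ in range(10):
          t_ticks += dt_ticks
      print(t_ticks == round(1.0 / TICK))     # True: target reached exactly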

  2. Optical implementation of neural learning algorithms based on cross-gain modulation in a semiconductor optical amplifier

    NASA Astrophysics Data System (ADS)

    Li, Qiang; Wang, Zhi; Le, Yansi; Sun, Chonghui; Song, Xiaojia; Wu, Chongqing

    2016-10-01

    Neuromorphic engineering has a wide range of applications in the fields of machine learning, pattern recognition, adaptive control, etc. Photonics, characterized by its high speed, wide bandwidth, low power consumption and massive parallelism, is an ideal way to realize ultrafast spiking neural networks (SNNs). Synaptic plasticity is believed to be critical for learning, memory and development in neural circuits, and experimental results have shown that changes in synaptic strength depend strongly on the relative timing of pre- and postsynaptic spikes. Plasticity in which a presynaptic spike preceding a postsynaptic spike results in strengthening, while the opposite timing results in weakening, is called the antisymmetric spike-timing-dependent plasticity (STDP) learning rule; plasticity with the opposite effect under the same timing conditions is called the antisymmetric anti-STDP learning rule. We proposed and experimentally demonstrated an optical implementation of neural learning algorithms that can achieve both the antisymmetric STDP and the anti-STDP learning rules, based on cross-gain modulation (XGM) within a single semiconductor optical amplifier (SOA). The width and height of the potentiation and depression windows can be controlled by adjusting the injection current of the SOA, to mimic the biological antisymmetric STDP and anti-STDP learning rules more realistically. As the injection current increases, the widths of the depression and potentiation windows decrease and their heights increase, owing to the shorter recovery time and higher gain under a stronger injection current. Based on the demonstrated optical STDP circuit, ultrafast learning in optical SNNs can be realized.
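
    The two learning windows are commonly modeled as sign-flipped exponentials of the spike-time difference; the sketch below uses that standard form with invented amplitude and time constants, whereas in the paper the window's width and height are set optically by the SOA injection current.

      import numpy as np

      def stdp(dt, a_plus=1.0, a_minus=1.0, tau=20.0):
          # Antisymmetric STDP window: potentiation when the presynaptic
          # spike leads (dt = t_post - t_pre > 0), depression when it lags.
          return np.where(dt > 0, a_plus * np.exp(-dt / tau),
                                  -a_minus * np.exp(dt / tau))

      def anti_stdp(dt, **kw):
          # Anti-STDP is the same window with the sign flipped
          return -stdp(dt, **kw)

      dt = np.array([-40.0, -10.0, 10.0, 40.0])   # spike-time differences in ms
      print(stdp(dt), anti_stdp(dt))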

  3. Suboptimal implementation of diagnostic algorithms and overuse of computed tomography-pulmonary angiography in patients with suspected pulmonary embolism

    PubMed Central

    Alhassan, Sulaiman; Sayf, Alaa Abu; Arsene, Camelia; Krayem, Hicham

    2016-01-01

    BACKGROUND: The majority of our computed tomography-pulmonary angiography (CTPA) scans report negative findings. We hypothesized that suboptimal reliance on diagnostic algorithms contributes to apparent overuse of this test. METHODS: A retrospective review was performed on 2031 CTPA cases in a large hospital system. Investigators retrospectively calculated pretest probability (PTP). Use of CTPA was considered inappropriate when it was ordered for patients with low PTP without checking D-dimer (DD) or following a negative DD. RESULTS: Among the 2031 cases, pulmonary embolism (PE) was found in 7.4% (151 cases). About 1784 patients (88%) were considered “PE unlikely” based on the Wells score. Of those patients, 1084 (61%) did not have a DD test prior to CTPA. In addition, 78 patients with negative DD underwent unnecessary CTPA; none of them had PE. CONCLUSIONS: The suboptimal implementation of PTP assessment tools can result in the overuse of CTPA, contributing to ineffective utilization of hospital resources, increased cost, and potential harm to patients. PMID:27803751
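
    The pre-test-probability gate whose violations the study counts can be written as a short decision rule; the 4-point Wells cutoff and 500 ng/mL D-dimer threshold used below are the conventional values and are assumed here, since the abstract does not state the exact cutoffs used.

      def ctpa_indicated(wells_score, d_dimer_ng_ml=None):
          # Sketch of the diagnostic algorithm: 'PE unlikely' patients should
          # get a D-dimer first, and a negative D-dimer should end the workup.
          if wells_score > 4:              # PE likely: image directly
              return True
          if d_dimer_ng_ml is None:        # ordering CTPA here is the overuse pattern
              raise ValueError("check D-dimer before CTPA in 'PE unlikely' patients")
          return d_dimer_ng_ml >= 500      # negative D-dimer: CTPA not indicated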

  4. Microprocessor-based integration of microfluidic control for the implementation of automated sensor monitoring and multithreaded optimization algorithms.

    PubMed

    Ezra, Elishai; Maor, Idan; Bavli, Danny; Shalom, Itai; Levy, Gahl; Prill, Sebastian; Jaeger, Magnus S; Nahmias, Yaakov

    2015-08-01

    Microfluidic applications range from combinatorial synthesis to high throughput screening, with platforms integrating analog perfusion components, digitally controlled micro-valves and a range of sensors that demand a variety of communication protocols. Currently, discrete control units are used to regulate and monitor each component, resulting in scattered control interfaces that limit data integration and synchronization. Here, we present a microprocessor-based control unit, utilizing the MS Gadgeteer open framework that integrates all aspects of microfluidics through a high-current electronic circuit that supports and synchronizes digital and analog signals for perfusion components, pressure elements, and arbitrary sensor communication protocols using a plug-and-play interface. The control unit supports an integrated touch screen and TCP/IP interface that provides local and remote control of flow and data acquisition. To establish the ability of our control unit to integrate and synchronize complex microfluidic circuits we developed an equi-pressure combinatorial mixer. We demonstrate the generation of complex perfusion sequences, allowing the automated sampling, washing, and calibrating of an electrochemical lactate sensor continuously monitoring hepatocyte viability following exposure to the pesticide rotenone. Importantly, integration of an optical sensor allowed us to implement automated optimization protocols that require different computational challenges including: prioritized data structures in a genetic algorithm, distributed computational efforts in multiple-hill climbing searches and real-time realization of probabilistic models in simulated annealing. Our system offers a comprehensive solution for establishing optimization protocols and perfusion sequences in complex microfluidic circuits.

  5. ``Spectral implementation'' for creating a labeled pseudo-pure state and the Bernstein-Vazirani algorithm in a four-qubit nuclear magnetic resonance quantum processor

    NASA Astrophysics Data System (ADS)

    Peng, Xinhua; Zhu, Xiwen; Fang, Ximing; Feng, Mang; Liu, Maili; Gao, Kelin

    2004-02-01

    A quantum circuit is introduced to describe the preparation of a labeled pseudo-pure state by multiplet-component excitation scheme which has been experimentally implemented on a 4-qubit nuclear magnetic resonance quantum processor. Meanwhile, we theoretically analyze and numerically investigate the low-power selective single-pulse implementation of a controlled-rotation gate, which manifests its validity in our experiment. Based on the labeled pseudo-pure state prepared, a 3-qubit Bernstein-Vazirani algorithm has been experimentally demonstrated by spectral implementation. The "answers" of the computations are identified from the split peak positions in the spectra of the observer spin, which are equivalent to projective measurements required by the algorithms.
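
    What the Bernstein-Vazirani circuit computes is easy to verify with a classical state-vector simulation: Hadamards, one phase-oracle query encoding the hidden string a, and Hadamards again return |a> with certainty. The sketch below simulates the circuit only; it says nothing about the NMR spectral implementation itself.

      import numpy as np
      from itertools import product

      def bernstein_vazirani(a_bits):
          # Simulate H^n |0..0>, the phase oracle (-1)^(a.x), then H^n again
          n = len(a_bits)
          xs = list(product([0, 1], repeat=n))
          state = np.full(2 ** n, 2 ** (-n / 2))              # H^n on |0..0>
          for i, x in enumerate(xs):                          # one oracle query
              state[i] *= (-1) ** (sum(a * b for a, b in zip(a_bits, x)) % 2)
          H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
          Hn = H
          for _ in range(n - 1):
              Hn = np.kron(Hn, H)
          out = Hn @ state
          return xs[int(np.argmax(np.abs(out)))]              # measured register

      print(bernstein_vazirani((1, 0, 1)))   # recovers the hidden string (1, 0, 1)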

  6. The challenges of implementing and testing two signal processing algorithms for high rep-rate Coherent Doppler Lidar for wind sensing

    NASA Astrophysics Data System (ADS)

    Abdelazim, S.; Santoro, D.; Arend, M.; Moshary, F.; Ahmed, S.

    2015-05-01

    In this paper, we present two signal processing algorithms implemented on an FPGA. The first algorithm involves explicit time gating of received signals corresponding to a desired spatial resolution, performing a fast Fourier transform (FFT) on each individual time gate, taking the square modulus of the FFT to form a power spectrum, and then accumulating these power spectra over 10k return signals. The second algorithm involves calculating the autocorrelation of the backscattered signals and then accumulating the autocorrelation over 10k pulses. Efficient implementation of each of these two signal processing algorithms on an FPGA is challenging because it requires tradeoffs between retaining the full data word width, managing the amount of on-chip memory used, and respecting the constraints imposed by the data width of the FPGA. The approach used to manage these tradeoffs for each of the two signal processing algorithms is described and explained in this article. Results of atmospheric measurements obtained through these two embedded programming techniques are also presented.
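
    The first algorithm (gate, FFT, square modulus, accumulate) is compact enough to sketch in NumPy as a functional reference for the FPGA pipeline; the gate length, gate count and white-noise stand-in for the digitized return are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(4)
      N_PULSES, GATE, N_GATES = 10_000, 256, 8     # illustrative sizes

      # Accumulated power spectrum per range gate
      accum = np.zeros((N_GATES, GATE))
      for _ in range(N_PULSES):
          shot = rng.normal(0, 1, N_GATES * GATE)  # stand-in for a return signal
          gates = shot.reshape(N_GATES, GATE)      # explicit time gating
          accum += np.abs(np.fft.fft(gates, axis=1)) ** 2   # square modulus of FFT

      # Doppler estimate: peak of the accumulated spectrum in each gate
      print(np.argmax(accum[:, 1:GATE // 2], axis=1) + 1)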

  7. Implementation on GPU-based acceleration of the m-line reconstruction algorithm for circle-plus-line trajectory computed tomography

    NASA Astrophysics Data System (ADS)

    Li, Zengguang; Xi, Xiaoqi; Han, Yu; Yan, Bin; Li, Lei

    2016-10-01

    The circle-plus-line trajectory satisfies the exact-reconstruction data sufficiency condition and can be applied in C-arm X-ray computed tomography (CT) systems to increase reconstruction image quality at large cone angles. The m-line reconstruction algorithm is adopted for this trajectory: the selection of the direction of the m-lines is quite flexible, and the m-line algorithm needs less data for accurate reconstruction than FDK-type algorithms. However, the computational complexity of the algorithm is too large for efficient serial processing, and the reconstruction speed has become an important issue limiting practical applications, so accelerating the algorithm is of great significance. Compared with other hardware accelerations, the graphics processing unit (GPU) has become the mainstream in CT image reconstruction. GPU acceleration has achieved good results for FDK-type algorithms, but accelerating the m-line algorithm for the circle-plus-line trajectory differs from the FDK case, and the parallelism of the circle-plus-line algorithm must be analyzed to design an appropriate acceleration strategy. The implementation can be divided into the following steps: first, selecting m-lines that cover the entire object to be reconstructed; second, calculating the differentiated back projection at the points on the m-lines; third, performing Hilbert filtering along the m-line direction; finally, reassembling the m-line reconstruction results in three dimensions to obtain the reconstruction in Cartesian coordinates. In this paper, we design reasonable GPU acceleration strategies for each step to improve the reconstruction speed as much as possible; the main contribution is an appropriate acceleration strategy for the circle-plus-line trajectory m-line reconstruction algorithm. A Shepp-Logan phantom is used to simulate the experiment on a single K20 GPU.

  8. Implementing a C++ Version of the Joint Seismic-Geodetic Algorithm for Finite-Fault Detection and Slip Inversion for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Smith, D. E.; Felizardo, C.; Minson, S. E.; Boese, M.; Langbein, J. O.; Guillemot, C.; Murray, J. R.

    2015-12-01

    The earthquake early warning (EEW) systems in California and elsewhere can greatly benefit from algorithms that generate estimates of finite-fault parameters. These estimates could significantly improve real-time shaking calculations and yield important information for immediate disaster response. Minson et al. (2015) determined that combining FinDer's seismic-based algorithm (Böse et al., 2012) with BEFORES' geodetic-based algorithm (Minson et al., 2014) yields a more robust and informative joint solution than using either algorithm alone. FinDer examines the distribution of peak ground accelerations from seismic stations and determines the best finite-fault extent and strike from template matching. BEFORES employs a Bayesian framework to search for the best slip inversion over all possible fault geometries in terms of strike and dip. Using FinDer and BEFORES together generates estimates of finite-fault extent, strike, dip, preferred slip, and magnitude. To yield the quickest, most flexible, and open-source version of the joint algorithm, we translated BEFORES and FinDer from Matlab into C++. We are now developing a C++ application programming interface for these two algorithms to be connected to the seismic and geodetic data flowing from the EEW system. The interface being developed will also enable communication between the two algorithms to generate the joint solution of finite-fault parameters. Once this interface is developed and implemented, the next step will be to run test seismic and geodetic data through the system via the Earthworm module Tank Player. This will allow us to examine algorithm performance on simulated data and past real events.

  9. Experimental implementation of a robust damped-oscillation control algorithm on a full-sized, two-degree-of-freedom, AC induction motor-driven crane

    SciTech Connect

    Kress, R.L.; Jansen, J.F.; Noakes, M.W.

    1994-05-01

    When suspended payloads are moved with an overhead crane, pendulum-like oscillations are naturally introduced. This presents a problem any time a crane is used, especially when expensive and/or delicate objects are moved, when moving in a cluttered and/or hazardous environment, and when objects are to be placed in tight locations. Damped-oscillation control algorithms have been demonstrated over the past several years for laboratory-scale robotic systems on dc motor-driven overhead cranes. Most overhead cranes presently in use in industry are driven by ac induction motors; consequently, Oak Ridge National Laboratory has implemented damped-oscillation crane control on one of its existing facility ac induction motor-driven overhead cranes. The purpose of this test was to determine feasibility, to work out control and interfacing specifications, and to establish the capability of newly available ac motor control hardware with respect to use in damped-oscillation-controlled systems. Flux-vector inverter drives are used to investigate their acceptability for damped-oscillation crane control. The purpose of this paper is to describe the experimental implementation of a control algorithm on a full-sized, two-degree-of-freedom industrial crane; to describe the experimental evaluation of the controller, including robustness to payload length changes; to explain the results of experiments designed to determine the hardware required for implementation of the control algorithms; and to provide a theoretical description of the controller.

  10. Development and Implementation of a Hardware In-the-Loop Test Bed for Unmanned Aerial Vehicle Control Algorithms

    NASA Technical Reports Server (NTRS)

    Nyangweso, Emmanuel; Bole, Brian

    2014-01-01

    Successful prediction and management of battery life using prognostic algorithms through ground and flight tests is important for performance evaluation of electrical systems. This paper details the design of test beds suitable for replicating loading profiles that would be encountered in deployed electrical systems. The test bed data will be used to develop and validate prognostic algorithms for predicting battery discharge time and battery failure time. Online battery prognostic algorithms will enable health management strategies. The platform used for algorithm demonstration is the EDGE 540T electric unmanned aerial vehicle (UAV). The fully designed test beds developed and detailed in this paper can be used to conduct battery life tests by controlling current and recording voltage and temperature to develop a model that makes a prediction of end-of-charge and end-of-life of the system based on rapid state of health (SOH) assessment.

  11. Implementation of a Distributed Adaptive Routing Algorithm on the Intel iPSC (Intel Personal Super-Computer).

    DTIC Science & Technology

    1987-12-01

    The algorithms studied are a research network developed by K. Brayer of the Mitre Corporation at Rome Air Development Center, and Digital Equipment Corporation's (DEC) Digital Network Architecture (DNA). ... K. Brayer of the Mitre Corporation developed a research packet switch system that is loop-free and survivable. His algorithm is divided into a ...

  12. Implementation of the near-field signal redundancy phase-aberration correction algorithm on two-dimensional arrays.

    PubMed

    Li, Yue; Robinson, Brent

    2007-01-01

    Near-field signal-redundancy (NFSR) algorithms for phase-aberration correction have been proposed and experimentally tested for linear and phased one-dimensional arrays. In this paper the performance of an all-row-plus-two-column, two-dimensional algorithm has been analyzed and tested with simulated data sets. This algorithm applies the NFSR algorithm for one-dimensional arrays to all the rows as well as the first and last columns of the array. The results from the two column measurements are used to derive a linear term for each row measurement result. These linear terms then are incorporated into the row results to obtain a two-dimensional phase aberration profile. The ambiguity phase aberration profile, which is the difference between the true and the derived phase aberration profiles, of this algorithm is not linear. Two methods, a trial-and-error method and a diagonal-measurement method, are proposed to linearize the ambiguity profile. The performance of these algorithms is analyzed and tested with simulated data sets.

  13. Effective elastoplastic behavior of two-phase ductile matrix composites: Return mapping algorithm and finite element implementation

    SciTech Connect

    Ju, J.W.; Tseng, K.H.

    1995-12-31

    A discrete numerical integration algorithm is employed to integrate rate equations in the effective elastoplastic model for particle reinforced ductile matrix composites based on probabilistic micromechanical formulations. In particular, the unconditionally stable implicit backward Euler integration algorithm is formulated for elastoplasticity of particle reinforced plastic matrix composites. In addition to the local integration algorithm, in nonlinear finite element methods for boundary value problems, tangent moduli are needed for the global Newton's iterations. For this purpose, the continuum tangent operator based on the continuous rate equations is derived. In order to preserve the quadratic rate of convergence, the consistent tangent operator is constructed based on the proposed backward Euler integration algorithm. The elastoplastic model is further extended to accommodate the effect of viscosity in the matrix. The extension is based on the method of Duvaut-Lions viscoplasticity. The local integration algorithm and the consistent tangent operator are formulated for particle reinforced viscoplastic matrix composites. Numerical experiments are performed to assess the capability of the proposed integration algorithm and the convergence behavior of various tangent moduli.
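
    For readers unfamiliar with return mapping, the sketch below shows the classic backward-Euler radial return for J2 (von Mises) plasticity with linear isotropic hardening, a deliberately simplified stand-in for the composite model above; shear modulus G, initial yield stress sigma_y0, and hardening modulus H are assumed inputs.

        import numpy as np

        def radial_return(eps_e_trial_dev, ep_bar, G, sigma_y0, H):
            """One backward-Euler (radial return) step, deviatoric part only.

            eps_e_trial_dev : deviatoric elastic trial strain (3x3)
            ep_bar          : accumulated equivalent plastic strain
            """
            s_trial = 2.0 * G * eps_e_trial_dev             # deviatoric trial stress
            norm_s = np.linalg.norm(s_trial)
            f_trial = norm_s - np.sqrt(2.0 / 3.0) * (sigma_y0 + H * ep_bar)
            if f_trial <= 0.0:                              # elastic: trial state is final
                return s_trial, ep_bar
            dgamma = f_trial / (2.0 * G + 2.0 * H / 3.0)    # consistency parameter
            n = s_trial / norm_s                            # radial return direction
            s = s_trial - 2.0 * G * dgamma * n              # project back to yield surface
            return s, ep_bar + np.sqrt(2.0 / 3.0) * dgamma

    The consistent tangent mentioned in the abstract is obtained by differentiating exactly this discrete update with respect to the trial strain, which is what preserves the quadratic convergence of the global Newton iterations.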

  14. Implementation of Digital Signature Using AES and RSA Algorithms as Security in a Letter Disposition System

    NASA Astrophysics Data System (ADS)

    Siregar, H.; Junaeti, E.; Hayatno, T.

    2017-03-01

    Correspondence is a routine activity in agencies and companies, which often set up a dedicated division to handle letter management. Because most letters are distributed through electronic media, they must be kept confidential in order to avoid undesirable outcomes. Security can be provided by using cryptography or by applying a digital signature. In this study, asymmetric and symmetric algorithms, namely RSA and AES, were added to the digital signature scheme to maintain data security: the RSA algorithm was used to generate the digital signature, while the AES algorithm was used to encrypt the message sent to the receiver. Based on the research, it can be concluded that adding the AES and RSA algorithms to the digital signature meets four objectives of cryptography: secrecy, data integrity, authentication, and non-repudiation.
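
    A minimal sketch of the sign-then-encrypt flow described above, using the third-party Python 'cryptography' package; the key handling and the sample letter are illustrative assumptions, not the authors' implementation.

        from cryptography.fernet import Fernet                  # AES-based symmetric scheme
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa, padding

        sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        letter = b"Disposition: forward to the division head for approval."

        # Sender: RSA signature over the letter (authentication, non-repudiation) ...
        signature = sender_key.sign(
            letter,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        # ... then AES encryption of the letter body for transmission (secrecy).
        aes_key = Fernet.generate_key()                         # shared out of band
        ciphertext = Fernet(aes_key).encrypt(letter)

        # Receiver: decrypt, then verify (raises InvalidSignature on tampering).
        recovered = Fernet(aes_key).decrypt(ciphertext)
        sender_key.public_key().verify(
            signature, recovered,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )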

  15. OpenCL-library-based implementation of SCLSU algorithm for remotely sensed hyperspectral data exploitation: clMAGMA versus viennaCL

    NASA Astrophysics Data System (ADS)

    Bernabé, Sergio; Botella, Guillermo; Orueta, Carlos; Navarro, José M. R.; Prieto-Matías, Manuel; Plaza, Antonio

    2016-10-01

    In the last decade, hyperspectral spectral unmixing (HSU) analysis has been applied in many remote sensing applications. For this process, the linear mixture model (LMM) has been the most popular tool used to find pure spectral constituents or endmembers and their fractional abundance in each pixel of the data set. The unmixing process consists of three stages: (i) estimation of the number of pure spectral signatures or endmembers, (ii) automatic identification of the estimated endmembers, and (iii) estimation of the fractional abundance of each endmember in each pixel of the scene. However, unmixing algorithms can be very expensive computationally, a fact that compromises their use in applications under real-time constraints. This is mainly due to the last two stages in the unmixing process, which are the most computationally demanding ones. In this work, we propose parallel OpenCL-library-based implementations of the sum-to-one constrained least squares unmixing (P-SCLSU) algorithm to estimate the per-pixel fractional abundances by using mathematical libraries such as clMAGMA or ViennaCL. To the best of our knowledge, this kind of analysis using OpenCL libraries has not been previously conducted in the hyperspectral imaging processing literature, and in our opinion it is very important in order to achieve efficient implementations using parallel routines. The efficacy of our proposed implementations is demonstrated through Monte Carlo simulations for real data experiments and using high performance computing (HPC) platforms such as commodity graphics processing units (GPUs).
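
    The SCLSU stage itself is compact: the unconstrained least-squares abundances are corrected with a Lagrange multiplier so they sum to one. A serial NumPy sketch for a single pixel is given below; the paper's contribution is the parallel OpenCL version, which is not reproduced here.

        import numpy as np

        def sclsu(E, x):
            """Sum-to-one constrained least-squares abundances for one pixel.

            E : (bands x endmembers) endmember signature matrix
            x : (bands,) pixel spectrum
            """
            EtE_inv = np.linalg.inv(E.T @ E)
            a_ls = EtE_inv @ (E.T @ x)               # unconstrained least squares
            ones = np.ones(E.shape[1])
            # Lagrange correction enforcing sum(a) == 1
            return a_ls + EtE_inv @ ones * (1.0 - ones @ a_ls) / (ones @ EtE_inv @ ones)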

  16. Efficient implementation of the Metropolis-Hastings algorithm, with application to the Cormack-Jolly-Seber model

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    2008-01-01

    Judicious choice of candidate generating distributions improves efficiency of the Metropolis-Hastings algorithm. In Bayesian applications, it is sometimes possible to identify an approximation to the target posterior distribution; this approximate posterior distribution is a good choice for candidate generation. These observations are applied to analysis of the Cormack-Jolly-Seber model and its extensions. © Springer Science+Business Media, LLC 2007.
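
    The recommendation translates directly into an independence sampler: candidates are drawn from the fixed approximate posterior q, and the acceptance ratio weighs the target against q at both the current and the proposed point. A toy sketch follows, with a normal target and a deliberately mismatched normal candidate distribution, both assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(1)

        def independence_mh(log_target, draw_candidate, candidate_logpdf, n, x0):
            """Metropolis-Hastings with a fixed candidate-generating distribution."""
            x, out = x0, np.empty(n)
            for i in range(n):
                xp = draw_candidate()
                log_a = (log_target(xp) + candidate_logpdf(x)
                         - log_target(x) - candidate_logpdf(xp))
                if np.log(rng.uniform()) < log_a:    # accept with prob min(1, a)
                    x = xp
                out[i] = x
            return out

        # Target N(0,1); candidate N(0.5, 1.5^2) plays the "approximate posterior".
        draws = independence_mh(lambda z: -0.5 * z * z,
                                lambda: rng.normal(0.5, 1.5),
                                lambda z: -0.5 * ((z - 0.5) / 1.5) ** 2,
                                n=20000, x0=0.0)
        print(draws.mean(), draws.std())             # approximately 0 and 1

    The closer the candidate distribution is to the target, the closer the acceptance rate gets to one, which is the efficiency gain the abstract refers to.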

  17. Implementation of an Evidence-Based Content Validated Standardized Ostomy Algorithm Tool in Home Care: A Quality Improvement Project.

    PubMed

    Bare, Kimberly; Drain, Jerri; Timko-Progar, Monica; Stallings, Bobbie; Smith, Kimberly; Ward, Naomi; Wright, Sandra

    2017-03-22

    Many nurses have limited experience with ostomy management. We sought to provide a standardized approach to ostomy education and management to support nurses in early identification of stomal and peristomal complications, pouching problems, and provide standardized solutions for managing ostomy care in general while improving utilization of formulary products. This article describes development and testing of an ostomy algorithm tool.

  18. Implementations of the optimal multigrid algorithm for the cell-centered finite difference on equilateral triangular grids

    SciTech Connect

    Ewing, R.E.; Saevareid, O.; Shen, J.

    1994-12-31

    A multigrid algorithm for the cell-centered finite difference on equilateral triangular grids for solving second-order elliptic problems is proposed. This finite difference is a four-point star stencil in a two-dimensional domain and a five-point star stencil in a three-dimensional domain. According to the authors' analysis, the advantages of this finite difference are that it is an O(h^2)-order accurate numerical scheme for both the solution and derivatives on equilateral triangular grids, the structure of the scheme is perhaps the simplest, and its corresponding multigrid algorithm is easily constructed with an optimal convergence rate. They are interested in relaxation of the equilateral triangular grid condition to certain general triangular grids and the application of this multigrid algorithm as a numerically reasonable preconditioner for the lowest-order Raviart-Thomas mixed triangular finite element method. Numerical test results are presented to demonstrate their analytical results and to investigate the applications of this multigrid algorithm on general triangular grids.

  19. Comparative analysis of different implementations of a parallel algorithm for automatic target detection and classification of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Paz, Abel; Plaza, Antonio; Plaza, Javier

    2009-08-01

    Automatic target detection in hyperspectral images is a task that has attracted a lot of attention recently. In the last few years, several algorithms have been developed for this purpose, including the well-known RX algorithm for anomaly detection, or the automatic target detection and classification algorithm (ATDCA), which uses an orthogonal subspace projection (OSP) approach to extract a set of spectrally distinct targets automatically from the input hyperspectral data. Depending on the complexity and dimensionality of the analyzed image scene, the target/anomaly detection process may be computationally very expensive, a fact that limits the possibility of utilizing this process in time-critical applications. In this paper, we develop computationally efficient parallel versions of both the RX and ATDCA algorithms for near real-time exploitation of these algorithms. In the case of ATGP, we use several distance metrics in addition to the OSP approach. The parallel versions are quantitatively compared in terms of target detection accuracy, using hyperspectral data collected by NASA's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) over the World Trade Center in New York, five days after the terrorist attack of September 11th, 2001, and also in terms of parallel performance, using a massively parallel Beowulf cluster available at NASA's Goddard Space Flight Center in Maryland.
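
    The RX detector at the heart of this comparison is simple to state: each pixel is scored by the Mahalanobis distance of its spectrum from the scene-wide background statistics. A serial NumPy sketch follows; the paper's parallel versions are not reproduced.

        import numpy as np

        def rx_scores(cube):
            """Global RX anomaly scores for a (rows, cols, bands) hyperspectral cube."""
            X = cube.reshape(-1, cube.shape[-1]).astype(float)
            mu = X.mean(axis=0)                                 # background mean spectrum
            cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))   # pseudo-inverse for stability
            D = X - mu
            scores = np.einsum('ij,jk,ik->i', D, cov_inv, D)    # Mahalanobis distances
            return scores.reshape(cube.shape[:2])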

  20. Template characterization and correlation algorithm created from segmentation for the iris biometric authentication based on analysis of textures implemented on a FPGA

    NASA Astrophysics Data System (ADS)

    Giacometto, F. J.; Vilardy, J. M.; Torres, C. O.; Mattos, L.

    2011-01-01

    Among the biometric signals most widely used to set personal security permissions, iris recognition based on texture and blood-vessel images is taking on increasing importance, because these two characteristics are rich in information and unique to each individual. This paper presents an FPGA (Field Programmable Gate Array) implementation of a template characterization and correlation algorithm for biometric authentication based on iris texture analysis; authentication relies on characterization methods based on frequency analysis of the sample, and on frequency correlation to obtain the expected authentication results.

  1. Mystic: Implementation of the Static Dynamic Optimal Control Algorithm for High-Fidelity, Low-Thrust Trajectory Design

    NASA Technical Reports Server (NTRS)

    Whiffen, Gregory J.

    2006-01-01

    Mystic software is designed to compute, analyze, and visualize optimal high-fidelity, low-thrust trajectories. The software can be used to analyze interplanetary, planetocentric, and combination trajectories. Mystic also provides utilities to assist in the operation and navigation of low-thrust spacecraft. Mystic will be used to design and navigate NASA's Dawn Discovery mission to orbit the two largest asteroids. The underlying optimization algorithm used in the Mystic software is called Static/Dynamic Optimal Control (SDC). SDC is a nonlinear optimal control method designed to optimize both 'static variables' (parameters) and dynamic variables (functions of time) simultaneously. SDC is a general nonlinear optimal control algorithm based on Bellman's principle.

  2. Exploration of the optimisation algorithms used in the implementation of adaptive optics in confocal and multiphoton microscopy.

    PubMed

    Wright, Amanda J; Burns, David; Patterson, Brett A; Poland, Simon P; Valentine, Gareth J; Girkin, John M

    2005-05-01

    We report on the introduction of active optical elements into confocal and multiphoton microscopes in order to reduce the sample-induced aberration. Using a flexible membrane mirror as the active element, the beam entering the rear of the microscope objective is altered to produce the smallest point spread function once it is brought to a focus inside the sample. The conventional approach to adaptive optics, commonly used in astronomy, is to utilise a wavefront sensor to determine the required mirror shape. We have developed a technique that uses optimisation algorithms to improve the returned signal without the use of a wavefront sensor. We have investigated a number of possible optimisation methods, covering hill climbing, genetic algorithms, and more random search methods. The system has demonstrated a significant enhancement in the axial resolution of a confocal microscope when imaging at depth within a sample. We discuss the trade-offs of the various approaches adopted, comparing speed with resolution enhancement.
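
    Of the optimisation methods compared, hill climbing is the simplest to sketch: perturb one deformable-mirror actuator at a time and keep the change only if the detected signal improves. In the sketch below the metric function is a stand-in for the measured confocal intensity, and the actuator count is an assumption.

        import numpy as np

        rng = np.random.default_rng(0)

        def hill_climb(metric, n_actuators=37, iters=500, step=0.05):
            """Sensorless wavefront optimisation by random single-actuator moves."""
            v = np.zeros(n_actuators)              # mirror actuator commands
            best = metric(v)
            for _ in range(iters):
                trial = v.copy()
                trial[rng.integers(n_actuators)] += rng.normal(0.0, step)
                score = metric(trial)
                if score > best:                   # keep only improvements
                    v, best = trial, score
            return v, best

        # Toy quadratic metric with a known optimum, standing in for the microscope.
        target = rng.normal(size=37)
        v, best = hill_climb(lambda u: -np.sum((u - target) ** 2))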

  3. Implementation of a Cyclostationary Spectral Analysis Algorithm on an SRC Reconfigurable Computer for Real-Time Signal Processing

    DTIC Science & Technology

    2008-03-01

    system designed to detect and classify LPI radar signals. Recent developments in radar technology include the employment of LPI capabilities which are...designed to make the task of intercepting radar signals more difficult. Using complex algorithms to process received LPI signals can result in threat...battlefield. Recent developments in radar technology include the employment of low probability of interception ( LPI ) and low probability of detection

  4. An algorithm used for quality criterion automatic measurement of band-pass filters and its device implementation

    NASA Astrophysics Data System (ADS)

    Liu, Qianshun; Liu, Yan; Yu, Feihong

    2013-08-01

    As a kind of thin-film device, the band-pass filter is widely used in pattern recognition, infrared detection, optical fiber communication, etc. In this paper, an algorithm for automatic measurement of band-pass filter quality criteria is proposed, based on the established formula for calculating the spectral transmittance of a filter. First, a wavelet transform is used to reduce noise in the spectrum data. Second, combining Gaussian curve fitting with the least-squares method, the algorithm fits the spectrum curve and searches for the peak. Finally, the parameters for judging band-pass filter quality are computed. Based on the algorithm, a pipelined automatic measurement system for band-pass filters has been designed that scans the filter array automatically and displays the spectral transmittance of each filter. At the same time, the system compares the measurement result with user-defined standards to determine whether the filter is qualified. Qualified products are marked in green, and unqualified products are marked in red. Experimental verification shows that the automatic measurement system achieves comprehensive, accurate, and rapid measurement of band-pass filter quality and meets the expected results.
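
    The core of the measurement, Gaussian fitting of the passband followed by extraction of quality parameters, can be sketched with SciPy; the wavelet denoising stage is omitted, and the pass/fail thresholds are assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(w, peak, center, sigma):
            return peak * np.exp(-0.5 * ((w - center) / sigma) ** 2)

        def filter_criteria(wavelength, transmittance):
            """Fit the passband and report peak transmittance, center and FWHM."""
            p0 = [transmittance.max(), wavelength[np.argmax(transmittance)], 5.0]
            (peak, center, sigma), _ = curve_fit(gaussian, wavelength, transmittance, p0=p0)
            return {"peak": peak, "center": center,
                    "fwhm": 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)}

        # Synthetic passband; a rule such as peak > 0.9 would mark the filter green.
        w = np.linspace(600, 700, 400)
        rng = np.random.default_rng(2)
        t = gaussian(w, 0.93, 652.0, 4.5) + 0.005 * rng.normal(size=w.size)
        print(filter_criteria(w, t))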

  5. Implementation of advanced feedback control algorithms for controlled resonant magnetic perturbation physics studies on EXTRAP T2R

    NASA Astrophysics Data System (ADS)

    Frassinetti, L.; Olofsson, K. E. J.; Brunsell, P. R.; Drake, J. R.

    2011-06-01

    The EXTRAP T2R feedback system (active coils, sensor coils and controller) is used to study and develop new tools for advanced control of the MHD instabilities in fusion plasmas. New feedback algorithms developed in EXTRAP T2R reversed-field pinch allow flexible and independent control of each magnetic harmonic. Methods developed in control theory and applied to EXTRAP T2R allow a closed-loop identification of the machine plant and of the resistive wall modes growth rates. The plant identification is the starting point for the development of output-tracking algorithms which enable the generation of external magnetic perturbations. These algorithms will then be used to study the effect of a resonant magnetic perturbation (RMP) on the tearing mode (TM) dynamics. It will be shown that the stationary RMP can induce oscillations in the amplitude and jumps in the phase of the rotating TM. It will be shown that the RMP strongly affects the magnetic island position.

  6. The implementation of read-in and decoding a number based on Grover's algorithm in Mn4 SMM

    NASA Astrophysics Data System (ADS)

    Yin, W.; Liang, J. Q.; Yan, Q. W.

    2006-04-01

    Single-molecule magnets (SMM) Mn12 with non-equidistant energy levels are considered potential systems for performing Grover's algorithm [M.N. Leuenberger, D. Loss, Nature (London) 410 (2001) 789]. Following Leuenberger and Loss, by means of the numerical exact solution to the time-dependent Schrödinger equation, we study the process of read-in and decoding a number based on Grover's algorithm using Mn4 SMM with a half-integer ground state s=9/2. Our results show that the number we decode depends on the amplitude and the duration of the external magnetic a.c. field as well as its relative phases. Moreover, the second-order transverse term E(S+^2 + S-^2) in the spin Hamiltonian does not bring a great change to performing Grover's algorithm based on the scheme proposed by Leuenberger and Loss. However, it will suppress the spin excitation induced by the magnetic a.c. field.

  7. Bayesian Algorithm Implementation in a Real Time Exposure Assessment Model on Benzene with Calculation of Associated Cancer Risks

    PubMed Central

    Sarigiannis, Dimosthenis A.; Karakitsios, Spyros P.; Gotti, Alberto; Papaloukas, Costas L.; Kassomenos, Pavlos A.; Pilidis, Georgios A.

    2009-01-01

    The objective of the current study was the development of a reliable modeling platform to calculate in real time the personal exposure and the associated health risk for filling station employees, evaluating current environmental parameters (traffic, meteorology, and amount of fuel traded) determined by the appropriate sensor network. A set of Artificial Neural Networks (ANNs) was developed to predict the benzene exposure pattern for the filling station employees. Furthermore, a Physiology Based Pharmaco-Kinetic (PBPK) risk assessment model was developed in order to calculate the lifetime probability distribution of leukemia for the employees, fed by data obtained from the ANN model. A Bayesian algorithm was involved at crucial points of both model subcompartments. The application was evaluated in two filling stations (one urban and one rural). Among several algorithms available for the development of the ANN exposure model, Bayesian regularization provided the best results and seemed to be a promising technique for prediction of the exposure pattern of that occupational population group. On assessing the estimated leukemia risk under the scope of providing a distribution curve based on the exposure levels and the different susceptibility of the population, the Bayesian algorithm was a prerequisite of the Monte Carlo approach, which is integrated in the PBPK-based risk model. In conclusion, the modeling system described herein is capable of exploiting the information collected by the environmental sensors in order to estimate in real time the personal exposure and the resulting health risk for employees of gasoline filling stations. PMID:22399936

  8. Decomposition of MATLAB script for FPGA implementation of real time simulation algorithms for LLRF system in European XFEL

    NASA Astrophysics Data System (ADS)

    Bujnowski, K.; Pucyk, P.; Pozniak, K. T.; Romaniuk, R. S.

    2008-01-01

    The European XFEL project uses the LLRF system for stabilization of a vector sum of the RF field in 32 superconducting cavities. Dedicated high-performance photonics, electronics, and software were built. To provide high system availability, an appropriate test environment and diagnostics were designed. A real-time simulation subsystem was designed which is based on dedicated electronics using FPGA technology and robust simulation models implemented in VHDL. The paper presents an architecture of the system framework which allows for easy and flexible conversion of MATLAB language structures directly into an FPGA-implementable grid of parameterized and simple DSP processors. The decomposition of MATLAB grammar is described, as well as the optimization process and FPGA implementation issues.

  9. The REFLEX project: Comparing different algorithms and implementations for the inversion of a terrestrial ecosystem model against eddy covariance data

    SciTech Connect

    Fox, Andrew; Williams, Mathew; Richardson, Andrew D.; Cameron, David; Gove, Jeffrey H.; Quaife, Tristan; Ricciuto, Daniel M; Reichstein, Markus; Tomelleri, Enrico; Trudinger, Cathy; Van Wijk, Mark T.

    2009-10-01

    We describe a model-data fusion (MDF) inter-comparison project (REFLEX), which compared various algorithms for estimating carbon (C) model parameters consistent with both measured carbon fluxes and states and a simple C model. Participants were provided with the model and with both synthetic net ecosystem exchange (NEE) of CO2 and leaf area index (LAI) data, generated from the model with added noise, and observed NEE and LAI data from two eddy covariance sites. Participants endeavoured to estimate model parameters and states consistent with the model for all cases over the two years for which data were provided, and generate predictions for one additional year without observations. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. For the synthetic data case, parameter estimates compared well with the true values. The results of the analyses indicated that parameters linked directly to gross primary production (GPP) and ecosystem respiration, such as those related to foliage allocation and turnover, or temperature sensitivity of heterotrophic respiration, were best constrained and characterised. Poorly estimated parameters were those related to the allocation to and turnover of fine root/wood pools. Estimates of confidence intervals varied among algorithms, but several algorithms successfully located the true values of annual fluxes from synthetic experiments within relatively narrow 90% confidence intervals, achieving >80% success rate and mean NEE confidence intervals <110 g C m-2 year-1 for the synthetic case. Annual C flux estimates generated by participants generally agreed with gap-filling approaches using half-hourly data. The estimation of ecosystem respiration and GPP through MDF agreed well with outputs from partitioning studies using half-hourly data. Confidence limits on annual NEE increased by an average of 88% in the prediction year compared to the previous year, when data were available. Confidence

  10. Demonstration of Inexact Computing Implemented in the JPEG Compression Algorithm using Probabilistic Boolean Logic applied to CMOS Components

    DTIC Science & Technology

    2015-12-24

    competitive on the basis of speed, power, and noise immunity. This circuit is used at 3V and 15mW, but is rated at 8 V and 100 mW. Unpowered...algorithm using these inexact adders and multipliers. In the interests of high-speed simulation and of collecting large sample sizes, these simulations were... For most applications, the ripple-carry adder is the slowest adder architecture. There are many adders which are optimized for speed, for

  11. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: I. Model implementation

    SciTech Connect

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-24

    We explore optimization methods for planning the placement, sizing and operations of Flexible Alternating Current Transmission System (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to Series Compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of Linear Programs (LP) which are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed up that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically-sized networks that suffer congestion from a range of causes including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically-sized network.
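
    The key reduction above, an l1 objective handled by a linear program, uses the standard positive/negative split of the decision variable. The sketch below shows one such LP with SciPy on a toy "congestion relief" system; the matrices are random placeholders, not grid data.

        import numpy as np
        from scipy.optimize import linprog

        def l1_min_relief(S, r):
            """min ||x||_1  s.t.  S @ x >= r, via x = xp - xm with xp, xm >= 0."""
            m, n = S.shape
            c = np.ones(2 * n)                        # objective: sum(xp) + sum(xm)
            A_ub = -np.hstack([S, -S])                # -S(xp - xm) <= -r
            res = linprog(c, A_ub=A_ub, b_ub=-r,
                          bounds=[(0, None)] * (2 * n), method="highs")
            return res.x[:n] - res.x[n:]

        rng = np.random.default_rng(3)
        S = rng.normal(size=(5, 12))                  # line sensitivities (placeholder)
        r = rng.uniform(0.1, 0.5, size=5)             # required congestion relief per line
        print(np.round(l1_min_relief(S, r), 3))       # typically a sparse vector

    Sparsity of the minimiser is exactly the behaviour reported above: SC devices end up on only a few lines.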

  12. Efficient Algorithm for Locating and Sizing Series Compensation Devices in Large Transmission Grids: Model Implementation (PART 1)

    SciTech Connect

    Frolov, Vladimir; Backhaus, Scott N.; Chertkov, Michael

    2014-01-14

    We explore optimization methods for planning the placement, sizing and operations of Flexible Alternating Current Transmission System (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to Series Compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of Linear Programs (LP) which are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed up that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically-sized networks that suffer congestion from a range of causes including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically-sized network.

  13. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: I. Model implementation

    DOE PAGES

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-24

    We explore optimization methods for planning the placement, sizing and operations of Flexible Alternating Current Transmission System (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to Series Compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of Linear Programs (LP) which are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed up that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically-sized networks that suffer congestion from a range of causes including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically-sized network.

  14. Implementation of an orographic/nonorographic rainfall classification scheme in the GSMaP algorithm for microwave radiometers

    NASA Astrophysics Data System (ADS)

    Yamamoto, Munehisa K.; Shige, Shoichi

    2015-09-01

    We incorporate an orographic/nonorographic rainfall classification scheme into the Global Satellite Mapping of Precipitation algorithm for passive microwave radiometers. It improves rainfall estimation over the entire Asian region. However, verification scores over the United States and Mexico are low, because vertical profiles of rainfall over the Sierra Madre Mountains extend to high altitudes even under orographic rainfall conditions. In this region, lightning activity is vigorous, with large amounts of solid particles such as graupel occurring with strong convection. Hence, the orographic/nonorographic rainfall classification scheme is switched off for regions where strong lightning activity occurs in the rainfall type database. The revised zonal mean rainfall amounts obtained from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager are now in better agreement with those from version 7 of the TRMM Precipitation Radar over the United States and Mexico, as well as Asia.

  15. Description of Selected Algorithms and Implementation Details of a Concept-Demonstration Aircraft VOrtex Spacing System (AVOSS)

    NASA Technical Reports Server (NTRS)

    Hinton, David A.

    2001-01-01

    A ground-based system has been developed to demonstrate the feasibility of automating the process of collecting relevant weather data, predicting wake vortex behavior from a data base of aircraft, prescribing safe wake vortex spacing criteria, estimating system benefit, and comparing predicted and observed wake vortex behavior. This report describes many of the system algorithms, features, limitations, and lessons learned, as well as suggested system improvements. The system has demonstrated concept feasibility and the potential for airport benefit. Significant opportunities exist however for improved system robustness and optimization. A condensed version of the development lab book is provided along with samples of key input and output file types. This report is intended to document the technical development process and system architecture, and to augment archived internal documents that provide detailed descriptions of software and file formats.

  16. Deformable image registration with a featurelet algorithm: implementation as a 3D-slicer extension and validation

    NASA Astrophysics Data System (ADS)

    Renner, A.; Furtado, H.; Seppenwoolde, Y.; Birkfellner, W.; Georg, D.

    2016-03-01

    A radiotherapy (RT) treatment can last for several weeks. In that time, organ motion and shape changes introduce uncertainty in dose application. Monitoring and quantifying the change can yield a more precise irradiation margin definition and thereby reduce dose delivery to healthy tissue and adjust tumor targeting. Deformable image registration (DIR) has the potential to fulfill this task by calculating a deformation field (DF) between a planning CT and a repeated CT of the altered anatomy. Application of the DF to the original contours yields new contours that can be used for an adapted treatment plan. DIR is a challenging method and therefore needs careful user interaction. Without a proper graphical user interface (GUI), a misregistration cannot be easily detected by visual inspection and the results cannot be fine-tuned by changing registration parameters. To provide a DIR algorithm with such a GUI available for everyone, we created the extension Featurelet-Registration for the open source software platform 3D Slicer. The registration logic is an upgrade of an in-house-developed DIR method, which is a featurelet-based piecewise rigid registration. The so-called "featurelets" are equally sized rectangular subvolumes of the moving image which are rigidly registered to rectangular search regions on the fixed image. The output is a deformed image and a deformation field. Both can be visualized directly in 3D Slicer, facilitating the interpretation and quantification of the results. For validation of the registration accuracy two deformable phantoms were used. The performance was benchmarked against a demons algorithm with comparable results.
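
    In its simplest translation-only 2-D form, the featurelet idea reduces to block matching: each subvolume of the moving image is slid over a search region of the fixed image and assigned the offset with the best correlation. The sketch below is that simplification, not the full piecewise rigid 3-D method of the paper.

        import numpy as np

        def match_featurelet(moving, fixed, top_left, size=16, search=8):
            """Return the (dr, dc) offset maximising normalised cross-correlation."""
            r, c = top_left
            patch = moving[r:r + size, c:c + size].astype(float)
            best, best_off = -np.inf, (0, 0)
            for dr in range(-search, search + 1):
                for dc in range(-search, search + 1):
                    rr, cc = r + dr, c + dc
                    if (rr < 0 or cc < 0 or rr + size > fixed.shape[0]
                            or cc + size > fixed.shape[1]):
                        continue                      # search window leaves the image
                    ref = fixed[rr:rr + size, cc:cc + size].astype(float)
                    den = patch.std() * ref.std() * patch.size
                    if den == 0:
                        continue                      # flat region, correlation undefined
                    score = ((patch - patch.mean()) * (ref - ref.mean())).sum() / den
                    if score > best:
                        best, best_off = score, (dr, dc)
            return best_off   # displacement assigned to this featurelet's voxels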

  17. Highly efficient and exact method for parallelization of grid-based algorithms and its implementation in DelPhi.

    PubMed

    Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil

    2012-09-15

    The Gauss-Seidel (GS) method is a standard iterative numerical method widely used to solve a system of equations and, in general, is more efficient compared to other iterative methods, such as the Jacobi method. However, standard implementation of the GS method restricts its utilization in parallel computing due to its requirement of using updated neighboring values (i.e., in the current iteration) as soon as they are available. Here, we report an efficient and exact (not requiring assumptions) method to parallelize iterations and to reduce the computational time as a linear/nearly linear function of the number of processes or computing units. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable for solving linear and nonlinear equations. This approach is implemented in the DelPhi program, which is a finite difference Poisson-Boltzmann equation solver to model electrostatics in molecular biology. This development makes the iterative procedure on obtaining the electrostatic potential distribution in the parallelized DelPhi several folds faster than that in the serial code. Further, we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures.
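
    For contrast with the exact approach above, the textbook workaround for parallelising Gauss-Seidel is red-black ordering, in which points of one colour depend only on points of the other colour, so each half-sweep parallelises trivially. A NumPy sketch for the 2-D Poisson problem follows; this is the standard reordering, not the authors' assumption-free method.

        import numpy as np

        def gauss_seidel_redblack(u, f, h, sweeps=100):
            """Red-black Gauss-Seidel sweeps for -laplace(u) = f on a square grid."""
            for _ in range(sweeps):
                for colour in (0, 1):                # update one colour at a time
                    for i in range(1, u.shape[0] - 1):
                        j0 = 1 + (i + colour) % 2    # first interior cell of this colour
                        u[i, j0:-1:2] = 0.25 * (u[i - 1, j0:-1:2] + u[i + 1, j0:-1:2]
                                                + u[i, j0 - 1:-2:2] + u[i, j0 + 1::2]
                                                + h * h * f[i, j0:-1:2])
            return u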

  18. Highly efficient and exact method for parallelization of grid-based algorithms and its implementation in DelPhi

    PubMed Central

    Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil

    2012-01-01

    The Gauss-Seidel method is a standard iterative numerical method widely used to solve a system of equations and, in general, is more efficient compared to other iterative methods, such as the Jacobi method. However, standard implementation of the Gauss-Seidel method restricts its utilization in parallel computing due to its requirement of using updated neighboring values (i.e., in the current iteration) as soon as they are available. Here we report an efficient and exact (not requiring assumptions) method to parallelize iterations and to reduce the computational time as a linear/nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable for solving linear and nonlinear equations. This approach is implemented in the DelPhi program, which is a finite difference Poisson-Boltzmann equation solver to model electrostatics in molecular biology. This development makes the iterative procedure on obtaining the electrostatic potential distribution in the parallelized DelPhi several folds faster than that in the serial code. Further we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures. PMID:22674480

  19. The design and implementation of a low-cost GPS-MEMS/INS precision approach algorithm with health monitoring

    NASA Astrophysics Data System (ADS)

    Anderson, Richard P.

    An algorithm for precision approach guidance using GPS and a MicroElectroMechanical Systems/Inertial Navigation System (MEMS/INS) has been developed to meet the Required Navigational Performance (RNP) at a cost that is suitable for General Aviation (GA) applications. This scheme allows for accurate approach guidance (Category I) using the Wide Area Augmentation System (WAAS) at locations not served by ILS, MLS or other types of precision landing guidance, thereby greatly expanding the number of usable airports in poor weather. At locations served by a Local Area Augmentation System (LAAS), Category III-like navigation is possible with the novel idea of a Missed Approach Time (MAT) that is similar to a Missed Approach Point (MAP) but not fixed in space. Though certain augmented types of GPS have sufficient precision for approach navigation, their use alone is insufficient to meet RNP due to an inability to monitor loss, degradation or intentional spoofing and meaconing of the GPS signal. A redundant navigation system and a health monitoring system must be added to acquire sufficient reliability, safety and time-to-alert as stated by required navigation performance. An inertial navigation system is the best choice, as it requires no external radio signals and its errors are complementary to GPS. An aiding Kalman filter is used to derive parameters that monitor the correlation between the GPS and MEMS/INS. These approach guidance parameters determine the MAT for a given RNP and provide the pilot or autopilot with a proceed/do-not-proceed decision in real time. The enabling technology used to derive the guidance program is a MEMS gyroscope and accelerometer package in conjunction with a single-antenna pseudo-attitude algorithm. To be viable for most GA applications, the hardware must be reasonably priced. The MEMS gyros allow for the first cost-effective INS package to be developed. With lower cost, however, come higher drift rates and more dependence on GPS aiding. In

  20. Accelerating object detection via a visual-feature-directed search cascade: algorithm and field programmable gate array implementation

    NASA Astrophysics Data System (ADS)

    Kyrkou, Christos; Theocharides, Theocharis

    2016-07-01

    Object detection is a major step in several computer vision applications and a requirement for most smart camera systems. Recent advances in hardware acceleration for real-time object detection feature extensive use of reconfigurable hardware [field programmable gate arrays (FPGAs)], and relevant research has produced quite fascinating results, in both the accuracy of the detection algorithms as well as the performance in terms of frames per second (fps) for use in embedded smart camera systems. Detecting objects in images, however, is a daunting task and often involves hardware-inefficient steps, both in terms of the datapath design and in terms of input/output and memory access patterns. We present how a visual-feature-directed search cascade composed of motion detection, depth computation, and edge detection, can have a significant impact in reducing the data that needs to be examined by the classification engine for the presence of an object of interest. Experimental results on a Spartan 6 FPGA platform for face detection indicate data search reduction of up to 95%, which results in the system being able to process up to 50 1024×768 pixels images per second with a significantly reduced number of false positives.

  1. Design and Implementation of a Smart LED Lighting System Using a Self Adaptive Weighted Data Fusion Algorithm

    PubMed Central

    Sung, Wen-Tsai; Lin, Jia-Syun

    2013-01-01

    This work aims to develop a smart LED lighting system, which is remotely controlled by Android apps via handheld devices, e.g., smartphones, tablets, and so forth. The status of energy use is reflected by readings displayed on a handheld device, and it is treated as a criterion in the lighting mode design of the system. A multimeter, a wireless light dimmer, an IR learning remote module, etc. are connected to a server by means of RS 232/485 and a human-computer interface on a touch screen. The wireless data communication is designed to operate in compliance with the ZigBee standard, and signal processing on sensed data is performed through a self-adaptive weighted data fusion algorithm. A low variation in data fusion together with high stability is experimentally demonstrated in this work. The wireless light dimmer as well as the IR learning remote module can be instructed directly by commands given on the human-computer interface, and the reading on a multimeter can be displayed thereon via the server. This proposed smart LED lighting system can be remotely controlled, and a self-learning mode can be enabled by a single handheld device via WiFi transmission. Hence, this proposal is validated as an approach to power monitoring for home appliances, and is demonstrated as a digital home network in consideration of energy efficiency.
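
    Self-adaptive weighted fusion in this sense usually means inverse-variance weighting: each sensor's weight is recomputed from its recent noise estimate, which minimises the variance of the fused reading. A generic sketch follows; the ZigBee transport and the actual sensor set are outside its scope.

        import numpy as np

        def adaptive_weighted_fusion(readings):
            """Fuse redundant sensor windows with inverse-variance weights."""
            readings = [np.asarray(r, dtype=float) for r in readings]
            inv_var = np.array([1.0 / r.var(ddof=1) for r in readings])
            weights = inv_var / inv_var.sum()        # adapt to current noise levels
            fused = sum(w * r.mean() for w, r in zip(weights, readings))
            return fused, weights

        # Three light sensors; the noisiest one receives the smallest weight.
        rng = np.random.default_rng(4)
        sensors = [500 + rng.normal(0, s, 200) for s in (1.0, 2.0, 8.0)]
        print(adaptive_weighted_fusion(sensors))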

  2. Cryptococcal antigen screening and preemptive therapy in patients initiating antiretroviral therapy in resource-limited settings: a proposed algorithm for clinical implementation.

    PubMed

    Jarvis, Joseph N; Govender, Nelesh; Chiller, Tom; Park, Benjamin J; Longley, Nicky; Meintjes, Graeme; Bekker, Linda-Gail; Wood, Robin; Lawn, Stephen D; Harrison, Thomas S

    2012-01-01

    HIV-associated cryptococcal meningitis (CM) is estimated to cause over half a million deaths annually in Africa. Many of these deaths are preventable. Screening patients for subclinical cryptococcal infection at the time of entry into antiretroviral therapy programs using cryptococcal antigen (CRAG) immunoassays is highly effective in identifying patients at risk of developing CM, allowing these patients to then be targeted with "preemptive" therapy to prevent the development of severe disease. Such CRAG screening programs are currently being implemented in a number of countries; however, a strong evidence base and clear guidance on how to manage patients with subclinical cryptococcal infection identified by screening are lacking. We review the available evidence and propose a treatment algorithm for the management of patients with asymptomatic cryptococcal antigenemia.

  3. Implementation of Vertical-Rectification and CNN Models for an Analogic Range-Estimation Algorithm from a Stream of Images

    NASA Astrophysics Data System (ADS)

    Derrouich, Salah; Izumida, Kiichiro; Murao, Kenji; Shiiya, Kazuhisa

    The implementation of autonomous mobile robots in real-life environments still has numerous challenges to face. The most crucial problem is real-time decision-making, using appropriate methods with the right hardware. Recovering the three-dimensional scene geometry and detecting moving targets simultaneously from a stream of images are important tasks and have wide applicability in the creation of autonomous mobile robots, such as persistent choice of a safe route free of obstacles, targeting objects to avoid collisions, autonomous navigation and robot manipulation. In the present work, we focus on exploiting the robustness of the analogic-array-processing aspect introduced by the Cellular Nonlinear Network paradigm to develop a real-time tracking method for a stream of general signals coming from space-distributed sources for monocular autonomous mobile robots. The motivation for developing the new tracking method is that, on the one hand, the matching operation has to be performed in real time, while on the other hand 32-bit floating-point accuracy is often not required; this, together with vertical rectification as an intermediate process to minimize relative token displacements between two frames, can lead to a robust real-time object tracking system. The technique has been successfully applied to several indoor sequences of images. The results of the simulations are presented and discussed.

  4. Implementation of a data fusion algorithm for RODS, a real-time outbreak and disease surveillance system.

    SciTech Connect

    Brown, Douglas (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA)

    2005-10-01

    Due to the nature of many infectious agents, such as anthrax, symptoms may either take several days to manifest or resemble those of less serious illnesses leading to misdiagnosis. Thus, bioterrorism attacks that include the release of such agents are particularly dangerous and potentially deadly. For this reason, a system is needed for the quick and correct identification of disease outbreaks. The Real-time Outbreak Disease Surveillance System (RODS), initially developed by Carnegie Mellon University and the University of Pittsburgh, was created to meet this need. The RODS software implements different classifiers for pertinent health surveillance data in order to determine whether or not an outbreak has occurred. In an effort to improve the capability of RODS at detecting outbreaks, we incorporate a data fusion method. Data fusion is used to improve the results of a single classification by combining the output of multiple classifiers. This paper documents the first stages of the development of a data fusion system that can combine the output of the classifiers included in RODS.

  5. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  6. Development of low-cost ammonia gas sensors and data analysis algorithms to implement a monitoring grid of urban environmental pollutants.

    PubMed

    Chiesa, Maria; Rigoni, Federica; Paderno, Maria; Borghetti, Patrizia; Gagliotti, Giovanna; Bertoni, Maurizio; Ballarin Denti, Antonio; Schiavina, Lorenzo; Goldoni, Andrea; Sangaletti, Luigi

    2012-05-01

    The present study is focused on the implementation of a novel, low-cost, urban grid of nanostructured chemresistor gas sensors for ammonia concentration ([NH3]) monitoring, with NH3 being one of the main precursors of secondary fine particulate matter. Low-cost chemresistor gas sensors based on carbon nanotubes have been developed, their response to [NH3] in the 0.17-5.0 ppm range has been tested, and the devices have been properly calibrated under different relative humidity conditions in the 33-63% range. In order to improve the chemresistor selectivity towards [NH3], an Expert System, based on fuzzy logic and genetic algorithms, has been developed to extract the atmospheric [NH3] (with a sensitivity of a few ppb) from the output signal of a model chemresistor gas sensor exposed to an NO2, NOx and O3 gas mixture. The concentration of these pollutants, which are known to be the most significant interfering compounds during ammonia detection with carbon nanotube gas sensors, has been tracked by the ARPA monitoring network in the city of Milan, and the historical dataset collected over one year has been used to train the Expert System.

  7. Hardware-Algorithms Co-Design and Implementation of an Analog-to-Information Converter for Biosignals Based on Compressed Sensing.

    PubMed

    Pareschi, Fabio; Albertini, Pierluigi; Frattini, Giovanni; Mangia, Mauro; Rovatti, Riccardo; Setti, Gianluca

    2016-02-01

    We report the design and implementation of an Analog-to-Information Converter (AIC) based on Compressed Sensing (CS). The system is realized in a CMOS 180 nm technology and targets the acquisition of bio-signals with Nyquist frequency up to 100 kHz. To maximize performance and reduce hardware complexity, we co-design hardware together with acquisition and reconstruction algorithms. The resulting AIC outperforms previously proposed solutions mainly thanks to two key features. First, we adopt a novel method to deal with saturations in the computation of CS measurements. This allows no loss in performance even when 60% of measurements saturate. Second, the system is able to adapt itself to the energy distribution of the input by exploiting the so-called rakeness to maximize the amount of information contained in the measurements. With this approach, the 16 measurement channels integrated into a single device are expected to allow the acquisition and the correct reconstruction of most biomedical signals. As a case study, measurements on real electrocardiograms (ECGs) and electromyograms (EMGs) show that these signals can be reconstructed without any noticeable degradation with a compression rate, respectively, of 8 and 10.
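
    The acquisition model behind such an AIC is y = Phi @ x with far fewer measurements than samples, and reconstruction exploits sparsity. The sketch below uses a Bernoulli sensing matrix and off-the-shelf orthogonal matching pursuit from scikit-learn; the saturation handling and rakeness-based adaptation that distinguish the paper are not reproduced.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(5)
        n, m, k = 256, 64, 8                      # samples, measurements, sparsity

        x = np.zeros(n)                           # k-sparse test signal
        x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

        Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # sensing matrix
        y = Phi @ x                               # compressed measurements (AIC output)

        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
        x_hat = omp.fit(Phi, y).coef_
        print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))      # near zero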

  8. Graphics processing unit (GPU) implementation of image processing algorithms to improve system performance of the control acquisition, processing, and image display system (CAPIDS) of the micro-angiographic fluoroscope (MAF)

    NASA Astrophysics Data System (ADS)

    Swetadri Vasan, S. N.; Ionita, Ciprian N.; Titus, A. H.; Cartwright, A. N.; Bednarek, D. R.; Rudin, S.

    2012-03-01

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one does not depend upon the result from the operation on the other, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massive parallel processing implementation. Thus for an algorithm which has a high amount of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat field correction, temporal filtering, image subtraction, roadmap mask generation and display window and leveling. A comparison between the previous and the upgraded version of CAPIDS has been presented, to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements (with respect to timing or frame rate) have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure and automatic image windowing and leveling during each frame.
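
    Flat-field correction is a good example of the pixel independence the abstract stresses: every output pixel depends only on the same pixel in the raw, dark, and flood images, so one GPU thread per pixel suffices. A NumPy sketch of the arithmetic follows; the array names are assumptions.

        import numpy as np

        def flat_field_correct(raw, dark, flood):
            """Per-pixel gain correction: no pixel depends on any other pixel."""
            gain = flood.astype(float) - dark
            gain = np.where(gain > 0, gain, 1.0)         # guard dead pixels
            return (raw.astype(float) - dark) * gain.mean() / gain

        def temporal_filter(prev, frame, alpha=0.25):
            """Recursive temporal filter, again purely per-pixel."""
            return alpha * frame + (1.0 - alpha) * prev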

  9. Graphics Processing Unit (GPU) implementation of image processing algorithms to improve system performance of the Control, Acquisition, Processing, and Image Display System (CAPIDS) of the Micro-Angiographic Fluoroscope (MAF).

    PubMed

    Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S

    2012-02-23

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one does not depend upon the result from the operation on the other, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massive parallel processing implementation. Thus for an algorithm which has a high amount of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat field correction, temporal filtering, image subtraction, roadmap mask generation and display window and leveling. A comparison between the previous and the upgraded version of CAPIDS has been presented, to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements (with respect to timing or frame rate) have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure and automatic image windowing and leveling during each frame.

  10. Implementation of a computationally efficient least-squares algorithm for highly under-determined three-dimensional diffuse optical tomography problems

    PubMed Central

    Yalavarthy, Phaneendra K.; Lynch, Daniel R.; Pogue, Brian W.; Dehghani, Hamid; Paulsen, Keith D.

    2008-01-01

    Three-dimensional (3D) diffuse optical tomography is known to be a nonlinear, ill-posed and sometimes under-determined problem, where regularization is added to the minimization to allow convergence to a unique solution. In this work, a generalized least-squares (GLS) minimization method was implemented, which employs weight matrices for both data-model misfit and optical properties to include their variances and covariances, using a computationally efficient scheme. This allows inversion of a matrix that is of a dimension dictated by the number of measurements, instead of by the number of imaging parameters. This increases the computation speed up to four times per iteration in most of the under-determined 3D imaging problems. An analytic derivation, using the Sherman–Morrison–Woodbury identity, is shown for this efficient alternative form and it is proven to be equivalent, not only analytically, but also numerically. Equivalent alternative forms for other minimization methods, like Levenberg–Marquardt (LM) and Tikhonov, are also derived. Three-dimensional reconstruction results indicate that the poor recovery of quantitatively accurate values in 3D optical images can also be a characteristic of the reconstruction algorithm, along with the target size. Interestingly, usage of GLS reconstruction methods reduces error in the periphery of the image, as expected, and improves by 20% the ability to quantify local interior regions in terms of the recovered optical contrast, as compared to LM methods. Characterization of detector photomultiplier tube noise has enabled the use of the GLS method for reconstructing experimental data and showed promise for better quantification of targets in 3D optical imaging. Use of these new alternative forms becomes effective when the ratio of the number of imaging property parameters exceeds the number of measurements by a factor greater than 2. PMID:18561643
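
    The computational trick can be seen in a stripped-down Tikhonov setting (the paper's GLS form adds data and parameter weight matrices): by the Sherman-Morrison-Woodbury identity, the regularised update can be computed by inverting a matrix of the size of the data rather than of the parameters.

        import numpy as np

        rng = np.random.default_rng(6)
        m, n, lam = 40, 500, 1e-2            # measurements << imaging parameters
        J = rng.normal(size=(m, n))          # Jacobian (sensitivity) matrix
        b = rng.normal(size=m)               # data-model misfit

        # Parameter-space form: solve an n x n system.
        dx1 = np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ b)

        # Equivalent data-space form: solve only an m x m system.
        dx2 = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(m), b)

        print(np.allclose(dx1, dx2))         # True: the two updates agree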

  11. INSENS classification algorithm report

    SciTech Connect

    Hernandez, J.E.; Frerking, C.J.; Myers, D.W.

    1993-07-28

    This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.

  12. The design and implementation of cost-effective algorithms for direct solution of banded linear systems on the vector processor system 32 supercomputer

    NASA Technical Reports Server (NTRS)

    Samba, A. S.

    1985-01-01

    The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail. The performance of the VCR on a three-parameter model problem is also illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is the 'Customized Reduction of Augmented Triangles' (CRAT). CRAT has the dominant characteristics of an efficient VPS 32 algorithm. CRAT is tailored to the pipeline architecture of the VPS 32 and as a consequence the algorithm is implicitly vectorizable.

  13. OpenEIS Algorithms

    SciTech Connect

    2013-07-29

    The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.

  14. Parallel Wolff Cluster Algorithms

    NASA Astrophysics Data System (ADS)

    Bae, S.; Ko, S. H.; Coddington, P. D.

    The Wolff single-cluster algorithm is the most efficient method known for Monte Carlo simulation of many spin models. Due to the irregular size, shape and position of the Wolff clusters, this method does not easily lend itself to efficient parallel implementation, so that simulations using this method have thus far been confined to workstations and vector machines. Here we present two parallel implementations of this algorithm, and show that one gives fairly good performance on a MIMD parallel computer.
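
    For reference, the serial Wolff update that these parallel schemes reproduce is short: grow a cluster from a random seed, adding aligned neighbours with probability 1 - exp(-2*beta*J), then flip the whole cluster. A 2-D Ising sketch:

        import numpy as np

        rng = np.random.default_rng(7)

        def wolff_update(spins, beta, J=1.0):
            """One single-cluster update on a periodic 2-D Ising lattice."""
            L = spins.shape[0]
            p_add = 1.0 - np.exp(-2.0 * beta * J)
            seed = (int(rng.integers(L)), int(rng.integers(L)))
            s0 = spins[seed]
            cluster, stack = {seed}, [seed]
            while stack:
                i, j = stack.pop()
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    ni, nj = ni % L, nj % L                   # periodic boundaries
                    if ((ni, nj) not in cluster and spins[ni, nj] == s0
                            and rng.random() < p_add):
                        cluster.add((ni, nj))
                        stack.append((ni, nj))
            for i, j in cluster:
                spins[i, j] = -s0                             # flip the whole cluster
            return len(cluster)

        spins = rng.choice([-1, 1], size=(32, 32))
        sizes = [wolff_update(spins, beta=0.44) for _ in range(100)]  # near T_c

    The irregular cluster shapes that make this hard to parallelise show up directly in the spread of the returned cluster sizes.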

  15. Clinical implementation of a digital tomosynthesis-based seed reconstruction algorithm for intraoperative postimplant dose evaluation in low dose rate prostate brachytherapy

    SciTech Connect

    Brunet-Benkhoucha, Malik; Verhaegen, Frank; Lassalle, Stephanie; Beliveau-Nadeau, Dominic; Reniers, Brigitte; Donath, David; Taussky, Daniel; Carrier, Jean-Francois

    2009-11-15

    Purpose: The low dose rate brachytherapy procedure would benefit from an intraoperative postimplant dosimetry verification technique to identify possible suboptimal dose coverage and suggest a potential reimplantation. The main objective of this project is to develop an efficient, operator-free, intraoperative seed detection technique using the imaging modalities available in a low dose rate brachytherapy treatment room. Methods: This intraoperative detection allows a complete dosimetry calculation that can be performed right after an I-125 prostate seed implantation, while the patient is still under anesthesia. To accomplish this, a digital tomosynthesis-based algorithm was developed. This automatic filtered reconstruction of the 3D volume requires seven projections acquired over a total angle of 60° with an isocentric imaging system. Results: A phantom study was performed to validate the technique, which was then used in a retrospective clinical study involving 23 patients. In the patient study, the automatic tomosynthesis-based reconstruction yielded a seed detection rate of 96.7% with 2.6% false positives. The seed localization error obtained in the phantom study is 0.4 ± 0.4 mm. The average time needed for reconstruction is below 1 min. The reconstruction algorithm also provides the seed orientation with an uncertainty of 10° ± 8°. The seed detection algorithm presented here is reliable and was efficiently used in the clinic. Conclusions: When combined with an appropriate coregistration technique to identify the organs in the seed coordinate system, this algorithm will offer new possibilities for a next generation of clinical brachytherapy systems.

  16. Design and implementation of a hybrid genetic algorithm and artificial neural network system for predicting the sizes of unerupted canines and premolars.

    PubMed

    Moghimi, S; Talebi, M; Parisay, I

    2012-08-01

    The aim of this study was to develop a novel hybrid genetic algorithm and artificial neural network (GA-ANN) system for predicting the sizes of unerupted canines and premolars during the mixed dentition period. This study was performed on 106 untreated subjects (52 girls, 54 boys, aged 13-15 years). Data were obtained from dental cast measurements. A hybrid GA-ANN algorithm was developed to find the best reference teeth and the most accurate mapping function. Based on a regression analysis, the strongest correlation was observed between the sum of the mesiodistal widths of the mandibular canines and premolars and the mesiodistal widths of the mandibular first molars and incisors (r = 0.697). In the maxilla, the highest correlation was observed between the sum of the mesiodistal widths of the canines and premolars and the mesiodistal widths of the mandibular first molars and maxillary central incisors (r = 0.742). The hybrid GA-ANN algorithm selected the mandibular first molars and incisors and the maxillary central incisors as the reference teeth for predicting the sum of the mesiodistal widths of the canines and premolars. The prediction error rates and maximum rates of over/underestimation using the hybrid GA-ANN algorithm were smaller than those using linear regression analyses.

  17. Full Monte Carlo and measurement-based overall performance assessment of improved clinical implementation of eMC algorithm with emphasis on lower energy range.

    PubMed

    Ojala, Jarkko; Kapanen, Mika; Hyödynmaa, Simo

    2016-06-01

    The new version 13.6.23 of the electron Monte Carlo (eMC) algorithm in the Varian Eclipse™ treatment planning system includes a model for the 4MeV electron beam and some general improvements for dose calculation. This study provides the first overall accuracy assessment of this algorithm against full Monte Carlo (MC) simulations for electron beams from 4MeV to 16MeV, with most emphasis on the lower energy range. Beams in a homogeneous water phantom and clinical treatment plans were investigated, including measurements in the water phantom. Two different material sets were used with full MC: (1) the one applied in the eMC algorithm and (2) the one included in Eclipse™ for other algorithms. The results of the clinical treatment plans were also compared to those of the older eMC version 11.0.31. In the water phantom the dose differences against the full MC were mostly less than 3%, with distance-to-agreement (DTA) values within 2mm. Larger discrepancies were obtained in build-up regions, at depths near the maximum electron ranges and with small apertures. For the clinical treatment plans the overall dose differences were mostly within 3% or 2mm with the first material set. Larger differences were observed for a large 4MeV beam entering a curved patient surface with extended SSD and also in regions of large dose gradients. Still, the DTA values were within 3mm. The discrepancies between the eMC and the full MC were generally larger for the second material set. Version 11.0.31 always performed inferiorly to version 13.6.23.

  18. Algorithm Visualization System for Teaching Spatial Data Algorithms

    ERIC Educational Resources Information Center

    Nikander, Jussi; Helminen, Juha; Korhonen, Ari

    2010-01-01

    TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…

  19. Implementation and verification of an enhanced algorithm for the automatic computation of RR-interval series derived from 24 h 12-lead ECGs.

    PubMed

    Hagmair, Stefan; Braunisch, Matthias C; Bachler, Martin; Schmaderer, Christoph; Hasenau, Anna-Lena; Bauer, Axel; Rizas, Kostantinos D; Wassertheurer, Siegfried; Mayer, Christopher C

    2017-01-01

    An important tool in the early diagnosis of cardiac dysfunctions is the analysis of electrocardiograms (ECGs) obtained from ambulatory long-term recordings. Heart rate variability (HRV) analysis has become a significant tool for assessing cardiac health. The usefulness of HRV assessment for the prediction of cardiovascular events in end-stage renal disease patients was previously reported. The aim of this work is to verify an enhanced algorithm to obtain an RR-interval time series in a fully automated manner. The multi-lead corrected R-peaks of each ECG lead are used for RR-series computation and the algorithm is verified by comparison with manually reviewed reference RR-time series. Twenty-four hour 12-lead ECG recordings of 339 end-stage renal disease patients from the ISAR (rISk strAtification in end-stage Renal disease) study were used. Seven universal indicators were calculated to allow for a generalization of the comparison results. The median of the synchronization indicator, i.e. the intraclass correlation coefficient, was 96.4%, and the median of the root mean square error of the difference time series was 7.5 ms. The negligible error and high synchronization rate indicate high similarity and verify the agreement between the fully automated RR-interval series calculated with the AIT Multi-Lead ECGsolver and the reference time series. As a future perspective, HRV parameters calculated on this RR-time series can be evaluated in longitudinal studies to ensure clinical benefit.

  20. Fracture Analysis of Vessels. Oak Ridge FAVOR, v06.1, Computer Code: Theory and Implementation of Algorithms, Methods, and Correlations

    SciTech Connect

    Williams, P. T.; Dickson, T. L.; Yin, S.

    2007-12-01

    The current regulations to insure that nuclear reactor pressure vessels (RPVs) maintain their structural integrity when subjected to transients such as pressurized thermal shock (PTS) events were derived from computational models developed in the early-to-mid 1980s. Since that time, advancements and refinements in relevant technologies that impact RPV integrity assessment have led to an effort by the NRC to re-evaluate its PTS regulations. Updated computational methodologies have been developed through interactions between experts in the relevant disciplines of thermal hydraulics, probabilistic risk assessment, materials embrittlement, fracture mechanics, and inspection (flaw characterization). Contributors to the development of these methodologies include the NRC staff, their contractors, and representatives from the nuclear industry. These updated methodologies have been integrated into the Fracture Analysis of Vessels -- Oak Ridge (FAVOR, v06.1) computer code developed for the NRC by the Heavy Section Steel Technology (HSST) program at Oak Ridge National Laboratory (ORNL). The FAVOR, v04.1, code represents the baseline NRC-selected applications tool for re-assessing the current PTS regulations. This report is intended to document the technical bases for the assumptions, algorithms, methods, and correlations employed in the development of the FAVOR, v06.1, code.

  1. Reductions in post-hepatectomy liver failure and related mortality after implementation of the LiMAx algorithm in preoperative work-up: a single-centre analysis of 1170 hepatectomies of one or more segments

    PubMed Central

    Jara, Maximilian; Reese, Tim; Malinowski, Maciej; Valle, Erika; Seehofer, Daniel; Puhl, Gero; Neuhaus, Peter; Pratschke, Johann; Stockmann, Martin

    2015-01-01

    Objectives: Post-hepatectomy liver failure has a major impact on patient outcome. This study aims to explore the impact of the integration of a novel patient-centred evaluation, the LiMAx algorithm, on perioperative patient outcome after hepatectomy. Methods: Trends in perioperative variables and morbidity and mortality rates in 1170 consecutive patients undergoing elective hepatectomy between January 2006 and December 2011 were analysed retrospectively. Propensity score matching was used to compare the effects on morbidity and mortality of the integration of the LiMAx algorithm into clinical practice. Results: Over the study period, the proportion of complex hepatectomies increased from 29.1% in 2006 to 37.7% in 2011 (P = 0.034). Similarly, the proportion of patients with liver cirrhosis selected for hepatic surgery rose from 6.9% in 2006 to 11.3% in 2011 (P = 0.039). Despite these increases, rates of post-hepatectomy liver failure fell from 24.7% in 2006 to 9.0% in 2011 (P < 0.001) and liver failure-related postoperative mortality decreased from 4.0% in 2006 to 0.9% in 2011 (P = 0.014). Propensity score matching was associated with reduced rates of post-hepatectomy liver failure [24.7% (n = 77) versus 11.2% (n = 35); P < 0.001] and related mortality [3.8% (n = 12) versus 1.0% (n = 3); P = 0.035]. Conclusions: Postoperative liver failure and postoperative liver failure-related mortality decreased in patients undergoing hepatectomy following the implementation of the LiMAx algorithm. PMID:26058324

  2. The Use of Genetic Algorithms as an Inverse Technique to Guide the Design and Implementation of Research at a Test Site in Shelby County, Tennessee

    NASA Astrophysics Data System (ADS)

    Gentry, R. W.

    2002-12-01

    The Shelby Farms test site in Shelby County, Tennessee is being developed to better understand recharge hydraulics to the Memphis aquifer in areas where leakage through an overlying aquitard occurs. The site is unique in that it demonstrates many opportunities for interdisciplinary research regarding environmental tracers, anthropogenic impacts and inverse modeling. The objective of the research funding the development of the test site is to better understand the groundwater hydrology and hydraulics between a shallow alluvial aquifer and the Memphis aquifer given an area of leakage, defined as an aquitard window. The site is situated in an area on the boundary of a highly developed urban area and is currently being used by an agricultural research agency and a local recreational park authority. Also, an abandoned landfill is situated to the immediate south of the window location. Previous research by the USGS determined the location of the aquitard window subsequent to the landfill closure. Inverse modeling using a genetic algorithm approach has identified the likely extents of the area of the window given an interaquifer accretion rate. These results, coupled with additional fieldwork, have been used to guide the direction of the field studies and the overall design of the research project. This additional work has encompassed the drilling of additional monitoring wells in nested groups by rotasonic drilling methods. The core collected during the drilling will provide additional constraints to the physics of the problem that may provide additional help in redefining the conceptual model. The problem is non-unique with respect to the leakage area and accretion rate and further research is being performed to provide some idea of the advective flow paths using a combination of tritium and 3He analyses and geochemistry. The outcomes of the research will result in a set of benchmark data and physical infrastructure that can be used to evaluate other environmental

  3. Seamless Merging of Hypertext and Algorithm Animation

    ERIC Educational Resources Information Center

    Karavirta, Ville

    2009-01-01

    Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…

  4. An Efficient, FPGA-Based, Cluster Detection Algorithm Implementation for a Strip Detector Readout System in a Time Projection Chamber Polarimeter

    NASA Technical Reports Server (NTRS)

    Gregory, Kyle J.; Hill, Joanne E. (Editor); Black, J. Kevin; Baumgartner, Wayne H.; Jahoda, Keith

    2016-01-01

    A fundamental challenge in a spaceborne application of a gas-based Time Projection Chamber (TPC) for observation of X-ray polarization is handling the large amount of data collected. The TPC polarimeter described uses the APV-25 Application Specific Integrated Circuit (ASIC) to readout a strip detector. Two dimensional photoelectron track images are created with a time projection technique and used to determine the polarization of the incident X-rays. The detector produces a 128x30 pixel image per photon interaction with each pixel registering 12 bits of collected charge. This creates challenging requirements for data storage and downlink bandwidth with only a modest incidence of photons and can have a significant impact on the overall mission cost. An approach is described for locating and isolating the photoelectron track within the detector image, yielding a much smaller data product, typically between 8x8 pixels and 20x20 pixels. This approach is implemented using a Microsemi RT-ProASIC3-3000 Field-Programmable Gate Array (FPGA), clocked at 20 MHz and utilizing 10.7k logic gates (14% of FPGA), 20 Block RAMs (17% of FPGA), and no external RAM. Results will be presented, demonstrating successful photoelectron track cluster detection with minimal impact to detector dead-time.
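
    The track-isolation step can be prototyped in software before committing to FPGA logic. The sketch below (illustrative Python with an assumed noise threshold; the flight implementation is FPGA firmware) grows the connected cluster around the brightest pixel and returns a small bounding-box cutout:

      import numpy as np
      from collections import deque

      def extract_track(image, threshold, margin=2):
          """Isolate the cluster containing the brightest pixel and
          return a small bounding-box cutout of the detector image."""
          seed = np.unravel_index(np.argmax(image), image.shape)
          in_cluster = np.zeros(image.shape, dtype=bool)
          in_cluster[seed] = True
          queue = deque([seed])
          while queue:                       # 4-connected flood fill
              i, j = queue.popleft()
              for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                  if (0 <= ni < image.shape[0] and 0 <= nj < image.shape[1]
                          and not in_cluster[ni, nj]
                          and image[ni, nj] > threshold):
                      in_cluster[ni, nj] = True
                      queue.append((ni, nj))
          rows, cols = np.nonzero(in_cluster)
          r0 = max(rows.min() - margin, 0)
          r1 = min(rows.max() + margin + 1, image.shape[0])
          c0 = max(cols.min() - margin, 0)
          c1 = min(cols.max() + margin + 1, image.shape[1])
          return image[r0:r1, c0:c1]         # typically ~8x8 to ~20x20

      rng = np.random.default_rng(0)
      frame = rng.normal(0.0, 1.0, (30, 128))   # synthetic noise floor
      frame[12:15, 40:55] += 8.0                # synthetic "track"
      print(extract_track(frame, threshold=4.0).shape)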

  5. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
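
    Since this record introduces basic genetic algorithm concepts, a minimal self-contained sketch may help. The toy GA below (illustrative only: tournament selection, one-point crossover and bit-flip mutation on the classic OneMax problem, with made-up population and rate parameters) is not the tool described in the record:

      import numpy as np

      rng = np.random.default_rng(0)
      POP, BITS, GENS, MUT = 40, 64, 60, 1.0 / 64

      def fitness(pop):                      # OneMax: count the 1-bits
          return pop.sum(axis=1)

      pop = rng.integers(0, 2, size=(POP, BITS))
      for _ in range(GENS):
          f = fitness(pop)
          # Tournament selection: each parent is the fitter of two picks.
          i, j = rng.integers(POP, size=(2, POP))
          parents = pop[np.where(f[i] > f[j], i, j)]
          # One-point crossover between consecutive parent pairs.
          children = parents.copy()
          for k, cut in enumerate(rng.integers(1, BITS, size=POP // 2)):
              children[2*k, cut:] = parents[2*k + 1, cut:]
              children[2*k + 1, cut:] = parents[2*k, cut:]
          # Bit-flip mutation.
          children ^= (rng.random(children.shape) < MUT).astype(children.dtype)
          pop = children
      print("best fitness:", fitness(pop).max())   # approaches BITS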

  6. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  7. Algorithms on ensemble quantum computers.

    PubMed

    Boykin, P Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh

    2010-06-01

    In ensemble (or bulk) quantum computation, all computations are performed on an ensemble of computers rather than on a single computer. Measurements of qubits in an individual computer cannot be performed; instead, only expectation values (over the complete ensemble of computers) can be measured. As a result of this limitation on the model of computation, many algorithms cannot be processed directly on such computers, and must be modified, as the common strategy of delaying the measurements usually does not resolve this ensemble-measurement problem. Here we present several new strategies for resolving this problem. Based on these strategies we provide new versions of some of the most important quantum algorithms, versions that are suitable for implementing on ensemble quantum computers, e.g., on liquid NMR quantum computers. These algorithms are Shor's factorization algorithm, Grover's search algorithm (with several marked items), and an algorithm for quantum fault-tolerant computation. The first two algorithms are simply modified using randomizing and sorting strategies. For the last algorithm, we develop a classical-quantum hybrid strategy for removing measurements. We use it to present a novel quantum fault-tolerant scheme. More explicitly, we present schemes for fault-tolerant, measurement-free implementation of the Toffoli and σ_z^(1/4) gates, as these operations cannot be implemented "bitwise", and their standard fault-tolerant implementations require measurement.

  8. Algorithm Calculates Cumulative Poisson Distribution

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.

    1992-01-01

    Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).
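
    The underflow/overflow scaling described above can also be handled by working in log space. The sketch below is not the CUMPOIS program (which is FORTRAN and inserts temporary scale factors); it is a minimal modern equivalent using the log-sum-exp trick, so that neither e^(-lambda) nor lambda^n is ever formed directly:

      import math

      def log_cum_poisson(k, lam):
          """log P(X <= k) for X ~ Poisson(lam), computed in log space."""
          # log of term n is: -lam + n*log(lam) - log(n!)
          log_terms = [-lam + n * math.log(lam) - math.lgamma(n + 1)
                       for n in range(k + 1)]
          m = max(log_terms)                 # log-sum-exp trick
          return m + math.log(sum(math.exp(t - m) for t in log_terms))

      # Works where naive evaluation would underflow, e.g. lam = 2000:
      print(math.exp(log_cum_poisson(2000, 2000.0)))   # ~0.5 (median region)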

  9. Quantum Algorithms

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.

  10. Two Algorithms for Processing Electronic Nose Data

    NASA Technical Reports Server (NTRS)

    Young, Rebecca; Linnell, Bruce

    2007-01-01

    Two algorithms for processing the digitized readings of electronic noses, and computer programs to implement the algorithms, have been devised in a continuing effort to increase the utility of electronic noses as means of identifying airborne compounds and measuring their concentrations. One algorithm identifies the two vapors in a two-vapor mixture and estimates the concentration of each vapor (in principle, this algorithm could be extended to more than two vapors). The other algorithm identifies a single vapor and estimates its concentration.
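
    One plausible realization of the first algorithm is library-based least-squares unmixing over candidate vapor pairs. The sketch below is purely illustrative (the sensor library, noise level and data are synthetic, and this record does not disclose the actual method details):

      import numpy as np
      from itertools import combinations
      from scipy.optimize import nnls

      rng = np.random.default_rng(3)
      SENSORS, VAPORS = 16, 6
      library = rng.uniform(0.1, 1.0, (SENSORS, VAPORS))  # response per unit conc.

      # Synthesize a measurement: vapors 1 and 4 at known concentrations.
      truth = np.zeros(VAPORS)
      truth[1], truth[4] = 2.0, 0.7
      y = library @ truth + 0.01 * rng.standard_normal(SENSORS)

      # Try every vapor pair; keep the pair whose nonnegative least-squares
      # fit leaves the smallest residual.
      best = min(combinations(range(VAPORS), 2),
                 key=lambda p: nnls(library[:, list(p)], y)[1])
      conc, _ = nnls(library[:, list(best)], y)
      print("identified vapors:", best, "concentrations:", conc)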

  11. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 Hyper Cube with 64 nodes. It is interesting that the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.

  12. Prediction of aqueous solubility of drug-like molecules using a novel algorithm for automatic adjustment of relative importance of descriptors implemented in counter-propagation artificial neural networks.

    PubMed

    Erić, Slavica; Kalinić, Marko; Popović, Aleksandar; Zloh, Mire; Kuzmanovski, Igor

    2012-11-01

    In this work, we present a novel approach for the development of models for prediction of aqueous solubility, based on the implementation of an algorithm for the automatic adjustment of descriptors' relative importance (AARI) in counter-propagation artificial neural networks (CPANN). Using this approach, the interpretability of the models based on artificial neural networks, which are traditionally considered as "black box" models, was significantly improved. For the development of the model, a data set consisting of 374 diverse drug-like molecules, divided into training (n=280) and test (n=94) sets using self-organizing maps, was used. A heuristic method was applied in preselecting a small number of the most significant descriptors to serve as inputs for CPANN training. The performance of the final model, based on 7 descriptors, was satisfactory for both the training (RMSEP(train)=0.668) and test set (RMSEP(test)=0.679). The model was found to be highly interpretable in terms of solubility and capable of rationalizing structural features that could have an impact on the solubility of the compounds investigated. Therefore, the proposed approach can significantly enhance model usability by giving guidance for structural modifications of compounds with the aim of improving solubility in the early phase of drug discovery.

  13. An Optically Implemented Kalman Filter Algorithm.

    DTIC Science & Technology

    1983-12-01

    [The abstract for this record is not available; the retrieved text contains only scanned front matter: a figure list ("Out-of-Plane Coordinate Transformation Geometry", "Horizontal Inertial Space/FLIR Image Plane Geometrical Relationship", "Vertical Inertial Space/FLIR Image Plane Geometrical Relationship", "Image Intensity Characteristics") and a nomenclature table of state vectors, state-space plant matrices, the control input distribution matrix, and noise weighting matrices.]

  14. Hardware Algorithm Implementation for Mission Specific Processing

    DTIC Science & Technology

    2008-03-01

    developed and fabricated before it can get into the hands of the customer. The design to market can take up to 1-2 years depending on the technology and the...positioning systems, radar systems, and different communication devices. These devices, being mobile or not, are required for them to operate in the...field and communicate with Command and Control. For this reason it is essential that these libraries be built and perfected. With any circuit design

  15. Parallelization of Edge Detection Algorithm using MPI on Beowulf Cluster

    NASA Astrophysics Data System (ADS)

    Haron, Nazleeni; Amir, Ruzaini; Aziz, Izzatdin A.; Jung, Low Tan; Shukri, Siti Rohkmah

    In this paper, we present the design of parallel Sobel edge detection algorithm using Foster's methodology. The parallel algorithm is implemented using MPI message passing library and master/slave algorithm. Every processor performs the same sequential algorithm but on different part of the image. Experimental results conducted on Beowulf cluster are presented to demonstrate the performance of the parallel algorithm.
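
    The per-processor work in such a master/slave decomposition is an ordinary sequential Sobel pass over an image strip. A minimal sketch follows (numpy instead of the paper's MPI implementation; strip boundaries would need a one-row halo exchange, as noted in the comments):

      import numpy as np

      KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # Gx kernel
      KY = KX.T                                                   # Gy kernel

      def sobel(block):
          """Gradient magnitude of one image block (the per-slave work)."""
          h, w = block.shape
          out = np.zeros((h - 2, w - 2))
          for i in range(h - 2):
              for j in range(w - 2):
                  win = block[i:i+3, j:j+3]
                  out[i, j] = np.hypot((win * KX).sum(), (win * KY).sum())
          return out

      image = np.random.default_rng(0).random((64, 64))
      edges = sobel(image)      # in the parallel version, each slave gets a
                                # row strip plus a one-row halo per neighbour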

  16. Cobweb/3: A portable implementation

    NASA Technical Reports Server (NTRS)

    Mckusick, Kathleen; Thompson, Kevin

    1990-01-01

    An algorithm is examined for data clustering and incremental concept formation. An overview is given of the Cobweb/3 system and the algorithm on which it is based, as well as the practical details of obtaining and running the system code. The implementation features a flexible user interface which includes a graphical display of the concept hierarchies that the system constructs.

  17. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  18. A Robustly Stabilizing Model Predictive Control Algorithm

    NASA Technical Reports Server (NTRS)

    Acikmese, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  19. In-Trail Procedure (ITP) Algorithm Design

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar A.; Siminiceanu, Radu I.

    2007-01-01

    The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.

  20. Efficient demultiplexing algorithm for noncontiguous carriers

    NASA Technical Reports Server (NTRS)

    Thanawala, A. A.; Kwatra, S. C.; Jamali, M. M.; Budinger, J.

    1992-01-01

    A channel separation algorithm for the frequency division multiple access/time division multiplexing (FDMA/TDM) scheme is presented. It is shown that implementation using this algorithm can be more effective than the fast Fourier transform (FFT) algorithm when only a small number of carriers need to be selected from many, such as in satellite Earth terminals. The algorithm is based on polyphase filtering followed by application of a generalized Walsh-Hadamard transform (GWHT). Comparison of the transform technique used in this algorithm with the discrete Fourier transform (DFT) and FFT is given. Estimates of the computational rates and power requirements to implement this system are also given.
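
    The transform at the core of this approach is attractive because it needs no multiplications. Below is a sketch of the standard fast Walsh-Hadamard transform (the plain WHT, not the generalized GWHT variant used in this record):

      import numpy as np

      def fwht(x):
          """Fast Walsh-Hadamard transform (length must be 2**k).
          Uses only additions and subtractions -- no multiplications."""
          x = np.asarray(x, dtype=float).copy()
          h = 1
          while h < len(x):
              for start in range(0, len(x), 2 * h):
                  a = x[start:start + h].copy()
                  b = x[start + h:start + 2 * h].copy()
                  x[start:start + h] = a + b
                  x[start + h:start + 2 * h] = a - b
              h *= 2
          return x

      y = fwht([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
      print(y)     # applying fwht again and dividing by the length
                   # recovers the input (the WHT is self-inverse up to scale)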

  1. Recursive algorithms for vector extrapolation methods

    NASA Technical Reports Server (NTRS)

    Ford, William F.; Sidi, Avram

    1988-01-01

    Three classes of recursion relations are devised for implementing some extrapolation methods for vector sequences. One class of recursion relations can be used to implement methods like the modified minimal polynomial extrapolation and the topological epsilon algorithm; another allows implementation of methods like minimal polynomial and reduced rank extrapolation; while the remaining class can be employed in the implementation of the vector E-algorithm. Operation counts and storage requirements for these methods are also discussed, and some related techniques for special applications are presented, including methods for the rapid evaluation of the vector E-algorithm.

  2. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  3. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how close the estimated spectrum is to the actual spectrum. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.

  4. Basic cluster compression algorithm

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.

    1980-01-01

    Feature extraction and data compression of LANDSAT data is accomplished by BCCA program which reduces costs associated with transmitting, storing, distributing, and interpreting multispectral image data. Algorithm uses spatially local clustering to extract features from image data to describe spectral characteristics of data set. Approach requires only simple repetitive computations, and parallel processing can be used for very high data rates. Program is written in FORTRAN IV for batch execution and has been implemented on SEL 32/55.

  5. SPA: Solar Position Algorithm

    NASA Astrophysics Data System (ADS)

    Reda, Ibrahim; Andreas, Afshin

    2015-04-01

    The Solar Position Algorithm (SPA) calculates the solar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of +/- 0.0003 degrees based on the date, time, and location on Earth. SPA is implemented in C; in addition to being available for download, an online calculator using this code is available at http://www.nrel.gov/midc/solpos/spa.html.

  6. Algorithms and Requirements for Measuring Network Bandwidth

    SciTech Connect

    Jin, Guojun

    2002-12-08

    This report unveils new algorithms for actively measuring (not estimating) available bandwidth with very low intrusion and for computing cross traffic, thus estimating the physical bandwidth; it provides mathematical proof that the algorithms are accurate, and it addresses conditions, requirements, and limitations for new and existing algorithms for measuring network bandwidths. The paper also discusses a number of important terminologies and issues for network bandwidth measurement, and introduces a fundamental parameter, the Maximum Burst Size, that is critical for implementing algorithms based on multiple packets.

  7. Scheduling algorithms

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Wood, David; Sorensen, Stephen E.

    1996-12-01

    This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.

  8. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O`Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  9. Space complexity of estimation of distribution algorithms.

    PubMed

    Gao, Yong; Culberson, Joseph

    2005-01-01

    In this paper, we investigate the space complexity of the Estimation of Distribution Algorithms (EDAs), a class of sampling-based variants of the genetic algorithm. By analyzing the nature of EDAs, we identify criteria that characterize the space complexity of two typical implementation schemes of EDAs, the factorized distribution algorithm and Bayesian network-based algorithms. Using random additive functions as the prototype, we prove that the space complexity of the factorized distribution algorithm and Bayesian network-based algorithms is exponential in the problem size even if the optimization problem has a very sparse interaction structure.

  10. Protein structure optimization with a "Lamarckian" ant colony algorithm.

    PubMed

    Oakley, Mark T; Richardson, E Grace; Carr, Harriet; Johnston, Roy L

    2013-01-01

    We describe the LamarckiAnt algorithm: a search algorithm that combines the features of a "Lamarckian" genetic algorithm and ant colony optimization. We have implemented this algorithm for the optimization of BLN model proteins, which have frustrated energy landscapes and represent a challenge for global optimization algorithms. We demonstrate that LamarckiAnt performs competitively with other state-of-the-art optimization algorithms.

  11. What is a Systolic Algorithm?

    NASA Astrophysics Data System (ADS)

    Rao, Sailesh K.; Kollath, T.

    1986-07-01

    In this paper, we show that every systolic array executes a Regular Iterative Algorithm with a strongly separating hyperplane and conversely, that every such algorithm can be implemented on a systolic array. This characterization provides us with a unified framework for describing the contributions of other authors. It also exposes the relevance of many fundamental concepts that were introduced in the sixties by Hennie, Waite and Karp, Miller and Winograd, to the present-day concern of systolic arrays.

  12. Research on Routing Selection Algorithm Based on Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna

    The genetic algorithm is a random search and optimization method based on natural selection and the genetic mechanisms of living beings. In recent years, because of its potential in solving complicated problems and its successful application in industrial projects, the genetic algorithm has attracted wide attention from domestic and international scholars. Routing Selection communication has been defined as a standard communication model of IP version 6. This paper proposes a service model for Routing Selection communication, and designs and implements a new Routing Selection algorithm based on a genetic algorithm. The experimental simulation results show that this algorithm can obtain better solutions in less time and achieve a more balanced network load, which enhances the search ratio and the availability of network resources, and improves the quality of service.

  13. Fast algorithm for computing complex number-theoretic transforms

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Liu, K. Y.; Truong, T. K.

    1977-01-01

    A high-radix FFT algorithm for computing transforms over GF(q²), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.

  14. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  15. Approximation algorithms

    PubMed Central

    Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.

    1997-01-01

    Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525

  16. CAVITY CONTROL ALGORITHM

    SciTech Connect

    Tomasz Plawski, J. Hovater

    2010-09-01

    A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.

  17. Threshold extended ID3 algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, A. B. Rajesh; Ramesh, C. Phani; Madhusudhan, E.; Padmavathamma, M.

    2012-04-01

    Providing authentication and confidentiality for information exchanged over insecure networks is a significant problem in data mining. In this paper we propose a novel authenticated multiparty ID3 algorithm used to construct a multiparty secret-sharing decision tree for implementation in medical transactions.
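
    As background, the ID3 core that such an extension builds on chooses split attributes by information gain. A minimal sketch of that standard machinery follows (the secret-sharing and threshold aspects of this record are not reproduced here):

      import math
      from collections import Counter

      def entropy(labels):
          n = len(labels)
          return -sum((c / n) * math.log2(c / n)
                      for c in Counter(labels).values())

      def information_gain(rows, labels, attr):
          """Entropy reduction from splitting on attribute index attr."""
          n = len(labels)
          split = {}
          for row, y in zip(rows, labels):
              split.setdefault(row[attr], []).append(y)
          remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
          return entropy(labels) - remainder

      # Toy data: ID3 picks the attribute with maximal gain at each node.
      rows = [("sunny", "hot"), ("sunny", "cool"),
              ("rainy", "hot"), ("rainy", "cool")]
      labels = ["no", "yes", "no", "no"]
      best = max(range(2), key=lambda a: information_gain(rows, labels, a))
      print("root attribute:", best)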

  18. Reactive Collision Avoidance Algorithm

    NASA Technical Reports Server (NTRS)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation that passive algorithms cannot handle. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on

  19. Genetic algorithms and the immune system

    SciTech Connect

    Forrest, S. (Dept. of Computer Science); Perelson, A.S.

    1990-01-01

    Using genetic algorithm techniques we introduce a model to examine the hypothesis that antibody and T cell receptor genes evolved so as to encode the information needed to recognize schemas that characterize common pathogens. We have implemented the algorithm on the Connection Machine for 16,384 64-bit antigens and 512 64-bit antibodies. 8 refs.

  20. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  1. Improved pulse laser ranging algorithm based on high speed sampling

    NASA Astrophysics Data System (ADS)

    Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang

    2016-10-01

    Narrow pulse laser ranging achieves long-range target detection using laser pulses with low beam divergence. Pulse laser ranging is widely used in the military, industrial, civil, engineering and transportation fields. In this paper, an improved narrow pulse laser ranging algorithm based on high speed sampling is studied. Firstly, theoretical simulation models are built and analyzed, covering the laser emission and the pulse laser ranging algorithm. An improved pulse ranging algorithm is developed, which combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After the algorithm simulation, a laser ranging hardware system is set up to implement the improved algorithm. The laser ranging hardware system includes a laser diode, a laser detector and a high sample rate data logging circuit. Subsequently, using the Verilog HDL language, the improved algorithm is implemented in an FPGA chip as a fusion of the matched filter algorithm and the CFD algorithm. Finally, a laser ranging experiment is carried out to test the ranging performance of the improved algorithm against the matched filter algorithm and the CFD algorithm using the laser ranging hardware system. The test results demonstrate that the laser ranging hardware system realizes high speed processing and high speed sampling data transmission. The analysis shows that the improved algorithm achieves 0.3 m ranging precision, meeting the expected performance and consistent with the theoretical simulation.
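
    Of the two fused ingredients, the constant fraction discrimination step is easy to sketch: trigger on the zero crossing of a delayed copy minus an attenuated copy of the pulse, which makes the trigger time independent of pulse amplitude. The toy version below uses a synthetic Gaussian pulse and made-up delay/fraction parameters:

      import numpy as np

      def cfd_time(signal, dt, delay_samples=8, fraction=0.4):
          """Constant fraction discrimination: return the zero-crossing
          time of delayed(signal) - fraction * signal."""
          delayed = np.concatenate((np.zeros(delay_samples),
                                    signal[:-delay_samples]))
          s = delayed - fraction * signal
          # First sign change after the attenuated leading edge.
          idx = np.where((s[:-1] < 0) & (s[1:] >= 0))[0][0]
          # Linear interpolation between the two bracketing samples.
          frac = -s[idx] / (s[idx + 1] - s[idx])
          return (idx + frac) * dt

      # Two pulses of very different amplitude trigger at the same time.
      t = np.arange(0, 200) * 1e-9                       # 1 ns sampling
      pulse = np.exp(-((t - 60e-9) / 10e-9) ** 2)        # Gaussian pulse
      print(cfd_time(pulse, 1e-9), cfd_time(5.0 * pulse, 1e-9))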

  2. Algorithmic synthesis using Python compiler

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej

    2015-09-01

    This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it into VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. This can be achieved by using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time consuming. Using a higher level of abstraction and a high-level synthesis compiler, implementation time can be reduced. The compiler has been implemented using the Python language. This article describes the design, implementation and results of the created tools.

  3. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  4. Algorithms versus architectures for computational chemistry

    NASA Technical Reports Server (NTRS)

    Partridge, H.; Bauschlicher, C. W., Jr.

    1986-01-01

    The algorithms employed are computationally intensive and, as a result, increased performance (both algorithmic and architectural) is required to improve accuracy and to treat larger molecular systems. Several benchmark quantum chemistry codes are examined on a variety of architectures. While these codes are only a small portion of a typical quantum chemistry library, they illustrate many of the computationally intensive kernels and data manipulation requirements of some applications. Furthermore, understanding the performance of the existing algorithms on present and proposed supercomputers serves as a guide for future programs and algorithm development. The algorithms investigated are: (1) a sparse symmetric matrix vector product; (2) a four index integral transformation; and (3) the calculation of diatomic two electron Slater integrals. The vectorization strategies are examined for these algorithms for both the Cyber 205 and Cray XMP. In addition, multiprocessor implementations of the algorithms are examined on the Cray XMP and on the MIT static data flow machine proposed by Dennis.

  5. Smell Detection Agent Based Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Vinod Chandra, S. S.

    2016-09-01

    In this paper, a novel nature-inspired optimization algorithm is employed: the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. The algorithm can be applied under different computational constraints to path-based problems, and its implementation can be treated as a shortest path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. This algorithm is useful for solving NP-hard problems related to path discovery and many practical optimization problems, and it can be extended to other shortest path problems.

  6. High performance FDTD algorithm for GPGPU supercomputers

    NASA Astrophysics Data System (ADS)

    Zakirov, Andrey; Levchenko, Vadim; Perepelkina, Anastasia; Zempo, Yasunari

    2016-10-01

    An implementation of FDTD method for solution of optical and other electrodynamic problems of high computational cost is described. The implementation is based on the LRnLA algorithm DiamondTorre, which is developed specifically for GPGPU hardware. The specifics of the DiamondTorre algorithms for staggered grid (Yee cell) and many-GPU devices are shown. The algorithm is implemented in the software for real physics calculation. The software performance is estimated through algorithms parameters and computer model. The real performance is tested on one GPU device, as well as on the many-GPU cluster. The performance of up to 0.65 × 10^12 cell updates per second for a 3D domain with 0.3 × 10^12 Yee cells in total is achieved.
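
    The staggered-grid (Yee cell) update that DiamondTorre reorganizes for locality is, at its core, a pair of interleaved stencil sweeps. A minimal 1D vacuum sketch follows (illustrative only; the software in this record is 3D and many-GPU):

      import numpy as np

      # 1D FDTD in vacuum, normalized units (c = 1, dx = 1, dt = 0.5 * dx).
      N, STEPS, DT = 400, 800, 0.5
      ez = np.zeros(N)          # E on integer grid points
      hy = np.zeros(N - 1)      # H staggered half a cell to the right

      for n in range(STEPS):
          hy += DT * (ez[1:] - ez[:-1])         # Faraday's law
          ez[1:-1] += DT * (hy[1:] - hy[:-1])   # Ampere's law (PEC ends)
          ez[N // 4] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source

      print("crude energy diagnostic:", (ez**2).sum() + (hy**2).sum())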

  7. An efficient quantum algorithm for spectral estimation

    NASA Astrophysics Data System (ADS)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum–classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
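
    The classical matrix pencil method being accelerated can be stated in a few numpy lines. The sketch below is the noise-free classical version (the quantum machinery of the record is not reproduced): the nonzero eigenvalues of a one-sample-shifted Hankel pencil are the signal poles, from which frequencies and damping factors follow:

      import numpy as np

      # Signal: sum of exponentially damped sinusoids, sampled at dt = 1.
      n = np.arange(64)
      x = (np.exp((-0.02 + 2j * np.pi * 0.11) * n)
           + 0.7 * np.exp((-0.05 + 2j * np.pi * 0.31) * n))

      M, P = 2, 24                       # model order, pencil parameter
      # Hankel data matrices shifted by one sample.
      Y0 = np.array([x[i:i + P] for i in range(len(x) - P)])
      Y1 = np.array([x[i + 1:i + P + 1] for i in range(len(x) - P)])

      # Nonzero eigenvalues of pinv(Y0) @ Y1 are the signal poles z_k.
      vals = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
      poles = vals[np.argsort(-np.abs(vals))][:M]

      freqs = np.angle(poles) / (2 * np.pi)      # cycles per sample
      damps = -np.log(np.abs(poles))             # damping per sample
      print(np.sort(freqs), np.sort(damps))      # ~[0.11, 0.31], ~[0.02, 0.05]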

  8. Automatic ionospheric layers detection: Algorithms analysis

    NASA Astrophysics Data System (ADS)

    Molina, María G.; Zuccheretti, Enrico; Cabrera, Miguel A.; Bianchi, Cesidio; Sciacca, Umberto; Baskaradas, James

    2016-03-01

    Vertical sounding is a widely used technique to obtain ionosphere measurements, such as an estimation of virtual height versus frequency. It is performed by a high-frequency radar for geophysical applications called an "ionospheric sounder" (or "ionosonde"). Radar detection depends mainly on target characteristics. While the behavior of several targets and the corresponding echo detection algorithms have been studied, a survey to identify a suitable algorithm for the ionospheric sounder still has to be carried out. This paper is focused on automatic echo detection algorithms implemented specifically for an ionospheric sounder; target-specific characteristics were studied as well. Adaptive threshold detection algorithms are proposed, compared to the currently implemented algorithm, and tested using actual data obtained from the Advanced Ionospheric Sounder (AIS-INGV) at the Rome Ionospheric Observatory. Different cases of study have been selected according to typical ionospheric and detection conditions.

  9. One cutting plane algorithm using auxiliary functions

    NASA Astrophysics Data System (ADS)

    Zabotin, I. Ya; Kazaeva, K. E.

    2016-11-01

    We propose an algorithm for solving a convex programming problem from the class of cutting methods. The algorithm is characterized by the construction of approximations using auxiliary functions instead of the objective function. Each auxiliary function is based on the exterior penalty function. In the proposed algorithm, the admissible set and the epigraph of each auxiliary function are embedded into polyhedral sets; consequently, the iteration points are found by solving linear programming problems. We discuss the implementation of the algorithm and prove its convergence.
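
    The authors cut against auxiliary penalty functions rather than the objective itself; the general flavor of a cutting-plane scheme is easiest to see in Kelley's classical method, where each iteration adds a linear under-estimator of the convex objective and the next iterate solves a small LP. A sketch (not the paper's algorithm; the one-dimensional problem and bounds are illustrative):

        from scipy.optimize import linprog

        f = lambda x: (x - 1.0) ** 2            # convex objective on [-2, 3]
        g = lambda x: 2.0 * (x - 1.0)           # its (sub)gradient

        x, cuts = -2.0, []                      # each cut: f(xk) + g(xk) * (x - xk) <= t
        for _ in range(25):
            cuts.append((g(x), f(x) - g(x) * x))          # (slope, intercept)
            # LP in variables (x, t): minimize t  s.t.  slope * x - t <= -intercept
            A = [[s, -1.0] for s, b in cuts]
            b = [-b for s, b in cuts]
            res = linprog([0.0, 1.0], A_ub=A, b_ub=b,
                          bounds=[(-2.0, 3.0), (-100.0, None)])
            x = res.x[0]
        print(x, f(x))                          # x approaches the minimizer 1.0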

  10. Monte Carlo algorithm for free energy calculation.

    PubMed

    Bi, Sheng; Tong, Ning-Hua

    2015-07-01

    We propose a Monte Carlo algorithm for the free energy calculation based on configuration space sampling. An upward or downward temperature scan can be used to produce F(T). We implement this algorithm for the Ising model on a square lattice and triangular lattice. Comparison with the exact free energy shows an excellent agreement. We analyze the properties of this algorithm and compare it with the Wang-Landau algorithm, which samples in energy space. This method is applicable to general classical statistical models. The possibility of extending it to quantum systems is discussed.
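
    The free-energy estimator itself is the paper's contribution and is not reproduced here, but the configuration-space sampling it builds on is standard Metropolis Monte Carlo for the Ising model. A minimal square-lattice sketch (lattice size, temperature, and sweep count are illustrative):

        import numpy as np

        rng = np.random.default_rng(1)
        L, T, sweeps = 32, 2.3, 400              # lattice size, temperature, MC sweeps
        s = rng.choice([-1, 1], size=(L, L))     # random initial spin configuration

        for _ in range(sweeps * L * L):
            i, j = rng.integers(L, size=2)
            nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
            dE = 2.0 * s[i, j] * nb              # energy change if s[i, j] flips (J = 1)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] = -s[i, j]               # Metropolis acceptance rule

        print(abs(s.mean()))   # |magnetization|; fluctuates strongly near T_c ~ 2.269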

  11. A parallel algorithm for global routing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall J.; Banerjee, Prithviraj

    1990-01-01

    A Parallel Hierarchical algorithm for Global Routing (PHIGURE) is presented. The router is based on the work of Burstein and Pelavin, but has many extensions for general global routing and parallel execution. Main features of the algorithm include structured hierarchical decomposition into separate independent tasks which are suitable for parallel execution and adaptive simplex solution for adding feedthroughs and adjusting channel heights for row-based layout. Alternative decomposition methods and the various levels of parallelism available in the algorithm are examined closely. The algorithm is described and results are presented for a shared-memory multiprocessor implementation.

  12. Stereoscopic depth perception for robot vision: algorithms and architectures

    SciTech Connect

    Safranek, R.J.; Kak, A.C.

    1983-01-01

    The implementation of depth perception algorithms for computer vision is considered. In automated manufacturing, depth information is vital for tasks such as path planning and 3-d scene analysis. The presentation begins with a survey of computer algorithms for stereoscopic depth perception. The emphasis is on the Marr-Poggio paradigm of human stereo vision and its computer implementation. In addition, a stereo matching algorithm based on the relaxation labelling technique is examined. A computer architecture designed to efficiently implement stereo matching algorithms, an MIMD array interfaced to a global memory, is presented. 9 references.

  13. Modeling and Implementing a Digitally Embedded Maximum Power Point Tracking Algorithm and a Series-Loaded Resonant DC-DC Converter to Integrate a Photovoltaic Array with a Micro-Grid

    DTIC Science & Technology

    2014-09-01

    paralleled, improving the overall efficiency of a PV system by eliminating a central converter and instituting the concept of multiple micro-converters... The converter control system was simulated, designed, and implemented in the laboratory. The experimental measurements show that the SLR converter presented is...

  14. Implementation Reviews

    NASA Technical Reports Server (NTRS)

    Murphy, Gerald

    2003-01-01

    The point of the implementation review is to prevent problems from occurring later by trying to get our arms around the planning from the start. Implementation reviews set the tone for management of the project. They establish a teaming relationship (if they are run properly), and they level the playing field instead of setting up turf wars.

  15. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  16. Next Generation Suspension Dynamics Algorithms

    SciTech Connect

    Schunk, Peter Randall; Higdon, Jonathon; Chen, Steven

    2014-12-01

    This research project has the objective to extend the range of application, improve the efficiency and conduct simulations with the Fast Lubrication Dynamics (FLD) algorithm for concentrated particle suspensions in a Newtonian fluid solvent. The research involves a combination of mathematical development, new computational algorithms, and application to processing flows of relevance in materials processing. The mathematical developments clarify the underlying theory, facilitate verification against classic monographs in the field and provide the framework for a novel parallel implementation optimized for an OpenMP shared memory environment. The project considered application to consolidation flows of major interest in high throughput materials processing and identified hitherto unforeseen challenges in the use of FLD in these applications. Extensions to the algorithm have been developed to improve its accuracy in these applications.

  17. Algorithmic deformation of matrix factorisations

    NASA Astrophysics Data System (ADS)

    Carqueville, Nils; Dowdy, Laura; Recknagel, Andreas

    2012-04-01

    Branes and defects in topological Landau-Ginzburg models are described by matrix factorisations. We revisit the problem of deforming them and discuss various deformation methods as well as their relations. We have implemented these algorithms and apply them to several examples. Apart from explicit results in concrete cases, this leads to a novel way to generate new matrix factorisations via nilpotent substitutions, and to criteria whether boundary obstructions can be lifted by bulk deformations.

  18. Efficient Kriging Algorithms

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess

    2011-01-01

    More efficient versions of an interpolation method called kriging have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best unbiased linear estimator and is suitable for interpolation of scattered data points. It has long been used in the geostatistics and mining communities, and is now being researched for use in the image fusion of remotely sensed data, which allows a combination of data from various locations to be used to fill in missing data from any single location. To arrive at the faster algorithms, a sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest-neighbor searching techniques were used. These implementations apply when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
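
    At its core, each kriging prediction costs one solve against the covariance matrix, which is exactly what the sparse solver, tapering, and FMM variants attack. A plain simple-kriging sketch in numpy (zero-mean assumption, Gaussian covariance; all parameters illustrative):

        import numpy as np

        rng = np.random.default_rng(2)
        X = rng.uniform(0, 10, size=(50, 2))               # scattered sample locations
        y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)    # observed values (near zero mean)

        def cov(a, b, sill=1.0, corr_len=2.0):
            d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
            return sill * np.exp(-(d / corr_len) ** 2)     # Gaussian covariance model

        K = cov(X, X) + 1e-8 * np.eye(len(X))              # jitter for conditioning
        xq = np.array([[5.0, 5.0]])                        # query location
        w = np.linalg.solve(K, cov(X, xq))                 # kriging weights: one dense solve
        print(float(w[:, 0] @ y), np.sin(5.0))             # prediction vs. ground truth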

  19. Recursive Implementations of the Consider Filter

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato; DSouza, Chris

    2012-01-01

    One method to account for parameters errors in the Kalman filter is to consider their effect in the so-called Schmidt-Kalman filter. This work addresses issues that arise when implementing a consider Kalman filter as a real-time, recursive algorithm. A favorite implementation of the Kalman filter as an onboard navigation subsystem is the UDU formulation. A new way to implement a UDU consider filter is proposed. The non-optimality of the recursive consider filter is also analyzed, and a modified algorithm is proposed to overcome this limitation.

  20. Abstract models for the synthesis of optimization algorithms.

    NASA Technical Reports Server (NTRS)

    Meyer, G. G. L.; Polak, E.

    1971-01-01

    Systematic approach to the problem of synthesis of optimization algorithms. Abstract models for algorithms are developed which guide the inventive process toward 'conceptual' algorithms which may consist of operations that are inadmissible in a practical method. Once the abstract models are established, a set of methods for converting 'conceptual' algorithms falling into the class defined by the abstract models into 'implementable' iterative procedures is presented.

  1. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.

  2. FPGA-based Klystron linearization implementations in scope of ILC

    SciTech Connect

    Omet, M.; Michizono, S.; Varghese, P.; Schlarb, H.; Branlard, J.; Cichalewski, W.

    2015-01-23

    We report the development and implementation of four FPGA-based predistortion-type klystron linearization algorithms. Klystron linearization is essential for the realization of ILC, since it is required to operate the klystrons 7% in power below their saturation. The work presented was performed in international collaborations at the Fermi National Accelerator Laboratory (FNAL), USA and the Deutsches Elektronen Synchrotron (DESY), Germany. With the newly developed algorithms, the generation of correction factors on the FPGA was improved compared to past algorithms, avoiding quantization and decreasing memory requirements. At FNAL, three algorithms were tested at the Advanced Superconducting Test Accelerator (ASTA), demonstrating a successful implementation for one algorithm and a proof of principle for two algorithms. Furthermore, the functionality of the algorithm implemented at DESY was demonstrated successfully in a simulation.

  3. FPGA-based Klystron linearization implementations in scope of ILC

    DOE PAGES

    Omet, M.; Michizono, S.; Matsumoto, T.; ...

    2015-01-23

    We report the development and implementation of four FPGA-based predistortion-type klystron linearization algorithms. Klystron linearization is essential for the realization of ILC, since it is required to operate the klystrons 7% in power below their saturation. The work presented was performed in international collaborations at the Fermi National Accelerator Laboratory (FNAL), USA and the Deutsches Elektronen Synchrotron (DESY), Germany. With the newly developed algorithms, the generation of correction factors on the FPGA was improved compared to past algorithms, avoiding quantization and decreasing memory requirements. At FNAL, three algorithms were tested at the Advanced Superconducting Test Accelerator (ASTA), demonstrating a successful implementation for one algorithm and a proof of principle for two algorithms. Furthermore, the functionality of the algorithm implemented at DESY was demonstrated successfully in a simulation.

  4. Applications and accuracy of the parallel diagonal dominant algorithm

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1993-01-01

    The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems, the symmetric, and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and the algorithm is a good candidate for the emerging massively parallel machines.
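
    For orientation, the sequential baseline that PDD parallelizes is the classical Thomas algorithm: O(n) forward elimination followed by back substitution. A numpy sketch on an assumed diagonally dominant example system:

        import numpy as np

        def thomas(a, b, c, d):
            """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal."""
            n = len(b)
            cp, dp, x = np.zeros(n), np.zeros(n), np.zeros(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                    # forward elimination
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m                     # cp[-1] is never used below
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):           # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        n = 8
        a, b, c = np.full(n, -1.0), np.full(n, 4.0), np.full(n, -1.0)  # diagonally dominant
        d = np.arange(1.0, n + 1)
        x = thomas(a, b, c, d)
        A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
        print(np.allclose(A @ x, d))   # True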

  5. Algorithms For Segmentation Of Complex-Amplitude SAR Data

    NASA Technical Reports Server (NTRS)

    Rignot, Eric J. M.; Chellappa, Ramalingam

    1993-01-01

    Several algorithms implement an improved method of segmenting highly speckled, high-resolution, complex-amplitude synthetic-aperture-radar (SAR) digitized images into regions within which the backscattering characteristics are similar or homogeneous from place to place. The method provides an approximate, deterministic solution by two alternative algorithms that almost always converge to local minima: the Iterative Conditional Modes (ICM) algorithm, which locally maximizes the posterior probability density of region labels, and the Maximum Posterior Marginal (MPM) algorithm, which maximizes the posterior marginal density of region labels at each pixel location. The ICM algorithm optimizes reconstruction of the underlying scene; the MPM algorithm minimizes the expected number of misclassified pixels and is possibly better for remote sensing of natural scenes.

  6. An efficient parallel termination detection algorithm

    SciTech Connect

    Baker, A. H.; Crivelli, S.; Jessup, E. R.

    2004-05-27

    Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traverses as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.

  7. Effects of visualization on algorithm comprehension

    NASA Astrophysics Data System (ADS)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.

  8. Algorithmic Mechanism Design of Evolutionary Computation

    PubMed Central

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to achieve the desired, preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspects by taking this perspective. This paper is the first step towards achieving this objective, implementing a strategy equilibrium solution (such as Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777

  9. Algorithm Animation with Galant.

    PubMed

    Stallmann, Matthias F

    2017-01-01

    Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.

  10. SDR Input Power Estimation Algorithms

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and the SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed: a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range; a linear adaptive filter, which uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and a neural-network estimator, designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  11. Accelerated Modular Multiplication Algorithm of Large Word Length Numbers with a Fixed Module

    NASA Astrophysics Data System (ADS)

    Bardis, Nikolaos; Drigas, Athanasios; Markovskyy, Alexander; Vrettaros, John

    A new algorithm is proposed for the software implementation of modular multiplication, which uses pre-computations with a constant modulus. The developed modular multiplication algorithm provides high performance in comparison with already known algorithms and is oriented toward variable values of the modulus, especially for software implementations on microcontrollers and smart cards with a small number of bits.
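
    The idea of investing in a one-time precomputation for a fixed modulus so that every later modular multiplication avoids division is the same idea behind classical Barrett reduction, sketched below in Python (this is the textbook scheme, not necessarily the paper's exact algorithm):

        def barrett_setup(m):
            """One-time precomputation for a fixed modulus m."""
            k = m.bit_length()
            return k, (1 << (2 * k)) // m            # mu = floor(4**k / m)

        def barrett_mulmod(a, b, m, k, mu):
            """(a * b) % m using only multiplications, shifts, and subtractions."""
            x = a * b                                # x < m**2 since a, b < m
            q = (x * mu) >> (2 * k)                  # estimate of x // m, low by at most 2
            r = x - q * m
            while r >= m:                            # at most two corrections
                r -= m
            return r

        m = (1 << 127) - 1                           # example fixed modulus
        k, mu = barrett_setup(m)
        a, b = 2**126 + 12345, 2**125 + 6789
        assert barrett_mulmod(a, b, m, k, mu) == (a * b) % m
        print("ok")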

  12. Some Improvements on Signed Window Algorithms for Scalar Multiplications in Elliptic Curve Cryptosystems

    NASA Technical Reports Server (NTRS)

    Vo, San C.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Scalar multiplication is an essential operation in elliptic curve cryptosystems because its implementation determines the speed and the memory storage requirements. This paper discusses some improvements on two popular signed window algorithms for implementing scalar multiplications of an elliptic curve point: Morain-Olivos's algorithm and Koyama-Tsuruoka's algorithm.
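
    Signed window methods generalize the non-adjacent form (NAF), which recodes the scalar with digits {-1, 0, +1} so that roughly one third of the digits are nonzero, cutting the number of group additions. A minimal recoding sketch (the check against plain binary is illustrative):

        def naf(k):
            """Non-adjacent form of k, least-significant digit first."""
            digits = []
            while k > 0:
                if k & 1:
                    d = 2 - (k % 4)    # +1 or -1, chosen so the next bit becomes 0
                    k -= d
                else:
                    d = 0
                digits.append(d)
                k >>= 1
            return digits

        k = 2**63 + 987654321
        d = naf(k)
        assert sum(di * (1 << i) for i, di in enumerate(d)) == k        # recoding is exact
        assert all(not (d[i] and d[i + 1]) for i in range(len(d) - 1))  # non-adjacency
        print(sum(1 for x in d if x), "nonzero digits out of", len(d))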

  13. A Fast parallel tridiagonal algorithm for a class of CFD applications

    NASA Technical Reports Server (NTRS)

    Moitra, Stuti; Sun, Xian-He

    1996-01-01

    The parallel diagonal dominant (PDD) algorithm is an efficient tridiagonal solver. This paper presents for study a variation of the PDD algorithm, the reduced PDD algorithm. The new algorithm maintains the minimum communication provided by the PDD algorithm, but has a reduced operation count. The PDD algorithm also has a smaller operation count than the conventional sequential algorithm for many applications. Accuracy analysis is provided for the reduced PDD algorithm for symmetric Toeplitz tridiagonal (STT) systems. Implementation results on Langley's Intel Paragon and IBM SP2 show that both the PDD and reduced PDD algorithms are efficient and scalable.

  14. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

    SciTech Connect

    Masalma, Yahya; Jiao, Yu

    2010-10-01

    We implemented a scalable parallel quasi-Monte Carlo numerical high-dimensional integration for tera-scale data points. The implemented algorithm uses Sobol's quasi-sequences to generate random samples. Sobol's sequence was used to avoid clustering effects in the generated random samples and to produce low-discrepancy random samples which cover the entire integration domain. The performance of the algorithm was tested, and the obtained results prove the scalability and accuracy of the implemented algorithm. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using the hybrid MPI and OpenMP programming model to improve the performance of the algorithm; if the mixed model is used, attention should be paid to scalability and accuracy.
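
    A present-day, library-based equivalent of the sampling step is available in SciPy's qmc module. A minimal sketch integrating a smooth function over the unit cube (the integrand and sample sizes are illustrative; requires scipy >= 1.7):

        import numpy as np
        from scipy.stats import qmc

        dim = 6
        f = lambda x: np.prod(1.0 + 0.1 * (x - 0.5), axis=1)   # exact integral = 1.0

        sampler = qmc.Sobol(d=dim, scramble=True, seed=3)
        pts = sampler.random_base2(m=14)   # 2**14 low-discrepancy points in [0, 1)^dim
        print(abs(f(pts).mean() - 1.0))    # QMC error, well below plain MC at this n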

  15. Conjugate gradient algorithms using multiple recursions

    SciTech Connect

    Barth, T.; Manteuffel, T.

    1996-12-31

    Much is already known about when a conjugate gradient method can be implemented with short recursions for the direction vectors. The work done in 1984 by Faber and Manteuffel gave necessary and sufficient conditions on the iteration matrix A, in order for a conjugate gradient method to be implemented with a single recursion of a certain form. However, this form does not take into account all possible recursions. This became evident when Jagels and Reichel used an algorithm of Gragg for unitary matrices to demonstrate that the class of matrices for which a practical conjugate gradient algorithm exists can be extended to include unitary and shifted unitary matrices. The implementation uses short double recursions for the direction vectors. This motivates the study of multiple recursion algorithms.

  16. Adaptive Cuckoo Search Algorithm for Unconstrained Optimization

    PubMed Central

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of adaptive step size adjustment strategy, and thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA, in all the cases. PMID:25298971

  17. Adaptive cuckoo search algorithm for unconstrained optimization.

    PubMed

    Ong, Pauline

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of adaptive step size adjustment strategy, and thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA, in all the cases.

  18. Selection of views to materialize using simulated annealing algorithms

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Liu, Chi; Wang, Hongfeng; Liu, Daixin

    2002-03-01

    A data warehouse contains many materialized views over the data provided by distributed heterogeneous databases for the purpose of efficiently implementing decision-support or OLAP queries. It is important to select the right set of views to materialize that answers a given set of queries, the goal being the minimization of the combined query evaluation and view maintenance costs. In this paper, we design algorithms for selecting a set of views to be materialized so that the sum of the cost of processing a set of queries and the cost of maintaining the materialized views is minimized. We develop an approach using simulated annealing algorithms: first we explore simulated annealing to optimize the selection of materialized views, then we use experiments to demonstrate the approach. We implemented our algorithms, and a performance study shows that the proposed algorithm gives an optimal solution.
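
    The view-selection specifics depend on the paper's cost model, but the annealing skeleton is generic: propose a small change to the materialized set, always accept improvements, and accept degradations with a temperature-controlled probability. A Python sketch with a stand-in cost function (the cost model and all constants are hypothetical):

        import math
        import random

        random.seed(4)
        n_views = 12
        weights = [random.uniform(1, 10) for _ in range(n_views)]

        def cost(sel):
            """Stand-in for combined query-evaluation + view-maintenance cost."""
            query = sum(w for w, s in zip(weights, sel) if not s)    # unmaterialized slows queries
            maint = 0.6 * sum(w for w, s in zip(weights, sel) if s)  # materialized needs upkeep
            return query + maint

        sel = [False] * n_views
        best, best_cost = sel[:], cost(sel)
        T = 10.0
        while T > 0.01:
            cand = sel[:]
            cand[random.randrange(n_views)] ^= True       # flip one view in or out
            delta = cost(cand) - cost(sel)
            if delta < 0 or random.random() < math.exp(-delta / T):
                sel = cand                                # accept the move
                if cost(sel) < best_cost:
                    best, best_cost = sel[:], cost(sel)
            T *= 0.99                                     # geometric cooling schedule
        print(best_cost, best)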

  19. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.

  20. Implementation of multigrid methods for solving Navier-Stokes equations on a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Naik, Vijay K.; Taasan, Shlomo

    1987-01-01

    Presented are schemes for implementing multigrid algorithms on message based MIMD multiprocessor systems. To address the various issues involved, a nontrivial problem of solving the 2-D incompressible Navier-Stokes equations is considered as the model problem. Three different multigrid algorithms are considered. Results from implementing these algorithms on an Intel iPSC are presented.

  1. Applying a Genetic Algorithm to Reconfigurable Hardware

    NASA Technical Reports Server (NTRS)

    Wells, B. Earl; Weir, John; Trevino, Luis; Patrick, Clint; Steincamp, Jim

    2004-01-01

    This paper investigates the feasibility of applying genetic algorithms to solve optimization problems that are implemented entirely in reconfigurable hardware. The paper highlights the performance/design space trade-offs that must be understood to effectively implement a standard genetic algorithm within a modern Field Programmable Gate Array (FPGA) reconfigurable hardware environment and presents a case study where this stochastic search technique is applied to standard test-case problems taken from the technical literature. In this research, the targeted FPGA-based platform and high-level design environment was the Starbridge Hypercomputing platform, which incorporates multiple Xilinx Virtex II FPGAs and the Viva graphical hardware description language.

  2. Overview of an Algorithm Plugin Package (APP)

    NASA Astrophysics Data System (ADS)

    Linda, M.; Tilmes, C.; Fleig, A. J.

    2004-12-01

    Science software that runs operationally is fundamentally different than software that runs on a scientist's desktop. There are complexities in hosting software for automated production that are necessary and significant. Identifying common aspects of these complexities can simplify algorithm integration. We use NASA's MODIS and OMI data production systems as examples. An Algorithm Plugin Package (APP) is science software that is combined with algorithm-unique elements that permit the algorithm to interface with, and function within, the framework of a data processing system. The framework runs algorithms operationally against large quantities of data. The extra algorithm-unique items are constrained by the design of the data processing system. APPs often include infrastructure that is vastly similar. When the common elements in APPs are identified and abstracted, the cost of APP development, testing, and maintenance will be reduced. This paper is an overview of the extra algorithm-unique pieces that are shared between MODAPS and OMIDAPS APPs. Our exploration of APP structure will help builders of other production systems identify their common elements and reduce algorithm integration costs. Our goal is to complete the development of a library of functions and a menu of implementation choices that reflect common needs of APPs. The library and menu will reduce the time and energy required for science developers to integrate algorithms into production systems.

  3. Target classification algorithm based on feature aided tracking

    NASA Astrophysics Data System (ADS)

    Zhan, Ronghui; Zhang, Jun

    2013-03-01

    An effective target classification algorithm based on feature aided tracking (FAT) is proposed, using the length of the target (target extent) as the classification information. To implement the algorithm, the Rao-Blackwellised unscented Kalman filter (RBUKF) is used to jointly estimate the kinematic state and target extent; meanwhile, the joint probabilistic data association (JPDA) algorithm is exploited to implement multi-target data association aided by target down-range extent. Simulation results under different conditions show the presented algorithm is both accurate and robust, and it is suitable for tracking and classifying closely spaced targets in dense clutter.

  4. Algorithmic Perspectives on Problem Formulations in MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    This work is concerned with an approach to formulating the multidisciplinary optimization (MDO) problem that reflects an algorithmic perspective on MDO problem solution. The algorithmic perspective focuses on formulating the problem in light of the abilities and inabilities of optimization algorithms, so that the resulting nonlinear programming problem can be solved reliably and efficiently by conventional optimization techniques. We propose a modular approach to formulating MDO problems that takes advantage of the problem structure, maximizes the autonomy of implementation, and allows for multiple easily interchangeable problem statements to be used depending on the available resources and the characteristics of the application problem.

  5. Complexity of the Quantum Adiabatic Algorithm

    NASA Technical Reports Server (NTRS)

    Hen, Itay

    2013-01-01

    The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms.

  6. A Parallel Rendering Algorithm for MIMD Architectures

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.; Orloff, Tobias

    1991-01-01

    Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.

  7. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  8. Linear-scaling and parallelisable algorithms for stochastic quantum chemistry

    NASA Astrophysics Data System (ADS)

    Booth, George H.; Smart, Simon D.; Alavi, Ali

    2014-07-01

    For many decades, quantum chemical method development has been dominated by algorithms which involve increasingly complex series of tensor contractions over one-electron orbital spaces. Procedures for their derivation and implementation have evolved to require the minimum amount of logic and rely heavily on computationally efficient library-based matrix algebra and optimised paging schemes. In this regard, the recent development of exact stochastic quantum chemical algorithms to reduce computational scaling and memory overhead requires a contrasting algorithmic philosophy, but one which when implemented efficiently can achieve higher accuracy/cost ratios with small random errors. Additionally, they can exploit the continuing trend for massive parallelisation which hinders the progress of deterministic high-level quantum chemical algorithms. In the Quantum Monte Carlo community, stochastic algorithms are ubiquitous but the discrete Fock space of quantum chemical methods is often unfamiliar, and the methods introduce new concepts required for algorithmic efficiency. In this paper, we explore these concepts and detail an algorithm used for Full Configuration Interaction Quantum Monte Carlo (FCIQMC), which is implemented and available in MOLPRO and as a standalone code, and is designed for high-level parallelism and linear-scaling with walker number. Many of the algorithms are also in use in, or can be transferred to, other stochastic quantum chemical methods and implementations. We apply these algorithms to the strongly correlated chromium dimer to demonstrate their efficiency and parallelism.

  9. Implementation of Multispectral Image Classification on a Remote Adaptive Computer

    NASA Technical Reports Server (NTRS)

    Figueiredo, Marco A.; Gloster, Clay S.; Stephens, Mark; Graves, Corey A.; Nakkar, Mouna

    1999-01-01

    As the demand for higher performance computers for the processing of remote sensing science algorithms increases, the need to investigate new computing paradigms is justified. Field Programmable Gate Arrays enable the implementation of algorithms at the hardware gate level, leading to orders of magnitude performance increase over microprocessor-based systems. The automatic classification of spaceborne multispectral images is an example of a computation-intensive application that can benefit from implementation on an FPGA-based custom computing machine (adaptive or reconfigurable computer). A probabilistic neural network is used here to classify pixels of a multispectral LANDSAT-2 image. The implementation described utilizes Java client/server application programs to access the adaptive computer from a remote site. Results verify that a remote hardware version of the algorithm (implemented on an adaptive computer) is significantly faster than a local software version of the same algorithm implemented on a typical general-purpose computer.

  10. Uses of clinical algorithms.

    PubMed

    Margolis, C Z

    1983-02-04

    The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five clinical education and patient care uses of algorithms are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared with decision analysis as to their clinical usefulness. Three objections to clinical algorithms are answered, including the one that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored accurately, and understood better.

  11. Petri nets SM-cover-based on heuristic coloring algorithm

    NASA Astrophysics Data System (ADS)

    Tkacz, Jacek; Doligalski, Michał

    2015-09-01

    In the paper, a heuristic coloring algorithm for interpreted Petri nets is presented. Coloring is used to determine the State Machine (SM) subnets. The present algorithm reduces the Petri net in order to reduce the computational complexity and finds one of its possible State Machine covers. The proposed algorithm uses elements of the interpretation of Petri nets. The obtained result may not be the best, but it is sufficient for use in rapid prototyping of logic controllers. The found SM-cover will also be used in the development of algorithms for decomposition and for modular synthesis and implementation of parallel logic controllers. The correctness of the developed heuristic algorithm was verified using the Gentzen formal reasoning system.

  12. An extension of the QZ algorithm for solving the generalized matrix eigenvalue problem

    NASA Technical Reports Server (NTRS)

    Ward, R. C.

    1973-01-01

    This algorithm is an extension of Moler and Stewart's QZ algorithm with some added features for saving time and operations. Also, some additional properties of the QR algorithm which were not practical to implement in the QZ algorithm can be generalized with the combination shift QZ algorithm. Numerous test cases are presented to give practical application tests for the algorithm. Based on these results, this algorithm should be preferred over existing algorithms which attempt to solve the class of generalized eigenproblems where both matrices are singular or nearly singular.

  13. Digital signal processing algorithms for automatic voice recognition

    NASA Technical Reports Server (NTRS)

    Botros, Nazeih M.

    1987-01-01

    The digital signal analysis algorithms currently implemented in automatic voice recognition are investigated. Automatic voice recognition means the capability of a computer to recognize and interact with verbal commands. The focus is on the digital signal analysis, rather than the linguistic analysis, of the speech signal. Several digital signal processing algorithms are available for voice recognition, including Linear Predictive Coding (LPC), short-time Fourier analysis, and cepstrum analysis. Among these algorithms, LPC is the most widely used; it has a short execution time and does not require large memory storage. However, it has several limitations due to the assumptions used to develop it. The other two algorithms are frequency-domain algorithms with few assumptions, but they are not widely implemented or investigated. With the recent advances in digital technology, namely signal processors, these two frequency-domain algorithms may be investigated in order to implement them in voice recognition. This research is concerned with real-time, microprocessor-based recognition algorithms.
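
    The LPC step singled out above reduces to solving the autocorrelation normal equations, classically done with the Levinson-Durbin recursion. A numpy sketch on a synthetic two-tone signal (order, signal, and noise level are illustrative):

        import numpy as np

        def lpc(signal, order):
            """LPC coefficients via autocorrelation + Levinson-Durbin recursion."""
            r = np.correlate(signal, signal, "full")[len(signal) - 1:]
            a, e = np.zeros(order + 1), r[0]
            a[0] = 1.0
            for i in range(1, order + 1):
                acc = r[i] + a[1:i] @ r[1:i][::-1]   # forward prediction error term
                k = -acc / e                         # reflection coefficient
                a[1:i + 1] += k * a[i - 1::-1][:i]   # a[j] += k * a[i - j]
                e *= 1.0 - k * k                     # updated prediction error power
            return a

        fs = 8000
        t = np.arange(800) / fs
        x = (np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1800 * t)
             + 0.01 * np.random.default_rng(6).normal(size=t.size))
        roots = np.roots(lpc(x, order=8))
        freqs = np.angle(roots[np.imag(roots) > 0]) * fs / (2 * np.pi)
        print(np.sort(freqs))   # includes values near 700 and 1800 Hz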

  14. Ouroboros: A Tool for Building Generic, Hybrid, Divide& Conquer Algorithms

    SciTech Connect

    Johnson, J R; Foster, I

    2003-05-01

    A hybrid divide and conquer algorithm is one that switches from a divide and conquer to an iterative strategy at a specified problem size. Such algorithms can provide significant performance improvements relative to alternatives that use a single strategy. However, the identification of the optimal problem size at which to switch for a particular algorithm and platform can be challenging. We describe an automated approach to this problem that first conducts experiments to explore the performance space on a particular platform and then uses the resulting performance data to construct an optimal hybrid algorithm on that platform. We implement this technique in a tool, ''Ouroboros'', that automatically constructs a high-performance hybrid algorithm from a set of registered algorithms. We present results obtained with this tool for several classical divide and conquer algorithms, including matrix multiply and sorting, and report speedups of up to six times achieved over non-hybrid algorithms.
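
    The premise is easy to state: a recursive strategy wins asymptotically, an iterative one wins below some size, and the crossover point is platform-dependent. A hand-rolled instance of the pattern in Python (the cutoff constant is a placeholder that a tool like Ouroboros would tune empirically):

        import random

        CUTOFF = 32   # platform-dependent crossover; found by experiment in practice

        def insertion_sort(a, lo, hi):
            """Iterative strategy: fastest on small slices."""
            for i in range(lo + 1, hi):
                x, j = a[i], i - 1
                while j >= lo and a[j] > x:
                    a[j + 1] = a[j]
                    j -= 1
                a[j + 1] = x

        def hybrid_mergesort(a, lo=0, hi=None):
            """Divide & conquer that switches strategy below CUTOFF."""
            if hi is None:
                hi = len(a)
            if hi - lo <= CUTOFF:
                insertion_sort(a, lo, hi)            # switch to the iterative strategy
                return
            mid = (lo + hi) // 2
            hybrid_mergesort(a, lo, mid)
            hybrid_mergesort(a, mid, hi)
            left, right = a[lo:mid], a[mid:hi]       # standard linear merge
            i = j = 0
            for k in range(lo, hi):
                if j >= len(right) or (i < len(left) and left[i] <= right[j]):
                    a[k] = left[i]; i += 1
                else:
                    a[k] = right[j]; j += 1

        data = [random.random() for _ in range(10000)]
        hybrid_mergesort(data)
        print(data == sorted(data))   # True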

  15. QPSO-based adaptive DNA computing algorithm.

    PubMed

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing that is a new computation model based on DNA molecules for information storage has been increasingly used for optimization and data analysis in recent years. However, DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for improvement of DNA computing is proposed. This new approach aims to perform DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). Some contributions provided by the proposed QPSO based on adaptive DNA computing algorithm are as follows: (1) parameters of population size, crossover rate, maximum number of operations, enzyme and virus mutation rate, and fitness function of DNA computing algorithm are simultaneously tuned for adaptive process, (2) adaptive algorithm is performed using QPSO algorithm for goal-driven progress, faster operation, and flexibility in data, and (3) numerical realization of DNA computing algorithm with proposed approach is implemented in system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach with comparative results. Experimental results obtained with Matlab and FPGA demonstrate ability to provide effective optimization, considerable convergence speed, and high accuracy according to DNA computing algorithm.

  16. A novel algorithm for Bluetooth ECG.

    PubMed

    Pandya, Utpal T; Desai, Uday B

    2012-11-01

    In wireless transmission of ECG, data latency becomes significant when battery power level and data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless transmission issues and other ECG measurement noises. Here, a novel algorithm, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes errors in the bit pattern of received data if they occurred in wireless transmission and then removes baseline drift. Afterward, a modified moving average is implemented except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for acquisition of the signals. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and in different patient positions. This module transmits ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and Savitzky-Golay algorithms visually as well as numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing the noise, and its use can be extended to any parameters where peaks are important for diagnostic purposes.

  17. A Simplified Pattern Match Algorithm for Star Identification

    NASA Technical Reports Server (NTRS)

    Lee, Michael H.

    1996-01-01

    A true pattern matching star algorithm similar in concept to the Van Bezooijen algorithm is implemented using an iterative approach. This approach allows for a more compact and simple implementation which can be easily adapted to be either an all-sky, no a priori algorithm or a follow-on to a direct match algorithm to distinguish between ambiguous matches. Some simple analysis is shown to indicate the likelihood of mis-identifications. The performance of the algorithm for the all-sky, no a priori situation is detailed assuming the SKYMAP star catalog describes the true sky. The impact of errors and omissions in the SKYMAP catalog on performance is investigated. In addition, differing levels of noise in the star observations are assumed and results shown. The implications for possible implementation on board spacecraft are discussed.

  18. Fast box-counting algorithm on GPU.

    PubMed

    Jiménez, J; Ruiz de Miras, J

    2012-12-01

    The box-counting algorithm is one of the most widely used methods for calculating the fractal dimension (FD). The FD has many image analysis applications in the biomedical field, where it has been used extensively to characterize a wide range of medical signals. However, computing the FD for large images, especially in 3D, is a time consuming process. In this paper we present a fast parallel version of the box-counting algorithm, which has been coded in CUDA for execution on the Graphic Processing Unit (GPU). The optimized GPU implementation achieved an average speedup of 28 times (28×) compared to a mono-threaded CPU implementation, and an average speedup of 7 times (7×) compared to a multi-threaded CPU implementation. The performance of our improved box-counting algorithm has been tested with 3D models with different complexity, features and sizes. The validity and accuracy of the algorithm has been confirmed using models with well-known FD values. As a case study, a 3D FD analysis of several brain tissues has been performed using our GPU box-counting algorithm.
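
    The serial computation being accelerated is simple to state: cover the image with boxes of side s, count occupied boxes N(s), and fit the slope of log N(s) against log(1/s). A 2D numpy sketch on a shape whose FD is known (a filled square, FD = 2):

        import numpy as np

        def box_count(img, sizes):
            counts = []
            for s in sizes:
                h = (img.shape[0] // s) * s                  # trim to a multiple of s
                blocks = img[:h, :h].reshape(h // s, s, h // s, s)
                counts.append(int((blocks.sum(axis=(1, 3)) > 0).sum()))
            return counts

        n = 256
        img = np.zeros((n, n), dtype=bool)
        img[n // 4 : 3 * n // 4, n // 4 : 3 * n // 4] = True  # filled square, FD = 2
        sizes = [2, 4, 8, 16, 32]
        counts = box_count(img, sizes)
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
        print(slope)   # ~2.0, the fractal dimension of a filled 2D region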

  19. Workshop on algorithms for macromolecular modeling. Final project report, June 1, 1994--May 31, 1995

    SciTech Connect

    Leimkuhler, B.; Hermans, J.; Skeel, R.D.

    1995-07-01

    A workshop was held on algorithms and parallel implementations for macromolecular dynamics, protein folding, and structural refinement. This document contains abstracts and brief reports from that workshop.

  20. FOCUS: a deconvolution method based on algorithmic complexity

    NASA Astrophysics Data System (ADS)

    Delgado, C.

    2006-07-01

    A new method for improving the resolution of images is presented. It is based on Occam's razor principle implemented using algorithmic complexity arguments. The performance of the method is illustrated using artificial and real test data.