Science.gov

Sample records for algorithm level re-computing

  1. Level 1 Radiance Scaling and Conditioning Algorithm Theoretical Basis

    NASA Technical Reports Server (NTRS)

    Bruegge, C.; Diner, D.; Korechoff, R.; Lee, M.

    2000-01-01

    The Algorithm Theoretical Basis (ATB) document describes the algorithms used to produce the Multi-angle Imaging SpectroRadiometer (MISR) Level 1B1 Radiometric Product, and certain parameters of the Level 1A Reformatted Annotated Product.

  2. Universal single level implicit algorithm for gasdynamics

    NASA Technical Reports Server (NTRS)

    Lombard, C. K.; Venkatapathy, E.

    1984-01-01

    A single-level, effectively explicit implicit algorithm for gasdynamics is presented. The method meets all the requirements for unconditionally stable global iteration over flows with mixed subsonic and supersonic zones, including blunt body flow and boundary layer flows with strong interaction and streamwise separation. For hyperbolic (supersonic flow) regions the method is automatically equivalent to contemporary space marching methods. For elliptic (subsonic flow) regions, rapid convergence is facilitated by alternating direction solution sweeps which bring both sets of eigenvectors and the influence of both boundaries of a coordinate line equally into play. Point-by-point updating of the data, with local iteration on the solution procedure at each spatial step as the sweeps progress, not only renders the method single level in storage but also improves nonlinear accuracy, accelerating convergence by an order of magnitude over related two-level linearized implicit methods. The method derives robust stability from the combination of an eigenvector split upwind difference method (CSCM) with diagonally dominant ADI (DDADI) approximate factorization and computed characteristic boundary approximations.

  3. Level 2 Ancillary Products and Datasets Algorithm Theoretical Basis

    NASA Technical Reports Server (NTRS)

    Diner, D.; Abdou, W.; Gordon, H.; Kahn, R.; Knyazikhin, Y.; Martonchik, J.; McDonald, D.; McMuldroch, S.; Myneni, R.; West, R.

    1999-01-01

    This Algorithm Theoretical Basis (ATB) document describes the algorithms used to generate the parameters of certain ancillary products and datasets used during Level 2 processing of Multi-angle Imaging SpectroRadiometer (MISR) data.

  4. A three-level BDDC algorithm for Mortar discretizations

    SciTech Connect

    Kim, H.; Tu, X.

    2007-12-09

    In this paper, a three-level BDDC algorithm is developed for the solution of large sparse algebraic linear systems arising from the mortar discretization of elliptic boundary value problems. The mortar discretization is considered on geometrically non-conforming subdomain partitions. In two-level BDDC algorithms, the coarse problem needs to be solved exactly; however, its size increases with the number of subdomains. To overcome this limitation, the three-level algorithm solves the coarse problem inexactly while maintaining a good rate of convergence. This is an extension of previous work on three-level BDDC algorithms for standard finite element discretizations. Estimates of the condition numbers are provided for the three-level BDDC method and numerical experiments are also discussed.

  5. AIRS Level 1b Algorithm Theoretical Basis Document

    NASA Technical Reports Server (NTRS)

    Aumann, H.; Gregorich, D.; Gaiser, S.; Hagan, D.; Pagano, T.; Ting, D.

    2000-01-01

    The Level 1b Algorithm Theoretical Basis Document (ATBD) describes the theoretical bases of the algorithms used to convert the raw detector output (data numbers) from the Atmospheric Infrared Sounder (AIRS), the Advanced Microwave Sounding Unit (AMSU) and the Humidity Sounder for Brazil (HSB) to physical radiance units and, in the case of AIRS, to perform in-orbit spectral calibrations.

  6. The algorithmic level is the bridge between computation and brain.

    PubMed

    Love, Bradley C

    2015-04-01

    Every scientist chooses a preferred level of analysis and this choice shapes the research program, even determining what counts as evidence. This contribution revisits Marr's (1982) three levels of analysis (implementation, algorithmic, and computational) and evaluates the prospect of making progress at each individual level. After reviewing limitations of theorizing within a level, two strategies for integration across levels are considered. One is top-down in that it attempts to build a bridge from the computational to algorithmic level. Limitations of this approach include insufficient theoretical constraint at the computation level to provide a foundation for integration, and that people are suboptimal for reasons other than capacity limitations. Instead, an inside-out approach is forwarded in which all three levels of analysis are integrated via the algorithmic level. This approach maximally leverages mutual data constraints at all levels. For example, algorithmic models can be used to interpret brain imaging data, and brain imaging data can be used to select among competing models. Examples of this approach to integration are provided. This merging of levels raises questions about the relevance of Marr's tripartite view. PMID:25823496

  7. Advanced biologically plausible algorithms for low-level image processing

    NASA Astrophysics Data System (ADS)

    Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan

    1999-08-01

    At present, the approach to computer vision based on modeling biological vision mechanisms is being extensively developed. However, up to now, real-world image processing has no effective solution within either the biologically inspired or the conventional framework. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed to solve the computational problems related to this visual task. A basic problem that must be solved to create an effective artificial visual system for real-world images is the search for new algorithms of low-level image processing, which to a great extent determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on local space-variant filtering, context encoding of the visual information presented at the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as non-collinear oriented segments, and composite feature map formation. The developed algorithms were integrated into the foveal active vision model MARR. It is supposed that the proposed algorithms may significantly improve model performance in real-world image processing during memorizing, search, and recognition.

  8. The Algorithm Theoretical Basis Document for Level 1A Processing

    NASA Technical Reports Server (NTRS)

    Jester, Peggy L.; Hancock, David W., III

    2012-01-01

    The first process of the Geoscience Laser Altimeter System (GLAS) Science Algorithm Software converts the Level 0 data into the Level 1A Data Products. The Level 1A Data Products are the time ordered instrument data converted from counts to engineering units. This document defines the equations that convert the raw instrument data into engineering units. Required scale factors, bias values, and coefficients are defined in this document. Additionally, required quality assurance and browse products are defined in this document.

  9. A two-level detection algorithm for optical fiber vibration

    NASA Astrophysics Data System (ADS)

    Bi, Fukun; Ren, Xuecong; Qu, Hongquan; Jiang, Ruiqing

    2015-09-01

    Optical fiber vibration is detected by the coherent optical time domain reflection technique. In addition to the vibration signals, the reflected signals include clutter and noise, which lead to a high false alarm rate. The "cell averaging" constant false alarm rate algorithm has a high computing speed, but its detection performance declines in nonhomogeneous environments such as multiple-target scenarios. The "order statistics" constant false alarm rate algorithm has a distinct advantage in multiple-target environments, but it has a lower computing speed. An intelligent two-level detection algorithm is presented in which the "cell averaging" and "order statistics" constant false alarm rate detectors work in series, so that the detection speed of "cell averaging" and the detection performance of "order statistics" are preserved, respectively. Through adaptive selection, "cell averaging" is applied in homogeneous environments, and the two-level detection algorithm is employed in nonhomogeneous environments. Our Monte Carlo simulation results demonstrate that, for different signal-to-noise ratios, the proposed algorithm gives a better detection probability than "order statistics".
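
    The serial CA/OS-CFAR combination described above can be illustrated with a minimal Python sketch. This is not the authors' implementation: the window sizes, scaling factors, and the crude spread test used to flag a nonhomogeneous training window are illustrative assumptions.

    ```python
    import numpy as np

    def ca_cfar(x, i, n_train=16, n_guard=2, alpha=3.0):
        """Cell-averaging CFAR: threshold is a scaled mean of the training cells
        on both sides of the cell under test (guard cells excluded)."""
        left = x[max(0, i - n_guard - n_train): max(0, i - n_guard)]
        right = x[i + n_guard + 1: i + n_guard + 1 + n_train]
        train = np.concatenate([left, right])
        return alpha * train.mean(), train

    def os_cfar(train, k_frac=0.75, alpha=3.0):
        """Order-statistics CFAR: threshold is a scaled k-th ranked training cell."""
        return alpha * np.sort(train)[int(k_frac * train.size)]

    def two_level_detect(x, spread=2.0):
        """Serial two-level scheme: CA-CFAR everywhere, falling back to OS-CFAR
        when the training cells look nonhomogeneous (crude, illustrative test)."""
        hits = []
        for i in range(x.size):
            thr, train = ca_cfar(x, i)
            if train.max() > spread * train.mean():   # possible interfering target
                thr = os_cfar(train)
            if x[i] > thr:
                hits.append(i)
        return hits

    # Toy usage: exponential clutter with two vibration-like returns.
    rng = np.random.default_rng(0)
    power = rng.exponential(1.0, 400)
    power[[120, 260]] += 20
    print(two_level_detect(power))
    ```

    The point of the serial arrangement is that the cheap cell-averaging pass handles homogeneous stretches, and the costlier order-statistics threshold is computed only where the training cells suggest interfering returns.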

  10. A Three-level BDDC algorithm for saddle point problems

    SciTech Connect

    Tu, X.

    2008-12-10

    BDDC algorithms have previously been extended to the saddle point problems arising from mixed formulations of elliptic and incompressible Stokes problems. In these two-level BDDC algorithms, all iterates are required to be in a benign space, a subspace in which the preconditioned operators are positive definite. This requirement can lead to large coarse problems, which have to be generated and factored by a direct solver at the beginning of the computation, and they can ultimately become a bottleneck. An additional level is introduced in this paper to solve the coarse problem approximately and to remove this difficulty. This three-level BDDC algorithm keeps all iterates in the benign space and the conjugate gradient method can therefore be used to accelerate the convergence. This work is an extension of the three-level BDDC methods for standard finite element discretization of elliptic problems, and the same rate of convergence is obtained for the mixed formulation of the same problems. An estimate of the condition number for the three-level BDDC method is provided and numerical experiments are discussed.

  11. Re-Computation of Numerical Results Contained in NACA Report No. 496

    NASA Technical Reports Server (NTRS)

    Perry, Boyd, III

    2015-01-01

    An extensive examination of NACA Report No. 496 (NACA 496), "General Theory of Aerodynamic Instability and the Mechanism of Flutter," by Theodore Theodorsen, is described. The examination included checking equations and solution methods and re-computing interim quantities and all numerical examples in NACA 496. The checks revealed that NACA 496 contains computational shortcuts (time- and effort-saving devices for engineers of the time) and clever artifices (employed in its solution methods), but, unfortunately, also contains numerous tripping points (aspects of NACA 496 that have the potential to cause confusion) and some errors. The re-computations were performed employing the methods and procedures described in NACA 496, but using modern computational tools. With some exceptions, the magnitudes and trends of the original results were in fair-to-very-good agreement with the re-computed results. The exceptions included what are speculated to be computational errors in the original in some instances and transcription errors in the original in others. Independent flutter calculations were performed and, in all cases, including those where the original and re-computed results differed significantly, were in excellent agreement with the re-computed results. Appendix A contains NACA 496; Appendix B contains a Matlab(Registered) program that performs the re-computation of results; Appendix C presents three alternate solution methods, with examples, for the two-degree-of-freedom solution method of NACA 496; Appendix D contains the three-degree-of-freedom solution method (outlined in NACA 496 but never implemented), with examples.

  12. On the multi-level solution algorithm for Markov chains

    SciTech Connect

    Horton, G.

    1996-12-31

    We discuss the recently introduced multi-level algorithm for the steady-state solution of Markov chains. The method is based on the aggregation principle, which is well established in the literature. Recursive application of the aggregation yields a multi-level method which has been shown experimentally to give results significantly faster than the methods currently in use. The algorithm can be reformulated as an algebraic multigrid scheme of Galerkin-full approximation type. The uniqueness of the scheme stems from its solution-dependent prolongation operator which permits significant computational savings in the evaluation of certain terms. This paper describes the modeling of computer systems to derive information on performance, measured typically as job throughput or component utilization, and availability, defined as the proportion of time a system is able to perform a certain function in the presence of component failures and possibly also repairs.
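
    The aggregation step that such multi-level solvers are built on can be sketched as follows; this is a generic illustration of solution-dependent (Galerkin-type) coarsening for a discrete-time chain, not the paper's code, and the state partition and probability estimate are assumed for the example.

    ```python
    import numpy as np

    def aggregate(P, pi, groups):
        """Form the aggregated (coarse) transition matrix A from a fine transition
        matrix P, a current probability estimate pi, and a partition of the states
        into groups (list of index arrays):
            A[I, J] = sum_{i in I} w_i * sum_{j in J} P[i, j],
        where w_i = pi_i / sum_{k in I} pi_k are solution-dependent weights."""
        n_c = len(groups)
        A = np.zeros((n_c, n_c))
        for I, gi in enumerate(groups):
            w = pi[gi] / pi[gi].sum()
            for J, gj in enumerate(groups):
                A[I, J] = w @ P[np.ix_(gi, gj)].sum(axis=1)
        return A

    # Toy usage: 4-state chain aggregated into 2 groups of 2 states.
    P = np.array([[0.5, 0.3, 0.1, 0.1],
                  [0.2, 0.6, 0.1, 0.1],
                  [0.1, 0.1, 0.5, 0.3],
                  [0.1, 0.1, 0.2, 0.6]])
    pi = np.full(4, 0.25)                                  # initial uniform estimate
    A = aggregate(P, pi, [np.array([0, 1]), np.array([2, 3])])
    print(A, A.sum(axis=1))                                # rows still sum to 1
    ```

    A multi-level cycle would then solve the coarse system, prolong the correction back to the fine states, smooth, and recurse on the coarse chain.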

  13. Level-treewidth property, exact algorithms and approximation schemes

    SciTech Connect

    Marathe, M.V.; Hunt, H.B.; Stearns, R.E.

    1997-06-01

    Informally, a class of graphs Q is said to have the level-treewidth property (LT-property) if for every G ∈ Q there is a layout (breadth-first ordering) L_G such that the subgraph induced by the vertices in k consecutive levels of the layout has treewidth O(f(k)), for some function f. We show that several important and well known classes of graphs, including planar and bounded-genus graphs, (r, s)-civilized graphs, etc., satisfy the LT-property. Building on the recent work, we present two general types of results for the class of graphs obeying the LT-property. (1) All problems in the classes MPSAT, TMAX and TMIN have polynomial time approximation schemes. (2) The problems considered in Eppstein have efficient polynomial time algorithms. These results can be extended to obtain polynomial time approximation algorithms and approximation schemes for a number of PSPACE-hard combinatorial problems specified using different kinds of succinct specifications studied in. Many of the results can also be extended to δ-near genus and δ-near civilized graphs, for any fixed δ. Our results significantly extend the work in and affirmatively answer recent open questions.

  14. Sampling design for classifying contaminant level using annealing search algorithms

    NASA Astrophysics Data System (ADS)

    Christakos, George; Killam, Bart R.

    1993-12-01

    A stochastic method for sampling spatially distributed contaminant level is presented. The purpose of sampling is to partition the contaminated region into zones of high and low pollutant concentration levels. In particular, given an initial set of observations of a contaminant within a site, it is desired to find a set of additional sampling locations in a way that takes into consideration the spatial variability characteristics of the site and optimizes certain objective functions emerging from the physical, regulatory and monetary considerations of the specific site cleanup process. Since the interest is in classifying the domain into zones above and below a pollutant threshold level, a natural criterion is the cost of misclassification. The resulting objective function is the expected value of a spatial loss function associated with sampling. Stochastic expectation involves the joint probability distribution of the pollutant level and its estimate, where the latter is calculated by means of spatial estimation techniques. Actual computation requires the discretization of the contaminated domain. As a consequence, any reasonably sized problem results in combinatorics precluding an exhaustive search. The use of an annealing algorithm, although suboptimal, can find a good set of future sampling locations quickly and efficiently. In order to obtain insight about the parameters and the computational requirements of the method, an example is discussed in detail. The implementation of spatial sampling design in practice will provide the model inputs necessary for waste site remediation, groundwater management, and environmental decision making.
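
    A generic annealing search over candidate sampling designs looks like the following sketch; the cost function is a stand-in for the expected misclassification loss described above, and the neighbourhood move, cooling schedule, and iteration budget are illustrative assumptions.

    ```python
    import math, random

    def anneal(candidates, n_select, cost, n_iter=5000, t0=1.0, cooling=0.999):
        """Simulated-annealing search for a good set of sampling locations.
        `cost(design)` stands in for the expected misclassification loss; here it
        is a user-supplied black box, not the paper's actual objective."""
        current = random.sample(candidates, n_select)
        cur_cost = cost(current)
        best, best_cost, t = list(current), cur_cost, t0
        for _ in range(n_iter):
            # Neighbouring design: swap one selected location for an unselected one.
            trial = list(current)
            trial[random.randrange(n_select)] = random.choice(
                [c for c in candidates if c not in current])
            trial_cost = cost(trial)
            if trial_cost < cur_cost or random.random() < math.exp((cur_cost - trial_cost) / t):
                current, cur_cost = trial, trial_cost
                if cur_cost < best_cost:
                    best, best_cost = list(current), cur_cost
            t *= cooling
        return best, best_cost

    # Toy usage: pick 3 of 20 grid locations; dummy cost rewards spread-out designs.
    locs = list(range(20))
    print(anneal(locs, 3, cost=lambda d: -min(abs(a - b) for a in d for b in d if a != b)))
    ```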

  15. Constrained Multi-Level Algorithm for Trajectory Optimization

    NASA Astrophysics Data System (ADS)

    Adimurthy, V.; Tandon, S. R.; Jessy, Antony; Kumar, C. Ravi

    The emphasis on low-cost access to space has inspired many recent developments in the methodology of trajectory optimization. Ref. 1 uses a spectral patching method for optimization, where global orthogonal polynomials are used to describe the dynamical constraints. A two-tier approach to optimization is used in Ref. 2 for a missile mid-course trajectory optimization. A hybrid analytical/numerical approach is described in Ref. 3, where an initial analytical vacuum solution is taken and atmospheric effects are gradually introduced. Ref. 4 emphasizes the fact that the nonlinear constraints which occur in the initial and middle portions of the trajectory behave very nonlinearly with respect to the variables, making the optimization very difficult to solve with direct and indirect shooting methods. The problem is made more complex when different phases of the trajectory have different optimization objectives and also different path constraints. Such problems can be effectively addressed by multi-level optimization. In the multi-level methods reported so far, optimization is first done in identified sub-level problems, where some coordination variables are kept fixed for the global iteration. After all the sub-optimizations are completed, a higher-level optimization iteration with all the coordination and main variables is done. This is followed by further subsystem optimizations with new coordination variables. This process is continued until convergence. In this paper we use a multi-level constrained optimization algorithm which avoids the repeated local subsystem optimizations and which also removes the problem of nonlinear sensitivity inherent in single-step approaches. Fall-zone constraints, structural load constraints and thermal constraints are considered. In this algorithm, there is only a single multi-level sequence of state and multiplier updates in a framework of an augmented Lagrangian. Han-Tapia multiplier updates are used in view of their special role in

  16. Heuristic-based scheduling algorithm for high level synthesis

    NASA Technical Reports Server (NTRS)

    Mohamed, Gulam; Tan, Han-Ngee; Chng, Chew-Lye

    1992-01-01

    A new scheduling algorithm is proposed which uses a combination of a resource utilization chart, a heuristic algorithm to estimate the minimum number of hardware units based on operator mobilities, and a list-scheduling technique to achieve fast and near-optimal schedules. The scheduling time of this algorithm is almost independent of the length of the operator mobilities, as can be seen from the presented benchmark example (a fifth-order digital elliptic wave filter) when the cycle time was increased from 17 to 18 and then to 21 cycles. It is implemented in C on a SUN3/60 workstation.
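
    The list-scheduling idea the abstract builds on can be sketched generically as below; the data-flow graph, unit-delay assumption, resource limits, and mobility-based priority values are illustrative, not the paper's benchmark or heuristic.

    ```python
    def list_schedule(ops, deps, delay, limits, priority):
        """Resource-constrained list scheduling.
        ops:      {name: resource_type}
        deps:     {name: [predecessor names]}
        delay:    cycles per operation (unit delay assumed here)
        limits:   {resource_type: units available per cycle}
        priority: {name: key} -- e.g. low mobility (ALAP - ASAP) scheduled first.
        Returns {name: start_cycle}."""
        start, cycle, unscheduled = {}, 0, set(ops)
        while unscheduled:
            # Operations whose predecessors have all finished by this cycle.
            ready = sorted((o for o in unscheduled
                            if all(p in start and start[p] + delay <= cycle for p in deps[o])),
                           key=lambda o: priority[o])
            used = {r: 0 for r in limits}
            for o in ready:
                if used[ops[o]] < limits[ops[o]]:
                    start[o] = cycle
                    used[ops[o]] += 1
                    unscheduled.remove(o)
            cycle += 1
        return start

    # Toy data-flow graph: (a*b) + (c*d), with one multiplier and one adder available.
    ops  = {'m1': 'mul', 'm2': 'mul', 'add': 'alu'}
    deps = {'m1': [], 'm2': [], 'add': ['m1', 'm2']}
    print(list_schedule(ops, deps, delay=1, limits={'mul': 1, 'alu': 1},
                        priority={'m1': 0, 'm2': 0, 'add': 1}))
    ```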

  17. Level set algorithms comparison for multi-slice CT left ventricle segmentation

    NASA Astrophysics Data System (ADS)

    Medina, Ruben; La Cruz, Alexandra; Ordoñes, Andrés.; Pesántez, Daniel; Morocho, Villie; Vanegas, Pablo

    2015-12-01

    The comparison of several level set algorithms is performed with respect to 2D left ventricle segmentation in multi-slice CT images. Five algorithms are compared by calculating the Dice coefficient between the resulting segmentation contour and a reference contour traced by a cardiologist. The algorithms are also tested on images contaminated with Gaussian noise for several values of PSNR. Additionally, an algorithm for providing the initialization shape is proposed. This algorithm is based on a combination of mathematical morphology tools with watershed and region growing algorithms. Results on the set of test images are promising and suggest the extension to 3D MSCT database segmentation.
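
    For reference, the Dice coefficient used as the comparison metric has a simple closed form; a minimal sketch for binary masks follows (the array shapes and toy regions are illustrative).

    ```python
    import numpy as np

    def dice(seg, ref):
        """Dice coefficient between two binary masks (1 = left-ventricle pixels)."""
        seg, ref = seg.astype(bool), ref.astype(bool)
        inter = np.logical_and(seg, ref).sum()
        return 2.0 * inter / (seg.sum() + ref.sum())

    # Toy usage: two overlapping square regions on a 64x64 slice.
    a = np.zeros((64, 64)); a[10:30, 10:30] = 1
    b = np.zeros((64, 64)); b[12:32, 12:32] = 1
    print(dice(a, b))   # ~0.81 overlap
    ```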

  18. Ontological Problem-Solving Framework for Assigning Sensor Systems and Algorithms to High-Level Missions

    PubMed Central

    Qualls, Joseph; Russomanno, David J.

    2011-01-01

    The lack of knowledge models to represent sensor systems, algorithms, and missions makes opportunistically discovering a synthesis of systems and algorithms that can satisfy high-level mission specifications impractical. A novel ontological problem-solving framework has been designed that leverages knowledge models describing sensors, algorithms, and high-level missions to facilitate automated inference of assigning systems to subtasks that may satisfy a given mission specification. To demonstrate the efficacy of the ontological problem-solving architecture, a family of persistence surveillance sensor systems and algorithms has been instantiated in a prototype environment to demonstrate the assignment of systems to subtasks of high-level missions. PMID:22164081

  19. A multi-level solution algorithm for steady-state Markov chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham; Leutenegger, Scott T.

    1993-01-01

    A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.

  20. A novel bit-level image encryption algorithm based on chaotic maps

    NASA Astrophysics Data System (ADS)

    Xu, Lu; Li, Zhi; Li, Jian; Hua, Wei

    2016-03-01

    Recently, a number of chaos-based image encryption algorithms have been proposed at the pixel level, but little research at the bit level has been conducted. This paper presents a novel bit-level image encryption algorithm that is based on piecewise linear chaotic maps (PWLCM). First, the plain image is transformed into two binary sequences of the same size. Second, a new diffusion strategy is introduced to diffuse the two sequences mutually. Then, we swap the binary elements in the two sequences by the control of a chaotic map, which can permute the bits in one bitplane into any other bitplane. The proposed algorithm has excellent encryption performance with only one round. The simulation results and performance analysis show that the proposed algorithm is both secure and reliable for image encryption.
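
    A minimal sketch of the PWLCM iteration and of a chaos-driven bit-level permutation is given below. It shows only the permutation (bit-swapping) idea with assumed key values and a single map; the paper's actual scheme also includes the mutual diffusion of the two binary sequences, which is omitted here.

    ```python
    import numpy as np

    def pwlcm(x, p):
        """One iteration of the piecewise linear chaotic map on (0, 1)."""
        if x >= 0.5:
            x = 1.0 - x                # the map is symmetric about 0.5
        if x < p:
            return x / p
        return (x - p) / (0.5 - p)

    def chaotic_sequence(n, x0=0.347, p=0.296):
        seq, x = np.empty(n), x0
        for i in range(n):
            x = pwlcm(x, p)
            seq[i] = x
        return seq

    def bit_level_permute(img, x0=0.347, p=0.296):
        """Flatten an 8-bit image to a bit sequence and permute the bits with a
        PWLCM-derived ordering (illustrative permutation-only step, one round)."""
        bits = np.unpackbits(img.astype(np.uint8).ravel())
        order = np.argsort(chaotic_sequence(bits.size, x0, p))
        return np.packbits(bits[order]).reshape(img.shape)

    # Toy usage: a 4x4 gradient image.
    img = np.arange(16, dtype=np.uint8).reshape(4, 4)
    print(bit_level_permute(img))
    ```

    Because the permutation is driven by the chaotic orbit rather than by pixel positions, bits from one bitplane can land in any other bitplane, which is the property the abstract highlights.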

  1. Multiphase permittivity imaging using absolute value electrical capacitance tomography data and a level set algorithm.

    PubMed

    Al Hosani, E; Soleimani, M

    2016-06-28

    Multiphase flow imaging is a very challenging and critical topic in industrial process tomography. In this article, simulation and experimental results of reconstructing the permittivity profile of multiphase material from data collected in electrical capacitance tomography (ECT) are presented. A multiphase narrowband level set algorithm is developed to reconstruct the interfaces between three- or four-phase permittivity values. The level set algorithm is capable of imaging multiphase permittivity by using one set of ECT measurement data, so-called absolute value ECT reconstruction, and this is tested with high-contrast and low-contrast multiphase data. Simulation and experimental results showed the superiority of this algorithm over classical pixel-based image reconstruction methods. The multiphase level set algorithm and absolute ECT reconstruction are presented for the first time, to the best of our knowledge, in this paper and critically evaluated. This article is part of the themed issue 'Supersensing through industrial process tomography'. PMID:27185966

  2. Algorithmic recognition of anomalous time intervals in sea-level observations

    NASA Astrophysics Data System (ADS)

    Getmanov, V. G.; Gvishiani, A. D.; Kamaev, D. A.; Kornilov, A. S.

    2016-03-01

    The problem of the algorithmic recognition of anomalous time intervals in the time series of the sea-level observations conducted by the Russian Tsunami Warning Survey (RTWS) is considered. The normal and anomalous sea-level observations are described. The polyharmonic models describing the sea-level fluctuations on the short time intervals are constructed, and sea-level forecasting based on these models is suggested. The algorithm for the recognition of anomalous time intervals is developed and its work is tested on the real RTWS data.

  3. A Feature Selection Algorithm to Compute Gene Centric Methylation from Probe Level Methylation Data

    PubMed Central

    Baur, Brittany; Bozdag, Serdar

    2016-01-01

    DNA methylation is an important epigenetic event that affects gene expression during development and in various diseases such as cancer. Understanding the mechanism of action of DNA methylation is important for downstream analysis. In the Illumina Infinium HumanMethylation450K array, there are tens of probes associated with each gene. Given methylation intensities of all these probes, it is necessary to compute which of these probes are most representative of the gene centric methylation level. In this study, we developed a feature selection algorithm based on sequential forward selection that utilized different classification methods to compute gene centric DNA methylation using probe level DNA methylation data. We compared our algorithm to other feature selection algorithms such as support vector machines with recursive feature elimination, genetic algorithms and ReliefF. We evaluated all methods based on the predictive power of selected probes on their mRNA expression levels and found that a K-Nearest Neighbors classification using the sequential forward selection algorithm performed better than other algorithms based on all metrics. We also observed that transcriptional activities of certain genes were more sensitive to DNA methylation changes than transcriptional activities of other genes. Our algorithm was able to predict the expression of those genes with high accuracy using only DNA methylation data. Our results also showed that those DNA methylation-sensitive genes were enriched in Gene Ontology terms related to the regulation of various biological processes. PMID:26872146
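
    A minimal sketch of sequential forward selection wrapped around a K-Nearest Neighbors classifier, the generic technique named above, is shown below; the data shapes, neighbor count, and cross-validation setup are illustrative assumptions, not the study's configuration.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    def forward_select(X, y, n_keep=3, cv=5):
        """Greedy sequential forward selection: repeatedly add the probe (column)
        that most improves cross-validated KNN accuracy."""
        selected, remaining = [], list(range(X.shape[1]))
        while len(selected) < n_keep:
            scores = {j: cross_val_score(KNeighborsClassifier(n_neighbors=5),
                                         X[:, selected + [j]], y, cv=cv).mean()
                      for j in remaining}
            best = max(scores, key=scores.get)
            selected.append(best)
            remaining.remove(best)
        return selected

    # Toy usage: 100 samples x 10 probes; class label loosely tied to probe 2.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 10))
    y = (X[:, 2] > 0).astype(int)
    print(forward_select(X, y))
    ```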

  4. A new algorithm for idealizing single ion channel data containing multiple unknown conductance levels.

    PubMed Central

    VanDongen, A M

    1996-01-01

    A new algorithm is presented for idealizing single channel data containing any number of conductance levels. The number of levels and their amplitudes do not have to be known a priori. No assumption has to be made about the behavior of the channel, other than that transitions between conductance levels are fast. The algorithm is relatively insensitive to the complexity of the underlying single channel behavior. Idealization may be reliable with signal-to-noise ratios as low as 3.5. The idealization algorithm uses a slope detector to localize transitions between levels and a relative amplitude criterion to remove spurious transitions. After estimating the number of conductances and their amplitudes, conductance states can be assigned to the idealized levels. In addition to improving the quality of the idealization, this "interpretation" allows a statistical analysis of individual (sub)conductance states. PMID:8785286
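
    The slope-detector idea can be sketched as a piecewise-constant fit between detected transitions; the threshold below is illustrative, and the paper's relative-amplitude criterion for removing spurious transitions and its conductance-state assignment are omitted.

    ```python
    import numpy as np

    def idealize(trace, slope_thresh=0.5):
        """Crude single-channel idealization: locate fast transitions with a slope
        detector and replace each intervening segment by its mean amplitude."""
        slope = np.abs(np.diff(trace))
        edges = np.flatnonzero(slope > slope_thresh) + 1      # transition samples
        bounds = np.r_[0, edges, trace.size]
        ideal = np.empty_like(trace)
        for a, b in zip(bounds[:-1], bounds[1:]):
            if b > a:
                ideal[a:b] = trace[a:b].mean()                # piecewise-constant fit
        return ideal

    # Toy usage: noisy record visiting two conductance levels.
    rng = np.random.default_rng(2)
    trace = np.r_[np.zeros(80), np.ones(80), np.zeros(80)] + rng.normal(0, 0.05, 240)
    print(np.round(idealize(trace)[75:90], 2))
    ```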

  5. Single qudit realization of the Deutsch algorithm using superconducting many-level quantum circuits

    NASA Astrophysics Data System (ADS)

    Kiktenko, E. O.; Fedorov, A. K.; Strakhov, A. A.; Man'ko, V. I.

    2015-07-01

    The design of a large-scale quantum computer is of paramount importance for science and technology. We investigate a scheme for the realization of quantum algorithms using noncomposite quantum systems, i.e., systems without subsystems. In this framework, n artificially allocated "subsystems" play the role of qubits in n-qubit quantum algorithms. With a focus on two-qubit quantum algorithms, we demonstrate a realization of the universal set of gates using a d = 5 single qudit state. Manipulation of an ancillary level in the system allows effective implementation of operators from the U(4) group via operators from the SU(5) group. Using a possible experimental realization of such systems through anharmonic superconducting many-level quantum circuits, we present a blueprint for a single qudit realization of the Deutsch algorithm, which generalizes the previously studied realization based on the virtual spin representation (Kessel et al., 2002 [9]).

  6. Evaluation of SMAP Level 2 Soil Moisture Algorithms Using SMOS Data

    NASA Technical Reports Server (NTRS)

    Bindlish, Rajat; Jackson, Thomas J.; Zhao, Tianjie; Cosh, Michael; Chan, Steven; O'Neill, Peggy; Njoku, Eni; Colliander, Andreas; Kerr, Yann; Shi, J. C.

    2011-01-01

    The objectives of the SMAP (Soil Moisture Active Passive) mission are global measurements of soil moisture and land freeze/thaw state at 10 km and 3 km resolution, respectively. SMAP will provide soil moisture with a spatial resolution of 10 km with a 3-day revisit time at an accuracy of 0.04 m3/m3 [1]. In this paper we contribute to the development of the Level 2 soil moisture algorithm that is based on passive microwave observations by exploiting Soil Moisture Ocean Salinity (SMOS) satellite observations and products. SMOS brightness temperatures provide a global real-world, rather than simulated, test input for the SMAP radiometer-only soil moisture algorithm. Output of the potential SMAP algorithms will be compared to both in situ measurements and SMOS soil moisture products. The investigation will result in enhanced SMAP pre-launch algorithms for soil moisture.

  7. An adaptive multi-level simulation algorithm for stochastic biological systems

    NASA Astrophysics Data System (ADS)

    Lester, C.; Yates, C. A.; Giles, M. B.; Baker, R. E.

    2015-01-01

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146-179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the
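
    The bias-correction structure described here is the standard multi-level Monte Carlo telescoping identity; written out (notation ours, not the authors'):

    ```latex
    \mathbb{E}[Z_L] \;=\; \mathbb{E}[Z_0] \;+\; \sum_{\ell=1}^{L} \mathbb{E}\bigl[Z_\ell - Z_{\ell-1}\bigr]
    ```

    Here Z_ℓ is the statistic computed from a path simulated at accuracy level ℓ (for example, tau-leaping with step τ_ℓ); each difference is estimated from paired paths that share random variables, which keeps its variance, and hence the number of paired samples needed, small.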

  8. An adaptive multi-level simulation algorithm for stochastic biological systems

    SciTech Connect

    Lester, C. Giles, M. B.; Baker, R. E.; Yates, C. A.

    2015-01-14

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the

  9. An improved bi-level algorithm for partitioning dynamic grid hierarchies.

    SciTech Connect

    Deiterding, Ralf (California Institute of Technology, Pasadena, CA); Johansson, Henrik (Uppsala University, Uppsala, Sweden); Steensland, Johan; Ray, Jaideep

    2006-05-01

    Structured adaptive mesh refinement (SAMR) methods are widely used for computer simulations of various physical phenomena. Parallel implementations potentially offer realistic simulations of complex three-dimensional applications, but achieving good scalability for large-scale applications is non-trivial. Performance is limited by the partitioner's ability to efficiently use the underlying parallel computer's resources. Designed on sound SAMR principles, Nature+Fable is a hybrid, dedicated SAMR partitioning tool that brings together the advantages of both domain-based and patch-based techniques while avoiding their drawbacks. But the original bi-level partitioning approach in Nature+Fable is insufficient because, for realistic applications, it regards frequently occurring bi-levels as 'impossible' and fails. This document describes an improved bi-level partitioning algorithm that successfully copes with all possible bi-levels. The improved algorithm uses the original approach side-by-side with a new, complementing approach. By using a new, customized classification method, the improved algorithm switches automatically between the two approaches. This document describes the algorithms, discusses implementation issues, and presents experimental results. The improved version of Nature+Fable was found to be able to handle realistic applications and also to generate fewer imbalances and a similar box count, but more communication, compared to the native, domain-based partitioner in the SAMR framework AMROC.

  10. An improved bi-level algorithm for partitioning dynamic structured grid hierarchies.

    SciTech Connect

    Deiterding, Ralf; Steensland, Johan; Ray, Jaideep

    2006-02-01

    Structured adaptive mesh refinement (SAMR) methods are widely used for computer simulations of various physical phenomena. Parallel implementations potentially offer realistic simulations of complex three-dimensional applications, but achieving good scalability for large-scale applications is non-trivial. Performance is limited by the partitioner's ability to efficiently use the underlying parallel computer's resources. Designed on sound SAMR principles, Nature+Fable is a hybrid, dedicated SAMR partitioning tool that brings together the advantages of both domain-based and patch-based techniques while avoiding their drawbacks. But the original bi-level partitioning approach in Nature+Fable is insufficient because, for realistic applications, it regards frequently occurring bi-levels as 'impossible' and fails. This document describes an improved bi-level partitioning algorithm that successfully copes with all possible bi-levels. The improved algorithm uses the original approach side-by-side with a new, complementing approach. By using a new, customized classification method, the improved algorithm switches automatically between the two approaches. This document describes the algorithms, discusses implementation issues, and presents experimental results. The improved version of Nature+Fable was found to be able to handle realistic applications and also to generate fewer imbalances and a similar box count, but more communication, compared to the native, domain-based partitioner in the SAMR framework AMROC.

  11. A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Rao, Hariprasad Nannapaneni

    1989-01-01

    The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.

  12. A Real-Time Algorithm for the Approximation of Level-Set-Based Curve Evolution

    PubMed Central

    Shi, Yonggang; Karl, William Clem

    2010-01-01

    In this paper, we present a complete and practical algorithm for the approximation of level-set-based curve evolution suitable for real-time implementation. In particular, we propose a two-cycle algorithm to approximate level-set-based curve evolution without the need of solving partial differential equations (PDEs). Our algorithm is applicable to a broad class of evolution speeds that can be viewed as composed of a data-dependent term and a curve smoothness regularization term. We achieve curve evolution corresponding to such evolution speeds by separating the evolution process into two different cycles: one cycle for the data-dependent term and a second cycle for the smoothness regularization. The smoothing term is derived from a Gaussian filtering process. In both cycles, the evolution is realized through a simple element switching mechanism between two linked lists that implicitly represent the curve using an integer-valued level-set function. By careful construction, all the key evolution steps require only integer operations. A consequence is that we obtain significant computation speedups compared to exact PDE-based approaches while obtaining excellent agreement with these methods for problems of practical engineering interest. In particular, the resulting algorithm is fast enough for use in real-time video processing applications, which we demonstrate through several image segmentation and video tracking experiments. PMID:18390371

  13. An Evolutionary Algorithm with Double-Level Archives for Multiobjective Optimization.

    PubMed

    Chen, Ni; Chen, Wei-Neng; Gong, Yue-Jiao; Zhan, Zhi-Hui; Zhang, Jun; Li, Yun; Tan, Yu-Song

    2015-09-01

    Existing multiobjective evolutionary algorithms (MOEAs) tackle a multiobjective problem either as a whole or as several decomposed single-objective sub-problems. Though the problem decomposition approach generally converges faster by optimizing all the sub-problems simultaneously, two issues are not fully addressed: the distribution of solutions often depends on the a priori problem decomposition, and population diversity among sub-problems is lacking. In this paper, a MOEA with double-level archives is developed. The algorithm takes advantage of both the multiobjective-problem-level and the sub-problem-level approaches by introducing two types of archives, i.e., the global archive and the sub-archive. In each generation, self-reproduction with the global archive and cross-reproduction between the global archive and sub-archives both breed new individuals. The global archive and sub-archives communicate through cross-reproduction, and are updated using the reproduced individuals. Such a framework thus retains fast convergence, and at the same time handles solution distribution along the Pareto front (PF) with scalability. To test the performance of the proposed algorithm, experiments are conducted on both the widely used benchmarks and a set of truly disconnected problems. The results verify that, compared with state-of-the-art MOEAs, the proposed algorithm offers competitive advantages in distance to the PF, solution coverage, and search speed. PMID:25343775

  14. MODIS. Volume 2: MODIS level 1 geolocation, characterization and calibration algorithm theoretical basis document, version 1

    NASA Technical Reports Server (NTRS)

    Barker, John L.; Harnden, Joann M. K.; Montgomery, Harry; Anuta, Paul; Kvaran, Geir; Knight, ED; Bryant, Tom; Mckay, AL; Smid, Jon; Knowles, Dan, Jr.

    1994-01-01

    The EOS Moderate Resolution Imaging Spectroradiometer (MODIS) is being developed by NASA for flight on the Earth Observing System (EOS) series of satellites, the first of which (EOS-AM-1) is scheduled for launch in 1998. This document describes the theoretical basis of the MODIS Level 1B characterization, calibration, and geolocation algorithms, which must produce radiometrically, spectrally, and spatially calibrated data with sufficient accuracy that global change research programs can detect minute changes in biogeophysical parameters. The document first describes the geolocation algorithm, which determines the geodetic latitude, longitude, and elevation of each MODIS pixel and the geometric parameters of each observation (satellite zenith angle, satellite azimuth, range to the satellite, solar zenith angle, and solar azimuth). Next, the utilization of the MODIS onboard calibration sources, which consist of the Spectroradiometric Calibration Assembly (SRCA), Solar Diffuser (SD), Solar Diffuser Stability Monitor (SDSM), and the Blackbody (BB), is treated. Characterization of these sources and integration of their measurements into the calibration process are described. The use of external sources, including the Moon, instrumented sites on the Earth (called vicarious calibration), and unsupervised normalization sites having invariant reflectance and emissive properties, is then treated. Finally, algorithms for generating the utility masks needed for scene-based calibration are discussed. Eight appendices are provided, covering instrument design and additional algorithm details.

  15. A Genetic Algorithm for the Bi-Level Topological Design of Local Area Networks

    PubMed Central

    Camacho-Vallejo, José-Fernando; Mar-Ortiz, Julio; López-Ramos, Francisco; Rodríguez, Ricardo Pedraza

    2015-01-01

    Local access networks (LANs) are commonly used as communication infrastructures that meet the demand of a set of users in a local environment. Usually these networks consist of several LAN segments connected by bridges. The bi-level topological LAN design problem consists of assigning users to clusters and connecting the clusters by bridges in order to obtain a minimum-response-time network with minimum connection cost. Therefore, the decision of optimally assigning users to clusters is made by the leader, and the follower makes the decision of connecting all the clusters while forming a spanning tree. In this paper, we propose a genetic algorithm for solving the bi-level topological design of a local access network. Our solution method considers the Stackelberg equilibrium to solve the bi-level problem. The Stackelberg-Genetic algorithm procedure deals with the fact that the follower's problem cannot be optimally solved in a straightforward manner. The computational results obtained from two different sets of instances show that the performance of the developed algorithm is efficient and that it is more suitable for solving the bi-level problem than a previous Nash-Genetic approach. PMID:26102502

  16. Use of the particle swarm optimization algorithm for second order design of levelling networks

    NASA Astrophysics Data System (ADS)

    Yetkin, Mevlut; Inal, Cevat; Yigit, Cemal Ozer

    2009-08-01

    The weight problem in geodetic networks can be dealt with as an optimization procedure. This classic problem of geodetic network optimization is also known as second-order design. The basic principles of geodetic network optimization are reviewed. Then the particle swarm optimization (PSO) algorithm is applied to a geodetic levelling network in order to solve the second-order design problem. PSO, which is an iterative-stochastic search algorithm in swarm intelligence, emulates the collective behaviour of bird flocking, fish schooling or bee swarming, to converge probabilistically to the global optimum. Furthermore, it is a powerful method because it is easy to implement and computationally efficient. Second-order design of a geodetic levelling network using PSO yields a practically realizable solution. It is also suitable for non-linear matrix functions that are very often encountered in geodetic network optimization. The fundamentals of the method and a numeric example are given.
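
    A plain global-best PSO loop, the generic optimizer named above, can be sketched as follows; the inertia and acceleration coefficients, bounds, and the toy objective (a stand-in for an actual second-order design criterion on the levelling network's observation weights) are illustrative assumptions.

    ```python
    import numpy as np

    def pso(objective, dim, n_particles=30, n_iter=200, lb=0.0, ub=1.0,
            w=0.7, c1=1.5, c2=1.5, seed=0):
        """Plain global-best PSO: each particle is pulled toward its personal best
        (pbest) and the swarm best (gbest)."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(lb, ub, (n_particles, dim))
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_val = np.array([objective(p) for p in x])
        gbest = pbest[pbest_val.argmin()].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lb, ub)
            vals = np.array([objective(p) for p in x])
            better = vals < pbest_val
            pbest[better], pbest_val[better] = x[better], vals[better]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    # Toy stand-in objective: in a real second-order design run this would score a
    # candidate weight vector for the levelling network; here it is a smooth bowl.
    weights, score = pso(lambda p: float(np.sum((p - 0.3) ** 2)), dim=4)
    print(np.round(weights, 3), score)
    ```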

  17. An algorithm for solving the system-level problem in multilevel optimization

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Sobieszczanski-Sobieski, J.

    1994-01-01

    A multilevel optimization approach which is applicable to nonhierarchic coupled systems is presented. The approach includes a general treatment of design (or behavior) constraints and coupling constraints at the discipline level through the use of norms. Three different types of norms are examined: the max norm, the Kreisselmeier-Steinhauser (KS) norm, and the l_p norm. The max norm is recommended. The approach is demonstrated on a class of hub frame structures which simulate multidisciplinary systems. The max norm is shown to produce system-level constraint functions which are non-smooth. A cutting-plane algorithm is presented which adequately deals with the resulting corners in the constraint functions. The algorithm is tested on hub frames with an increasing number of members (which simulate disciplines), and the results are summarized.
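
    For reference, the constraint aggregates mentioned above have standard closed forms (our notation, for discipline constraint values g_j):

    ```latex
    \|g\|_\infty = \max_j g_j, \qquad
    \mathrm{KS}_\rho(g) = \frac{1}{\rho}\ln\Bigl(\sum_j e^{\rho g_j}\Bigr), \qquad
    \|g\|_p = \Bigl(\sum_j \lvert g_j\rvert^{p}\Bigr)^{1/p}
    ```

    The KS function is a smooth upper bound on the max that tightens as ρ grows, which is why the max norm's non-smooth corners call for the cutting-plane treatment described in the abstract.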

  18. A heuristic re-mapping algorithm reducing inter-level communication in SAMR applications.

    SciTech Connect

    Steensland, Johan; Ray, Jaideep

    2003-07-01

    This paper aims at decreasing execution time for large-scale structured adaptive mesh refinement (SAMR) applications by proposing a new heuristic re-mapping algorithm and experimentally showing its effectiveness in reducing inter-level communication. Tests were done for five different SAMR applications. The overall goal is to engineer a dynamically adaptive meta-partitioner capable of selecting and configuring the most appropriate partitioning strategy at run-time based on current system and application state. Such a metapartitioner can significantly reduce execution times for general SAMR applications. Computer simulations of physical phenomena are becoming increasingly popular as they constitute an important complement to real-life testing. In many cases, such simulations are based on solving partial differential equations by numerical methods. Adaptive methods are crucial to efficiently utilize computer resources such as memory and CPU. But even with adaption, the simulations are computationally demanding and yield huge data sets. Thus parallelization and the efficient partitioning of data become issues of utmost importance. Adaption causes the workload to change dynamically, calling for dynamic (re-) partitioning to maintain efficient resource utilization. The proposed heuristic algorithm reduced inter-level communication substantially. Since the complexity of the proposed algorithm is low, this decrease comes at a relatively low cost. As a consequence, we draw the conclusion that the proposed re-mapping algorithm would be useful to lower overall execution times for many large SAMR applications. Due to its usefulness and its parameterization, the proposed algorithm would constitute a natural and important component of the meta-partitioner.

  19. A shifting level model algorithm that identifies aberrations in array-CGH data.

    PubMed

    Magi, Alberto; Benelli, Matteo; Marseglia, Giuseppina; Nannetti, Genni; Scordo, Maria Rosaria; Torricelli, Francesca

    2010-04-01

    Array comparative genomic hybridization (aCGH) is a microarray technology that allows one to detect and map genomic alterations. The goal of aCGH analysis is to identify the boundaries of the regions where the number of DNA copies changes (breakpoint identification) and then to label each region as loss, neutral, or gain (calling). In this paper, we introduce a new algorithm, based on the shifting level model (SLM), with the aim of locating regions with different means of the log2 ratio in genomic profiles obtained from aCGH data. We combine the SLM algorithm with the CGHcall calling procedure and compare their performances with 5 state-of-the-art methods. When dealing with synthetic data, our method outperforms the other 5 algorithms in detecting the change in the number of DNA copies in the most challenging situations. For real aCGH data, SLM is able to locate all the cytogenetically mapped aberrations giving a smaller number of false-positive breakpoints than the compared methods. The application of the SLM algorithm is not limited to aCGH data. Our approach can also be used for the analysis of several emerging experimental strategies such as high-resolution tiling array. PMID:19948744

  20. Weighted least-squares algorithm for phase unwrapping based on confidence level in frequency domain

    NASA Astrophysics Data System (ADS)

    Wang, Shaohua; Yu, Jie; Yang, Cankun; Jiao, Shuai; Fan, Jun; Wan, Yanyan

    2015-12-01

    Phase unwrapping is a key step in InSAR (Synthetic Aperture Radar Interferometry) processing, and its result may directly affect the accuracy of the DEM (Digital Elevation Model) and of ground deformation measurements. However, decoherence phenomena such as shadow and layover in areas of severe land subsidence, where the terrain is steep and the slope changes greatly, cause errors to propagate in the differential wrapped phase, leading to an inaccurate unwrapped phase. In order to eliminate the effect of noise and reduce the effect of the undersampling caused by topographic factors, a weighted least-squares method based on a confidence level defined in the frequency domain is used in this study. The method expresses the terrain slope in the interferogram as the local phase frequency in the range and azimuth directions and integrates it into the confidence level. This parameter is used as a constraint in the nonlinear least-squares phase unwrapping algorithm to suppress unreliable unwrapped phase gradients and improve the accuracy of phase unwrapping. Finally, a comparison with interferometric data of the Beijing subsidence area obtained from TerraSAR verifies that the algorithm has higher accuracy and stability than ordinary weighted least-squares phase unwrapping algorithms and can take terrain factors into account.
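
    The weighted least-squares formulation referred to above has the usual discrete form (notation ours; the confidence-level parameter described in the abstract would enter through the weights w):

    ```latex
    \min_{\phi}\; \sum_{i,j} w^{x}_{i,j}\bigl(\phi_{i+1,j}-\phi_{i,j}-\Delta^{x}_{i,j}\bigr)^{2}
    \;+\; \sum_{i,j} w^{y}_{i,j}\bigl(\phi_{i,j+1}-\phi_{i,j}-\Delta^{y}_{i,j}\bigr)^{2}
    ```

    where φ is the unwrapped phase sought and Δ^x, Δ^y are the wrapped phase differences of the interferogram in the range and azimuth directions.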

  1. Parallel two-level domain decomposition based Jacobi-Davidson algorithms for pyramidal quantum dot simulation

    NASA Astrophysics Data System (ADS)

    Zhao, Tao; Hwang, Feng-Nan; Cai, Xiao-Chuan

    2016-07-01

    We consider a quintic polynomial eigenvalue problem arising from the finite volume discretization of a quantum dot simulation problem. The problem is solved by the Jacobi-Davidson (JD) algorithm. Our focus is on how to achieve the quadratic convergence of JD in a way that is not only efficient but also scalable when the number of processor cores is large. For this purpose, we develop a projected two-level Schwarz preconditioned JD algorithm that exploits multilevel domain decomposition techniques. The pyramidal quantum dot calculation is carefully studied to illustrate the efficiency of the proposed method. Numerical experiments confirm that the proposed method has a good scalability for problems with hundreds of millions of unknowns on a parallel computer with more than 10,000 processor cores.

  2. ECG signal compression and classification algorithm with quad level vector for ECG holter system.

    PubMed

    Kim, Hyejung; Yazicioglu, Refet Firat; Merken, Patrick; Van Hoof, Chris; Yoo, Hoi-Jun

    2010-01-01

    An ECG signal processing method with a quad level vector (QLV) is proposed for an ECG holter system. The ECG processing consists of a compression flow and a classification flow, and the QLV is proposed for both flows to achieve better performance with low computational complexity. The compression algorithm is performed using an ECG skeleton and Huffman coding. Unit block size optimization, adaptive threshold adjustment, and 4-bit-wise Huffman coding methods are applied to reduce the processing cost while maintaining the signal quality. Heartbeat segmentation and R-peak detection methods are employed for the classification algorithm. The performance is evaluated using the Massachusetts Institute of Technology-Boston's Beth Israel Hospital Arrhythmia Database, and a noise robustness test is also performed to assess the reliability of the algorithm. The average compression ratio is 16.9:1 with a 0.641% percentage root mean square difference value, and the encoding rate is 6.4 kbps. The accuracy of the R-peak detection is 100% without noise and 95.63% in the worst case with -10-dB SNR noise. The overall processing cost is reduced by 45.3% with the proposed compression techniques. PMID:19775975
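
    The quality figure quoted above, the percentage root mean square difference (PRD), has the standard definition (our notation; variants differ in whether a baseline offset is removed first):

    ```latex
    \mathrm{PRD} = \sqrt{\frac{\sum_{n}\bigl(x(n)-\tilde{x}(n)\bigr)^{2}}{\sum_{n} x(n)^{2}}} \times 100\%
    ```

    with x the original ECG samples and x̃ the reconstruction after compression; the compression ratio is simply the original bit count divided by the compressed bit count.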

  3. A conflict-free, path-level parallelization approach for sequential simulation algorithms

    NASA Astrophysics Data System (ADS)

    Rasera, Luiz Gustavo; Machado, Péricles Lopes; Costa, João Felipe C. L.

    2015-07-01

    Pixel-based simulation algorithms are the most widely used geostatistical technique for characterizing the spatial distribution of natural resources. However, sequential simulation does not scale well for stochastic simulation on very large grids, which are now commonly found in many petroleum, mining, and environmental studies. With the availability of multiple-processor computers, there is an opportunity to develop parallelization schemes for these algorithms to increase their performance and efficiency. Here we present a conflict-free, path-level parallelization strategy for sequential simulation. The method consists of partitioning the simulation grid into a set of groups of nodes and delegating all available processors for simulation of multiple groups of nodes concurrently. An automated classification procedure determines which groups are simulated in parallel according to their spatial arrangement in the simulation grid. The major advantage of this approach is that it does not require conflict resolution operations, and thus allows exact reproduction of results. Besides offering a large performance gain when compared to the traditional serial implementation, the method provides efficient use of computational resources and is generic enough to be adapted to several sequential algorithms.

  4. MODIS calibration algorithm improvements developed for Collection 6 Level-1B

    NASA Astrophysics Data System (ADS)

    Wenny, Brian N.; Sun, Junqiang; Xiong, Xiaoxiong; Wu, Aisheng; Chen, Hongda; Angal, Amit; Choi, Taeyoung; Chen, Na; Madhavan, Sriharsha; Geng, Xu; Kuyper, James; Tan, Liqin

    2010-09-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) has been operating on both the Terra and Aqua spacecraft for over 10.5 and 8 years, respectively. Over 40 science products are generated routinely from MODIS Earth images and used extensively by the global science community for a wide variety of land, ocean, and atmosphere applications. Over the mission lifetime, several versions of the MODIS data set have been in use as the calibration and data processing algorithms evolved. Currently Version 5 MODIS data is the baseline Level-1B calibrated science product. The MODIS Characterization Support Team (MCST), with input from the MODIS Science Team, developed and delivered a number of improvements and enhancements to the calibration algorithms, Level-1B processing code and Look-up Tables for the Version 6 Level-1B MODIS data. Version 6 implements a number of changes in the calibration methodology for both the Reflective Solar Bands (RSB) and Thermal Emissive Bands (TEB). This paper describes the improvements introduced in Collection 6 to the RSB and TEB calibration and detector Quality Assurance (QA) handling.

  5. On the utility of the multi-level algorithm for the solution of nearly completely decomposable Markov chains

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Horton, Graham

    1994-01-01

    Recently the Multi-Level algorithm was introduced as a general purpose solver for the solution of steady state Markov chains. In this paper, we consider the performance of the Multi-Level algorithm for solving Nearly Completely Decomposable (NCD) Markov chains, for which special-purpose iterative aggregation/disaggregation algorithms such as the Koury-McAllister-Stewart (KMS) method have been developed that can exploit the decomposability of the Markov chain. We present experimental results indicating that the general-purpose Multi-Level algorithm is competitive, and can be significantly faster than the special-purpose KMS algorithm when Gauss-Seidel and Gaussian Elimination are used for solving the individual blocks.

  6. Utilization of PSO algorithm in estimation of water level change of Lake Beysehir

    NASA Astrophysics Data System (ADS)

    Buyukyildiz, Meral; Tezel, Gulay

    2015-12-01

    In this study, unlike the backpropagation algorithm, which can become trapped in locally optimal solutions, the usefulness of the particle swarm optimization (PSO) algorithm, a population-based optimization technique with a global search feature inspired by the behavior of bird flocks, in determining the parameters of support vector machines (SVM) and adaptive network-based fuzzy inference system (ANFIS) methods was investigated. For this purpose, the performances of hybrid PSO-ɛ support vector regression (PSO-ɛSVR) and PSO-ANFIS models were studied to estimate the water level change of Lake Beysehir in Turkey. The change in water level was also estimated using the generalized regression neural network (GRNN) method, an iterative training procedure. Root mean square error (RMSE), mean absolute error (MAE), and the coefficient of determination (R²) were used to compare the obtained results. Efforts were made to estimate water level change (L) using different input combinations of monthly inflow-lost flow (I), precipitation (P), evaporation (E), and outflow (O). According to the obtained results, the methods other than PSO-ANN generally showed significantly similar performances to each other. The PSO-ɛSVR method, with minMAE = 0.0052 m, maxMAE = 0.04 m, and medianMAE = 0.0198 m; minRMSE = 0.0070 m, maxRMSE = 0.0518 m, and medianRMSE = 0.0241 m; minR² = 0.9169, maxR² = 0.9995, and medianR² = 0.9909 for the I-P-E-O combination in the testing period, was superior to the other methods in forecasting the water level change of Lake Beysehir. PSO-ANN models were the least successful models in all combinations.
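
    The following is a generic sketch of PSO tuning the SVR parameters C and ɛ by cross-validation; synthetic data stand in for the Lake Beysehir inputs, and the swarm settings are illustrative rather than those of the study.

      # Generic PSO sketch tuning SVR hyper-parameters (C, epsilon) by cross-validation;
      # synthetic data replace the I, P, E, O predictors used in the study.
      import numpy as np
      from sklearn.svm import SVR
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.standard_normal((120, 4))                      # stand-ins for I, P, E, O
      y = X @ np.array([0.5, -0.2, 0.1, 0.3]) + 0.05 * rng.standard_normal(120)

      def fitness(pos):
          C, eps = 10.0 ** pos[0], 10.0 ** pos[1]            # search in log10 space
          score = cross_val_score(SVR(C=C, epsilon=eps), X, y,
                                  cv=5, scoring="neg_mean_squared_error").mean()
          return -score                                      # minimize the MSE

      n_particles, n_iter = 15, 30
      lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 0.0])  # log10 bounds for C, epsilon
      pos = rng.uniform(lo, hi, size=(n_particles, 2))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
      gbest = pbest[pbest_val.argmin()].copy()

      w, c1, c2 = 0.7, 1.5, 1.5                              # inertia and acceleration terms
      for _ in range(n_iter):
          r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, lo, hi)
          vals = np.array([fitness(p) for p in pos])
          improved = vals < pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
          gbest = pbest[pbest_val.argmin()].copy()

      print("best C=%.3g, epsilon=%.3g" % (10 ** gbest[0], 10 ** gbest[1]))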

  7. Optimal annealing schedules for two-, three-, and four-level systems using a genetic algorithm approach

    NASA Astrophysics Data System (ADS)

    White, Ronald P.; Mayne, Howard R.

    2000-05-01

    An annealing schedule, T(t), is the temperature as a function of time whose goal is to bring a system from some initial low-order state to a final high-order state. We use the probability in the lowest energy level as the order parameter, so that an ideally annealed system would have all its population in its ground state. We consider a model system comprised of discrete energy levels separated by activation barriers. We have carried out annealing calculations on this system for a range of system parameters. In particular, we considered the schedule as a function of the energy level spacing, of the height of the activation barriers, and, in some cases, of the degeneracies of the levels. For a given set of physical parameters and maximum available time, tm, we were able to obtain the optimal schedule by using a genetic algorithm (GA) approach. For the two-level system, analytic solutions are available and were compared with the GA-optimized results; the agreement was essentially exact. We were able to identify systematic behaviors of the schedules and trends in final probabilities as a function of the parameters. We have also carried out Metropolis Monte Carlo (MMC) calculations on simple potential energy functions using the optimal schedules available from the model calculations. Agreement between the model and MMC calculations was excellent.

  8. TES Level 1 Algorithms: Interferogram Processing, Geolocation, Radiometric, and Spectral Calibration

    NASA Technical Reports Server (NTRS)

    Worden, Helen; Beer, Reinhard; Bowman, Kevin W.; Fisher, Brendan; Luo, Mingzhao; Rider, David; Sarkissian, Edwin; Tremblay, Denis; Zong, Jia

    2006-01-01

    The Tropospheric Emission Spectrometer (TES) on the Earth Observing System (EOS) Aura satellite measures the infrared radiance emitted by the Earth's surface and atmosphere using Fourier transform spectrometry. The measured interferograms are converted into geolocated, calibrated radiance spectra by the L1 (Level 1) processing, and are the inputs to L2 (Level 2) retrievals of atmospheric parameters, such as vertical profiles of trace gas abundance. We describe the algorithmic components of TES Level 1 processing, giving examples of the intermediate results and diagnostics that are necessary for creating TES L1 products. An assessment of noise-equivalent spectral radiance levels and current systematic errors is provided. As an initial validation of our spectral radiances, TES data are compared to the Atmospheric Infrared Sounder (AIRS) (on EOS Aqua), after accounting for spectral resolution differences by applying the AIRS spectral response function to the TES spectra. For the TES L1 nadir data products currently available, the agreement with AIRS is 1 K or better.

  9. Status of the MODIS Level 1B Algorithms and Calibration Tables

    NASA Technical Reports Server (NTRS)

    Xiong, X; Salomonson, V V; Kuyper, J; Tan, L; Chiang, K; Sun, J; Barnes, W L

    2005-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) makes observations using 36 spectral bands with wavelengths from 0.41 to 14.4 µm and nadir spatial resolutions of 0.25 km, 0.5 km, and 1 km. It is currently operating onboard the NASA Earth Observing System (EOS) Terra and Aqua satellites, launched in December 1999 and May 2002, respectively. The MODIS Level 1B (L1B) program converts the sensor's on-orbit responses in digital numbers to radiometrically calibrated and geo-located data products for the duration of each mission. Its primary data products are top of the atmosphere (TOA) reflectance factors for the sensor's reflective solar bands (RSB) and TOA spectral radiances for the thermal emissive bands (TEB). The L1B algorithms perform the TEB calibration on a scan-by-scan basis using the sensor's response to the on-board blackbody (BB) and other parameters which are stored in Lookup Tables (LUTs). The RSB calibration coefficients are processed offline and regularly updated through LUTs. In this paper we provide a brief description of the MODIS L1B calibration algorithms and associated LUTs with emphasis on their recent improvements and updates developed for the MODIS collection 5 processing. We will also discuss sensor on-orbit calibration and performance issues that are critical to maintaining L1B data product quality, such as changes in the sensor's response versus scan angle.

  10. Terra and Aqua moderate-resolution imaging spectroradiometer collection 6 level 1B algorithm

    NASA Astrophysics Data System (ADS)

    Toller, Gary; Xiong, Xiaoxiong; Sun, Junqiang; Wenny, Brian N.; Geng, Xu; Kuyper, James; Angal, Amit; Chen, Hongda; Madhavan, Sriharsha; Wu, Aisheng

    2013-01-01

    The moderate-resolution imaging spectroradiometer (MODIS) was launched on the Terra spacecraft on Dec. 18, 1999, and on Aqua on May 4, 2002. The data acquired by these instruments have contributed to the long-term climate data record for more than a decade and represent a key component of NASA's Earth observing system. Each MODIS instrument observes nearly the whole Earth each day, enabling the scientific characterization of the land, ocean, and atmosphere. The MODIS Level 1B (L1B) algorithms input uncalibrated geo-located observations and convert instrument response into calibrated reflectance and radiance, which are used to generate science data products. The instrument characterization needed to run the L1B code is currently implemented using time-dependent lookup tables. The MODIS characterization support team, working closely with the MODIS Science Team, has improved the product quality with each data reprocessing. We provide an overview of the new L1B algorithm release, designated collection 6. Recent improvements made as a consequence of on-orbit calibration, on-orbit analyses, and operational considerations are described. Instrument performance and the expected impact of L1B changes on the collection 6 L1B products are discussed.

  11. Optimal Control of Population Transfer in Three-Level Λ System with Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang-Yun; Sun, Zhen-Rong; Chen, Guo-Liang; Wang, Zu-Geng; Xü, Zhi-Zhan; Li, Ru-Xin

    2004-10-01

    Population transfer in a three-level Λ system is simulated numerically and optimized. Almost complete population transfer from |1⟩ to |3⟩ is achieved by a genetic algorithm, while the population in state |2⟩ reaches a minimum over the entire evolution at the same time. The result shows that the optimal pulse sequence is the well-known stimulated Raman adiabatic passage (STIRAP) scheme. The detunings of the pump and Stokes pulses, Δp and Δs, with opposite signs, and the chirps χp and χs with the same sign, favour complete and robust population transfer for few-cycle laser pulses. The Rabi frequencies Ωp and Ωs have little effect on the complete population transfer over a large range of their ratio when they are large enough.

  12. Parallelization of low-level computer vision algorithms on a multi-DSP system

    NASA Astrophysics Data System (ADS)

    Liu, Huaida; Jia, Pingui; Li, Lijian; Yang, Yiping

    2011-06-01

    Parallel hardware has become a commonly used approach to satisfying the intensive computation demands of computer vision systems. A multiprocessor architecture based on a hypercube interconnection of digital signal processors (DSPs) is described to exploit temporal and spatial parallelism. This paper presents a parallel implementation of low-level vision algorithms designed for a multi-DSP system. The convolution operation has been parallelized by using redundant boundary partitioning. The performance of the parallel convolution operation is investigated by varying the image size, mask size, and number of processors. Experimental results show that the speedup is close to the ideal value. However, load imbalance across processors can significantly affect the computation time and speedup of the multi-DSP system.
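
    The sketch below illustrates redundant boundary partitioning for a convolution, assuming a 5x5 averaging mask and horizontal strips: each strip carries a halo of mask-radius rows, the halo is discarded after filtering, and the reassembled result matches the serial convolution exactly (numpy/scipy stand in for the DSP kernels).

      # Redundant boundary partitioning sketch: split the image into strips with a halo
      # of mask-radius rows, filter each strip independently (one per processor/DSP in
      # the real system), drop the halos, and reassemble the result.
      import numpy as np
      from scipy import ndimage as ndi

      image = np.random.default_rng(2).random((240, 320))
      mask = np.ones((5, 5)) / 25.0                 # simple averaging mask
      R = mask.shape[0] // 2                        # mask radius -> halo width
      n_strips = 4

      reference = ndi.convolve(image, mask, mode="reflect")

      padded = np.pad(image, R, mode="symmetric")   # replicate the global boundary handling
      rows = np.array_split(np.arange(image.shape[0]), n_strips)
      pieces = []
      for idx in rows:
          r0, r1 = idx[0], idx[-1] + 1
          strip = padded[r0:r1 + 2 * R, :]          # strip plus redundant halo rows
          out = ndi.convolve(strip, mask, mode="constant")
          pieces.append(out[R:-R, R:-R])            # drop halo rows and padded columns
      result = np.vstack(pieces)

      assert np.allclose(result, reference)         # identical to the serial result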

  13. An overview of the CATS level 1 processing algorithms and data products

    NASA Astrophysics Data System (ADS)

    Yorks, J. E.; McGill, M. J.; Palm, S. P.; Hlavka, D. L.; Selmer, P. A.; Nowottnick, E. P.; Vaughan, M. A.; Rodier, S. D.; Hart, W. D.

    2016-05-01

    The Cloud-Aerosol Transport System (CATS) is an elastic backscatter lidar that was launched on 10 January 2015 to the International Space Station (ISS). CATS provides both space-based technology demonstrations for future Earth Science missions and operational science measurements. This paper outlines the CATS Level 1 data products and processing algorithms. Initial results and validation data demonstrate the ability to accurately detect optically thin atmospheric layers with 1064 nm nighttime backscatter as low as 5.0 × 10⁻⁵ km⁻¹ sr⁻¹. This sensitivity, along with the orbital characteristics of the ISS, enables the use of CATS data for cloud and aerosol climate studies. The near-real-time downlinking and processing of CATS data are unprecedented capabilities and provide data that have applications such as forecasting of volcanic plume transport for aviation safety and aerosol vertical structure that will improve air quality health alerts globally.

  14. A reliable energy-efficient multi-level routing algorithm for wireless sensor networks using fuzzy Petri nets.

    PubMed

    Yu, Zhenhua; Fu, Xiao; Cai, Yuanli; Vuran, Mehmet C

    2011-01-01

    A reliable energy-efficient multi-level routing algorithm in wireless sensor networks is proposed. The proposed algorithm considers the residual energy, number of the neighbors and centrality of each node for cluster formation, which is critical for well-balanced energy dissipation of the network. In the algorithm, a knowledge-based inference approach using fuzzy Petri nets is employed to select cluster heads, and then the fuzzy reasoning mechanism is used to compute the degree of reliability in the route sprouting tree from cluster heads to the base station. Finally, the most reliable route among the cluster heads can be constructed. The algorithm not only balances the energy load of each node but also provides global reliability for the whole network. Simulation results demonstrate that the proposed algorithm effectively prolongs the network lifetime and reduces the energy consumption. PMID:22163802

  15. A Reliable Energy-Efficient Multi-Level Routing Algorithm for Wireless Sensor Networks Using Fuzzy Petri Nets

    PubMed Central

    Yu, Zhenhua; Fu, Xiao; Cai, Yuanli; Vuran, Mehmet C.

    2011-01-01

    A reliable energy-efficient multi-level routing algorithm in wireless sensor networks is proposed. The proposed algorithm considers the residual energy, number of the neighbors and centrality of each node for cluster formation, which is critical for well-balanced energy dissipation of the network. In the algorithm, a knowledge-based inference approach using fuzzy Petri nets is employed to select cluster heads, and then the fuzzy reasoning mechanism is used to compute the degree of reliability in the route sprouting tree from cluster heads to the base station. Finally, the most reliable route among the cluster heads can be constructed. The algorithm not only balances the energy load of each node but also provides global reliability for the whole network. Simulation results demonstrate that the proposed algorithm effectively prolongs the network lifetime and reduces the energy consumption. PMID:22163802

  16. SMOS/SMAP Synergy for SMAP Level 2 Soil Moisture Algorithm Evaluation

    NASA Technical Reports Server (NTRS)

    Bindlish, Rajat; Jackson, Thomas J.; Zhao, Tianjie; Cosh, Michael; Chan, Steven; O'Neill, Peggy; Njoku, Eni; Colliander, Andreas; Kerr, Yann

    2011-01-01

    ancillary data) were used to correct for surface temperature effects and to derive microwave emissivity. ECMWF data were also used for precipitation forecasts, presence of snow, and frozen ground. Vegetation options are described below. One year of soil moisture observations from a set of four watersheds in the U.S. were used to evaluate four different retrieval methodologies: (1) SMOS soil moisture estimates (version 400), (2) SeA soil moisture estimates using the SMOS/SMAP data with SMOS estimated vegetation optical depth, which is part of the SMOS level 2 product, (3) SeA soil moisture estimates using the SMOS/SMAP data and the MODIS-based vegetation climatology data, and (4) SeA soil moisture estimates using the SMOS/SMAP data and actual MODIS observations. The use of SMOS real-world global microwave observations and the analyses described here will help in the development and selection of different land surface parameters and ancillary observations needed for the SMAP soil moisture algorithms. These investigations will greatly improve the quality and reliability of this SMAP product at launch.

  17. Testing a real-time algorithm for the detection of tsunami signals on sea-level records

    NASA Astrophysics Data System (ADS)

    Bressan, L.; Tinti, S.; Titov, V.

    2009-04-01

    One of the important tasks for the implementation of a tsunami warning system in the Mediterranean Sea is to develop a real-time detection algorithm. Unlike the Mediterranean Sea, tsunamis happen quite often in the Pacific Ocean and have historically been recorded with a proper sampling rate, so a large database of tsunami records is available for the Pacific. The Tsunami Research Team of the University of Bologna is developing a real-time detection algorithm on synthetic records. Thanks to the collaboration with NCTR of PMEL/NOAA (NOAA Center for Tsunami Research of the Pacific Marine Environmental Laboratory/National Oceanic and Atmospheric Administration), it has been possible to test this algorithm on specific events recorded by the Adak Island tide gauge, in Alaska, and by DART buoys located offshore Alaska. This work has been undertaken in the framework of the Italian national project DPC-INGV S3. The detection algorithm has the goal of discriminating the first tsunami wave from the preceding background signal. In short, the algorithm is built on a parameter based on the standard deviation of the signal calculated over a short time window and on the comparison of this parameter with its computed prediction through a control function. The control function indicates a tsunami detection whenever it exceeds a certain threshold. The algorithm was calibrated and tested both on coastal tide gauges and on offshore buoys that measure sea-level changes. Its calibration presents different issues depending on whether the algorithm is implemented on an offshore buoy or on a coastal tide gauge. In particular, the algorithm parameters are site-specific for coastal sea-level signals, because sea-level changes there are mainly characterized by oscillations induced by the coastal topography. The Adak Island background signal was analyzed and the algorithm parameters were set: it was found that there is a persistent presence of seiches with periods in the tsunami range, to which the algorithm is also
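
    One plausible reading of the detection scheme is sketched below: a short-window standard deviation of the sea-level signal is compared with a background-derived prediction of that standard deviation, and a detection is flagged when their ratio (standing in for the control function) exceeds a threshold; the window lengths and threshold are illustrative, not the calibrated site-specific values of the study.

      # Sketch of a standard-deviation-based detector on a sea-level record.
      import numpy as np

      def detect(signal, short_win=60, background_win=600, threshold=3.0):
          """Return sample indices at which the control function exceeds the threshold."""
          sig = np.asarray(signal, dtype=float)
          alarms = []
          for t in range(background_win + short_win, len(sig)):
              current = sig[t - short_win:t].std()                      # short-window std
              background = sig[t - background_win - short_win:t - short_win].std()
              control = current / (background + 1e-12)                  # guard against flat background
              if control > threshold:
                  alarms.append(t)
          return alarms

      # Synthetic test: low-amplitude background noise plus a late, larger oscillation.
      rng = np.random.default_rng(3)
      t = np.arange(4000)
      sea_level = 0.02 * rng.standard_normal(t.size)
      sea_level[3000:] += 0.3 * np.sin(2 * np.pi * (t[3000:] - 3000) / 600.0)
      print(detect(sea_level)[:3])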

  18. CT liver volumetry using geodesic active contour segmentation with a level-set algorithm

    NASA Astrophysics Data System (ADS)

    Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard

    2010-03-01

    Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on a five-step scheme. First, an anisotropic smoothing filter was applied to portal-venous phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. By using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour segmentation algorithm coupled with level-set contour evolution refined the initial surface so as to more precisely fit the liver boundary. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. The automated liver volumes obtained were compared with those manually traced by a radiologist, used as the "gold standard." The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetry based on the automated scheme agreed excellently with the "gold-standard" manual volumetry (intra-class correlation coefficient 0.95), with no statistically significant difference (p(F<=f)=0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate way of measuring liver volumes.
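
    A rough 2-D sketch of the geodesic-active-contour step using scikit-image is given below; the paper's full pipeline (anisotropic diffusion, edge enhancement, fast-marching initialisation, and 3-D level-set refinement on CT volumes) is not reproduced, and the synthetic image, seed location, and parameters are illustrative only.

      # 2-D geodesic active contour sketch on a synthetic "organ" image; the seed region
      # and parameters are illustrative assumptions, not the paper's settings.
      import numpy as np
      from skimage.segmentation import morphological_geodesic_active_contour, inverse_gaussian_gradient

      # Synthetic object: a bright disk on a darker background with noise.
      yy, xx = np.mgrid[0:128, 0:128]
      image = 0.2 + 0.6 * ((yy - 64) ** 2 + (xx - 60) ** 2 < 30 ** 2)
      image = image + 0.05 * np.random.default_rng(4).standard_normal(image.shape)

      speed = inverse_gaussian_gradient(image)           # low values at strong edges
      init = np.zeros(image.shape, dtype=np.int8)        # small seed inside the object
      init[56:72, 52:68] = 1

      segmentation = morphological_geodesic_active_contour(
          speed, 200, init_level_set=init, smoothing=2, balloon=1)

      print("segmented pixels:", int(segmentation.sum()))  # area as a crude volume analogue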

  19. Level 3 trigger algorithm and Hardware Platform for the HADES experiment

    NASA Astrophysics Data System (ADS)

    Kirschner, Daniel Georg; Agakishiev, Geydar; Liu, Ming; Perez, Tiago; Kühn, Wolfgang; Pechenov, Vladimir; Spataro, Stefano

    2009-01-01

    A next-generation real-time trigger method to improve the enrichment of lepton events in the High Acceptance DiElectron Spectrometer (HADES) trigger system has been developed. In addition, a flexible hardware platform (Gigabit Ethernet-Multi-Node, GE-MN) was developed to implement and test the trigger method. The trigger method correlates the ring information of the HADES Ring Imaging Cherenkov (RICH) detector with the fired wires (drift cells) of the HADES Mini Drift Chamber (MDC) detector. It is demonstrated that this Level 3 trigger method can enhance the number of events which contain leptons by a factor of up to 50 at efficiencies above 80%. The performance of the correlation method, in terms of events analyzed per second, has been studied with the GE-MN prototype in a lab test setup by streaming previously recorded experiment data to the module. This paper is a compilation from Kirschner [Level 3 trigger algorithm and Hardware Platform for the HADES experiment, Ph.D. Thesis, II. Physikalisches Institut der Justus-Liebig-Universität Gießen, urn:nbn:de:hebis:26-opus-50784, October 2007 [1

  20. Intelligence System for Diagnosis Level of Coronary Heart Disease with K-Star Algorithm

    PubMed Central

    Kusnanto, Hari; Herianto, Herianto

    2016-01-01

    Objectives Coronary heart disease is the leading cause of death worldwide, and it is important to diagnose the level of the disease. Intelligence systems for diagnosis have been shown to support the diagnosis of the disease. Unfortunately, most of the available data across the levels/types of coronary heart disease are unbalanced. As a result, system performance is low. Methods This paper proposes an intelligence system for the diagnosis of the level of coronary heart disease that takes into account the problem of data imbalance. The first stage of this research was preprocessing, which included resampling with non-stratified random sampling (R), the synthetic minority over-sampling technique (SMOTE), cleaning data with out-of-range attribute values (COR), and removing duplicates (RD). The second step was splitting the data for training and testing using a k-fold cross-validation model and training a multiclass classifier with the K-star algorithm. The third step was performance evaluation. The proposed system was evaluated using the performance parameters of sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), area under the curve (AUC), and F-measure. Results The results showed that the proposed system provides an average performance with a sensitivity of 80.1%, specificity of 95%, PPV of 80.1%, NPV of 95%, AUC of 87.5%, and F-measure of 80.1%. The performance of the system without consideration of data imbalance showed a sensitivity of 53.1%, specificity of 88.3%, PPV of 53.1%, NPV of 88.3%, AUC of 70.7%, and F-measure of 53.1%. Conclusions Based on these results, it can be concluded that the proposed system delivers good classification performance. PMID:26893948
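
    A sketch of the imbalance-aware pipeline, under stated assumptions, is shown below: SMOTE oversampling inside a stratified k-fold loop, with a nearest-neighbour classifier standing in for the K-star algorithm (which is not available in scikit-learn) and synthetic data standing in for the clinical records.

      # Imbalance-aware k-fold pipeline sketch; KNeighborsClassifier is a stand-in for
      # the instance-based K-star algorithm, and the data set is synthetic.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import StratifiedKFold
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.metrics import recall_score
      from imblearn.over_sampling import SMOTE

      X, y = make_classification(n_samples=600, n_classes=3, n_informative=6,
                                 weights=[0.7, 0.2, 0.1], random_state=0)

      sensitivities = []
      for train, test in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
          X_res, y_res = SMOTE(random_state=0).fit_resample(X[train], y[train])  # balance classes
          clf = KNeighborsClassifier(n_neighbors=5).fit(X_res, y_res)
          pred = clf.predict(X[test])
          sensitivities.append(recall_score(y[test], pred, average="macro"))     # per-class recall

      print("mean macro sensitivity: %.3f" % np.mean(sensitivities))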

  1. Quality-aware features-based noise level estimator for block matching and three-dimensional filtering algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Shaoping; Hu, Lingyan; Yang, Xiaohui

    2016-01-01

    The performance of conventional denoising algorithms is usually controlled by one or several parameters whose optimal settings depend on the content of the processed images and the characteristics of the noise. Among these parameters, the noise level is a fundamental parameter that is always assumed to be known by most existing denoising algorithms (so-called nonblind denoising algorithms), which largely limits their applicability in many applications. Moreover, these nonblind algorithms do not always achieve the best denoised images in visual quality even when fed with the actual noise level parameter. To address these shortcomings, in this paper we propose a new quality-aware features-based noise level estimator (NLE), which consists of quality-aware feature extraction and optimal noise level parameter prediction. First, considering that image local contrast features convey important structural information that is closely related to image perceptual quality, we utilize the marginal statistics of two local contrast operators, i.e., the gradient magnitude and the Laplacian of Gaussian (LoG), to extract quality-aware features. The proposed quality-aware features have very low computational complexity, making them well suited for time-constrained applications. Then we propose a learning-based framework in which the noise level parameter is estimated from the quality-aware features. Based on the proposed NLE, we develop a blind block matching and three-dimensional filtering (BBM3D) denoising algorithm which is capable of effectively removing additive white Gaussian noise, even coupled with impulse noise. The noise level parameter of the BBM3D algorithm is automatically tuned according to the quality-aware features, guaranteeing the best performance. As such, the classical block matching and three-dimensional filtering algorithm can be transformed into a blind one in an unsupervised manner. Experimental results demonstrate that the
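
    The following sketch illustrates the idea under stated assumptions: marginal statistics of the gradient-magnitude and LoG responses serve as features, and a regressor trained on images with known added noise predicts the noise level of a new image; the feature set, regressor, and training images are illustrative only.

      # Quality-aware feature sketch: gradient-magnitude and LoG statistics feed a
      # regressor that estimates the noise level; all choices here are illustrative.
      import numpy as np
      from scipy import ndimage as ndi
      from sklearn.ensemble import RandomForestRegressor

      def quality_features(img):
          gm = np.hypot(ndi.sobel(img, axis=0), ndi.sobel(img, axis=1))  # gradient magnitude
          log = ndi.gaussian_laplace(img, sigma=1.0)                     # LoG response
          feats = []
          for resp in (gm, log):
              feats += [resp.mean(), resp.std(),
                        np.percentile(resp, 25), np.percentile(resp, 75)]
          return np.array(feats)

      rng = np.random.default_rng(5)
      base = ndi.gaussian_filter(rng.random((96, 96)), 3.0)   # smooth synthetic "clean" image

      X_train, y_train = [], []
      for sigma in np.linspace(0.0, 0.2, 21):                 # known noise levels for training
          noisy = base + sigma * rng.standard_normal(base.shape)
          X_train.append(quality_features(noisy))
          y_train.append(sigma)

      nle = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

      test = base + 0.12 * rng.standard_normal(base.shape)    # unseen noise level
      print("estimated noise level: %.3f" % nle.predict([quality_features(test)])[0])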

  2. A level 2 wind speed retrieval algorithm for the CYGNSS mission

    NASA Astrophysics Data System (ADS)

    Clarizia, Maria Paola; Ruf, Christopher; O'Brien, Andrew; Gleason, Scott

    2014-05-01

    The NASA EV-2 Cyclone Global Navigation Satellite System (CYGNSS) is a spaceborne mission focused on tropical cyclone (TC) inner core process studies. CYGNSS consists of a constellation of 8 microsatellites, which will measure ocean surface wind speed in all precipitating conditions, including those experienced in the TC eyewall, and with sufficient frequency to resolve genesis and rapid intensification. It does so through the use of an innovative remote sensing technique, known as Global Navigation Satellite System-Reflectometry, or GNSS-R. GNSS-R uses signals of opportunity from navigation constellations (e.g. GPS, GLONASS, Galileo), scattered by the surface of the ocean, to retrieve the surface wind speed. The dense space-time sampling capabilities, the ability of L-band signals to penetrate well through rain, and the possibility of simple, low-cost/low-power GNSS receivers make GNSS-R ideal for the CYGNSS goals. Here we present an overview of a Level 2 (L2) wind speed retrieval algorithm which would be particularly suitable for CYGNSS and could be used to estimate winds from GNSS-R in general. The approach makes use of two different observables computed from 1-second Level 2a (L2a) delay-Doppler Maps (DDMs) of radar cross section. The first observable, called the Delay-Doppler Map Average (DDMA), is the averaged radar cross section over a delay-Doppler window around the DDM peak (i.e. the specular reflection point coordinate in delay and Doppler). The second, called the Leading Edge Slope (LES), is the slope of the leading edge of the Integrated Delay Waveform (IDW), obtained by integrating the DDM along the Doppler dimension. The observables are calculated over a limited range of delays and Doppler frequencies to comply with baseline spatial resolution requirements for the retrieved winds, which in the case of CYGNSS is 25 km x 25 km. If the observable from the 1-second DDM corresponds to a resolution higher than the specified one, time-averaging between
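
    The two observables can be sketched as follows, under assumptions about the DDM layout (delay on rows, Doppler on columns) and the window sizes; a synthetic DDM replaces CYGNSS L2a data.

      # DDMA and LES observable sketch on a synthetic delay-Doppler map.
      import numpy as np

      def ddma_and_les(ddm, delay_half_win=2, doppler_half_win=5, leading_bins=3):
          """Return (DDMA, LES) computed around the DDM peak."""
          ddm = np.asarray(ddm, dtype=float)
          pk_delay, pk_dopp = np.unravel_index(ddm.argmax(), ddm.shape)

          # DDMA: mean radar cross section over a delay-Doppler window centred on the peak.
          d0, d1 = max(pk_delay - delay_half_win, 0), pk_delay + delay_half_win + 1
          f0, f1 = max(pk_dopp - doppler_half_win, 0), pk_dopp + doppler_half_win + 1
          ddma = ddm[d0:d1, f0:f1].mean()

          # LES: slope of the leading edge of the integrated delay waveform (sum over Doppler).
          idw = ddm[:, f0:f1].sum(axis=1)
          lead = np.arange(max(pk_delay - leading_bins, 0), pk_delay + 1)
          les = np.polyfit(lead, idw[lead], 1)[0]     # linear slope over the rising edge
          return ddma, les

      # Synthetic DDM: a smooth peak plus noise.
      dly, dop = np.mgrid[0:17, 0:11]
      ddm = np.exp(-((dly - 8) ** 2) / 6.0 - ((dop - 5) ** 2) / 8.0)
      ddm += 0.02 * np.random.default_rng(6).standard_normal(ddm.shape)
      print(ddma_and_les(ddm))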

  3. Pediatric chest HRCT using the iDose4 Hybrid Iterative Reconstruction Algorithm: Which iDose level to choose?

    NASA Astrophysics Data System (ADS)

    Smarda, M.; Alexopoulou, E.; Mazioti, A.; Kordolaimi, S.; Ploussi, A.; Priftis, K.; Efstathopoulos, E.

    2015-09-01

    The purpose of the study is to determine the appropriate iterative reconstruction (IR) algorithm level that combines image quality and diagnostic confidence for pediatric patients undergoing high-resolution computed tomography (HRCT). During the last 2 years, a total of 20 children up to 10 years old with a clinical presentation of chronic bronchitis underwent HRCT on our department's 64-detector-row CT scanner using the iDose IR algorithm, with similar imaging settings (80 kVp, 40-50 mAs). CT images were reconstructed with all iDose levels (levels 1 to 7) as well as with the filtered back-projection (FBP) algorithm. Subjective image quality was evaluated by 2 experienced radiologists in terms of image noise, sharpness, contrast, and diagnostic acceptability using a 5-point scale (1 = excellent image, 5 = non-acceptable image). The presence of artifacts was also noted. All mean scores from both radiologists corresponded to satisfactory image quality (score ≤3), even with the FBP algorithm. Almost excellent (score <2) overall image quality was achieved with iDose levels 5 to 7, but oversmoothing artifacts appearing with iDose levels 6 and 7 affected diagnostic confidence. In conclusion, the use of iDose level 5 enables almost excellent image quality without considerable artifacts affecting the diagnosis. Further evaluation is needed in order to draw more precise conclusions.

  4. Development of an algorithm to meaningfully interpret patterns in street-level methane concentrations

    NASA Astrophysics Data System (ADS)

    von Fischer, Joseph; Salo, Jessica; Griebenow, Claire; Bischak, Linde; Cooley, Daniel; Ham, Jay; Schumacher, Russ

    2013-04-01

    Methane (CH4) is an important greenhouse gas that has 70 times greater heat forcing per molecule than CO2 over its ~10-year atmospheric residence time. Given this short residence time, there has been a surge of interest in mitigating anthropogenic CH4 sources because they will have a more immediate effect on warming rates. Recent observations of CH4 concentrations around the city of Boston reveal that natural gas distribution systems can have a very large number of leaks. However, there are a number of conceptual and practical challenges associated with the interpretation of CH4 data gathered by car at the street level. In this presentation, we detail our efforts to develop an "algorithm," or set of standard practices, for interpreting these patterns based on our own findings. At the most basic level, we have evaluated approaches for vehicle driving patterns and management of the raw data. We also identify techniques for evaluating data quality and discerning when elevated CH4 may be due to other vehicles (e.g., CNG-powered city buses). We then compare methods for identifying "peaks" in CH4 concentration, and we discuss several approaches for relating concentration, space, and wind data to emission rates. Finally, we provide some considerations for how the data from individual peaks might be aggregated to larger spatial scales.

  5. Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.

    SciTech Connect

    Thompson, Aidan P.; Schultz, Peter A.; Crozier, Paul; Moore, Stan Gerald; Swiler, Laura Painton; Stephens, John Adam; Trott, Christian Robert; Foiles, Stephen M.; Tucker, Garritt J.

    2014-09-01

    This report summarizes the results of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called the Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel
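
    The regression step can be illustrated conceptually as below: SNAP coefficients follow from weighted linear least squares of QM reference energies against per-configuration sums of bispectrum components; random numbers stand in for the descriptors and energies, and the real workflow uses LAMMPS, DAKOTA, and FitSnap.py.

      # Conceptual weighted least-squares sketch for SNAP-style coefficient fitting;
      # synthetic numbers replace the bispectrum descriptors and QM training energies.
      import numpy as np

      rng = np.random.default_rng(7)
      n_config, n_bispectrum = 200, 30
      B = rng.standard_normal((n_config, n_bispectrum))   # summed bispectrum components per config
      true_beta = rng.standard_normal(n_bispectrum)
      E_qm = B @ true_beta + 0.01 * rng.standard_normal(n_config)   # "QM" training energies
      w = rng.uniform(0.5, 2.0, n_config)                 # per-configuration weights

      # Weighted least squares: minimize || W^(1/2) (B beta - E) ||^2
      sw = np.sqrt(w)[:, None]
      beta, *_ = np.linalg.lstsq(sw * B, np.sqrt(w) * E_qm, rcond=None)

      print("max coefficient error: %.2e" % np.abs(beta - true_beta).max())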

  6. The Multi Level Multi Domain (MLMD) method: a semi-implicit adaptive algorithm for Particle In Cell plasma simulations

    NASA Astrophysics Data System (ADS)

    Innocenti, Maria Elena; Beck, Arnaud; Markidis, Stefano; Lapenta, Giovanni

    2013-10-01

    Particle in Cell (PIC) simulations of plasmas are no longer bound by the stability constraints of explicit algorithms. Semi-implicit and fully implicit methods allow the use of larger grid spacings and time steps. Adaptive Mesh Refinement (AMR) techniques make it possible to change the simulation resolution locally. The code proposed in Innocenti et al., 2013 and Beck et al., 2013 is, however, the first to combine the advantages of both. The use of the Implicit Moment Method allows the resolution used in each level to be tailored to the physical scales of interest and high Refinement Factors (RF) to be used between the levels. The Multi Level Multi Domain (MLMD) structure, where all levels are simulated as complete domains, combines algorithmic and practical advantages. The different levels evolve according to the local dynamics and achieve optimal level interlocking. Also, the capabilities of the Object Oriented programming model are fully exploited. The MLMD algorithm is demonstrated with magnetic reconnection and collisionless shock simulations with very high RFs between the levels. Notable computational gains are achieved with respect to simulations performed on the entire domain with the higher resolution. Beck A. et al. (2013), submitted. Innocenti M. E. et al. (2013), JCP, 238(0):115-140.

  7. Initial condition for efficient mapping of level set algorithms on many-core architectures

    NASA Astrophysics Data System (ADS)

    Tornai, Gábor János; Cserey, György

    2014-12-01

    In this paper, we investigated the effect of adding more small curves to the initial condition, which determines the required number of iterations of a fast level set (LS) evolution. As a result, we discovered two new theorems and developed a proof on the worst case of the required number of iterations. Furthermore, we found that these kinds of initial conditions fit well to many-core architectures. To show this, we have included two case studies which are presented on different platforms. One runs on a graphical processing unit (GPU) and the other is executed on a cellular nonlinear network universal machine (CNN-UM). With the new initial conditions, the steady-state solutions of the LS are reached in fewer than eight iterations, depending on the granularity of the initial condition. These dense iterations can be calculated very quickly on many-core platforms according to the two case studies. In the case of the proposed dense initial condition on GPU, there is a significant speedup compared to the sparse initial condition in all cases, since our dense initial condition together with the algorithm utilizes the properties of the underlying architecture. Therefore, a greater performance gain can be achieved (up to 18 times speedup compared to the sparse initial condition on GPU). Additionally, we have validated our concept against numerically approximated LS evolution of standard flows (mean curvature, Chan-Vese, geodesic active regions). The Dice indices between the fast LS evolutions and the evolutions of the numerically approximated partial differential equations are in the range of 0.99±0.003.

  8. Level-set reconstruction algorithm for ultrafast limited-angle X-ray computed tomography of two-phase flows

    PubMed Central

    Bieberle, M.; Hampel, U.

    2015-01-01

    Tomographic image reconstruction is based on recovering an object distribution from its projections, which have been acquired from all angular views around the object. If the angular range is limited to less than 180° of parallel projections, typical reconstruction artefacts arise when using standard algorithms. To compensate for this, specialized algorithms using a priori information about the object need to be applied. The application behind this work is ultrafast limited-angle X-ray computed tomography of two-phase flows. Here, only a binary distribution of the two phases needs to be reconstructed, which reduces the complexity of the inverse problem. To solve it, a new reconstruction algorithm (LSR) based on the level-set method is proposed. It includes one force function term accounting for matching the projection data and one incorporating a curvature-dependent smoothing of the phase boundary. The algorithm has been validated using simulated as well as measured projections of known structures, and its performance has been compared to the algebraic reconstruction technique and a binary derivative of it. The validation as well as the application of the level-set reconstruction on a dynamic two-phase flow demonstrated its applicability and its advantages over other reconstruction algorithms. PMID:25939623

  9. Level-set reconstruction algorithm for ultrafast limited-angle X-ray computed tomography of two-phase flows.

    PubMed

    Bieberle, M; Hampel, U

    2015-06-13

    Tomographic image reconstruction is based on recovering an object distribution from its projections, which have been acquired from all angular views around the object. If the angular range is limited to less than 180° of parallel projections, typical reconstruction artefacts arise when using standard algorithms. To compensate for this, specialized algorithms using a priori information about the object need to be applied. The application behind this work is ultrafast limited-angle X-ray computed tomography of two-phase flows. Here, only a binary distribution of the two phases needs to be reconstructed, which reduces the complexity of the inverse problem. To solve it, a new reconstruction algorithm (LSR) based on the level-set method is proposed. It includes one force function term accounting for matching the projection data and one incorporating a curvature-dependent smoothing of the phase boundary. The algorithm has been validated using simulated as well as measured projections of known structures, and its performance has been compared to the algebraic reconstruction technique and a binary derivative of it. The validation as well as the application of the level-set reconstruction on a dynamic two-phase flow demonstrated its applicability and its advantages over other reconstruction algorithms. PMID:25939623

  10. Improving Limit Surface Search Algorithms in RAVEN Using Acceleration Schemes: Level II Milestone

    SciTech Connect

    Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego; Cogliati, Joshua Joseph; Sen, Ramazan Sonat; Smith, Curtis Lee

    2015-07-01

    The RAVEN code is becoming a comprehensive tool to perform Probabilistic Risk Assessment (PRA); Uncertainty Quantification (UQ) and Propagation; and Verification and Validation (V&V). The RAVEN code is being developed to support the Risk-Informed Safety Margin Characterization (RISMC) pathway by developing an advanced set of methodologies and algorithms for use in advanced risk analysis. The RISMC approach uses system simulator codes applied to stochastic analysis tools. The fundamental idea behind this coupling approach is to perturb (by employing sampling strategies) the timing and sequencing of events, the internal parameters of the system codes (i.e., uncertain parameters of the physics model), and the initial conditions, in order to estimate value ranges and associated probabilities of figures of merit of interest for engineering and safety (e.g. core damage probability, etc.). This approach, applied to complex systems such as nuclear power plants, requires performing a series of computationally expensive simulation runs. The large computational burden is caused by the large set of (uncertain) parameters characterizing those systems. Consequently, exploring the uncertain/parametric domain with a good level of confidence is generally not affordable, considering the limited computational resources that are currently available. In addition, the recent tendency to develop newer tools, characterized by higher accuracy and larger computational resources (compared with the presently used legacy codes, which were developed decades ago), has made this issue even more compelling. In order to overcome these limitations, the strategy for the exploration of the uncertain/parametric space needs to make the best use of the computational resources, focusing the computational effort on those regions of the uncertain/parametric space that are “interesting” (e.g., risk-significant regions of the input space) with respect to the targeted Figures Of Merit (FOM): for example, the failure of the system

  11. A component-level failure detection and identification algorithm based on open-loop and closed-loop state estimators

    NASA Astrophysics Data System (ADS)

    You, Seung-Han; Cho, Young Man; Hahn, Jin-Oh

    2013-04-01

    This study presents a component-level failure detection and identification (FDI) algorithm for a cascade mechanical system comprising a plant driven by an actuator unit. The novelty of the FDI algorithm presented in this study is that it is able to discriminate among failures occurring in the actuator unit, the sensor measuring the output of the actuator unit, and the plant driven by the actuator unit. The proposed FDI algorithm exploits the measurement of the actuator unit output together with its estimates generated by open-loop (OL) and closed-loop (CL) estimators to enable FDI at the component level. In this study, the OL estimator is designed based on the system identification of the actuator unit. The CL estimator, which is guaranteed to be stable against variations in the plant, is synthesized based on the dynamics of the entire cascade system. The viability of the proposed algorithm is demonstrated using a hardware-in-the-loop simulation (HILS), which shows that it can detect and identify target failures reliably in the presence of plant uncertainties.
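
    A schematic sketch of the kind of decision logic implied by the abstract is given below; the residual thresholds and the failure mapping are illustrative assumptions, not the paper's rules.

      # Schematic residual-based decision logic: compare the measured actuator output
      # with the open-loop (OL) and closed-loop (CL) estimates and map the residual
      # pattern to a suspected component. Thresholds and mapping are assumptions.
      def classify_failure(y_measured, y_ol, y_cl, tol=0.05):
          r_ol = abs(y_measured - y_ol)      # residual against the actuator-unit model
          r_cl = abs(y_measured - y_cl)      # residual against the full cascade model
          if r_ol <= tol and r_cl <= tol:
              return "no failure detected"
          if r_ol > tol and r_cl > tol:
              return "actuator-unit or sensor failure suspected"
          if r_ol <= tol and r_cl > tol:
              return "plant failure suspected (actuator consistent, cascade inconsistent)"
          return "sensor failure suspected (cascade consistent, actuator model inconsistent)"

      print(classify_failure(1.00, 0.99, 1.30))   # example: only the CL estimate disagrees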

  12. Comparing Learning Performance of Students Using Algorithm Visualizations Collaboratively on Different Engagement Levels

    ERIC Educational Resources Information Center

    Laakso, Mikko-Jussi; Myller, Niko; Korhonen, Ari

    2009-01-01

    In this paper, two emerging learning and teaching methods have been studied: collaboration in concert with algorithm visualization. When visualizations have been employed in collaborative learning, collaboration introduces new challenges for the visualization tools. In addition, new theories are needed to guide the development and research of the…

  13. Optimisation of groundwater level monitoring networks using geostatistical modelling based on the Spartan family variogram and a genetic algorithm method

    NASA Astrophysics Data System (ADS)

    Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2016-04-01

    Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, the development of tools which can be used by regulators for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The network optimisation tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to the Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently established, non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area, leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates, followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (the error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the
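
    The optimisation loop can be sketched as below under stated assumptions: a chromosome lists the wells that are kept, the fitness is the 2-norm between the map built from all wells and the map built from the kept wells, and the hypothetical interpolate_levels function (inverse-distance weighting here) stands in for the Ordinary Kriging/Spartan-variogram mapping; the evolutionary loop is simplified to selection and mutation only.

      # Simplified evolutionary search for wells to remove from a monitoring network;
      # interpolate_levels is a hypothetical stand-in for the kriging-based mapping.
      import numpy as np

      rng = np.random.default_rng(8)
      wells = rng.uniform(0, 10, size=(70, 2))            # well coordinates (synthetic)
      levels = np.sin(wells[:, 0]) + 0.3 * wells[:, 1]    # groundwater levels (synthetic)
      grid = np.stack(np.meshgrid(np.linspace(0, 10, 40),
                                  np.linspace(0, 10, 40)), axis=-1).reshape(-1, 2)

      def interpolate_levels(xy, z, grid_pts):
          """Hypothetical mapping step: inverse-distance weighting as a kriging stand-in."""
          d = np.linalg.norm(grid_pts[:, None, :] - xy[None, :, :], axis=-1) + 1e-6
          w = 1.0 / d ** 2
          return (w * z).sum(axis=1) / w.sum(axis=1)

      full_map = interpolate_levels(wells, levels, grid)

      def fitness(keep):                                   # smaller is better
          sub_map = interpolate_levels(wells[keep], levels[keep], grid)
          return np.linalg.norm(full_map - sub_map)        # 2-norm of the mapping difference

      n_keep, pop_size, n_gen = 50, 30, 40
      pop = [rng.permutation(70)[:n_keep] for _ in range(pop_size)]
      for _ in range(n_gen):
          scored = sorted(pop, key=lambda idx: fitness(np.isin(np.arange(70), idx)))
          parents = scored[: pop_size // 2]
          children = []
          for p in parents:                                # mutate: swap one kept well for an unused one
              child = p.copy()
              out = rng.choice(np.setdiff1d(np.arange(70), child))
              child[rng.integers(n_keep)] = out
              children.append(child)
          pop = parents + children

      best = sorted(pop, key=lambda idx: fitness(np.isin(np.arange(70), idx)))[0]
      print("wells proposed for removal:", np.setdiff1d(np.arange(70), best))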

  14. Sentinel-2 Level 2A Prototype Processor: Architecture, Algorithms And First Results

    NASA Astrophysics Data System (ADS)

    Muller-Wilm, Uwe; Louis, Jerome; Richter, Rudolf; Gascon, Ferran; Niezette, Marc

    2013-12-01

    Sen2Cor is a prototype processor for Sentinel-2 Level 2A product processing and formatting. The processor is developed for and with ESA and performs the tasks of Atmospheric Correction and Scene Classification of Level 1C input data. Level 2A outputs are: Bottom-Of-Atmosphere (BOA) corrected reflectance images; Aerosol Optical Thickness, Water Vapour, and Scene Classification maps; and Quality Indicators, including cloud and snow probabilities. The Level 2A Product Formatting performed by the processor follows the specification of the Level 1C User Product.

  15. Automatic Localization of Target Vertebrae in Spine Surgery: Clinical Evaluation of the LevelCheck Registration Algorithm

    PubMed Central

    Lo, Sheng-fu L.; Otake, Yoshito; Puvanesarajah, Varun; Wang, Adam S.; Uneri, Ali; De Silva, Tharindu; Vogt, Sebastian; Kleinszig, Gerhard; Elder, Benjamin D; Goodwin, C. Rory; Kosztowski, Thomas A.; Liauw, Jason A.; Groves, Mari; Bydon, Ali; Sciubba, Daniel M.; Witham, Timothy F.; Wolinsky, Jean-Paul; Aygun, Nafi; Gokaslan, Ziya L.; Siewerdsen, Jeffrey H.

    2015-01-01

    Study Design A 3D-2D image registration algorithm, “LevelCheck,” was used to automatically label vertebrae in intraoperative mobile radiographs obtained during spine surgery. Accuracy, computation time, and potential failure modes were evaluated in a retrospective study of 20 patients. Objective To measure the performance of the LevelCheck algorithm using clinical images acquired during spine surgery. Summary of Background Data In spine surgery, the potential for wrong-level surgery is significant due to the difficulty of localizing target vertebrae based solely on visual impression, palpation, and fluoroscopy. To remedy this difficulty and reduce the risk of wrong-level surgery, our team introduced a program (dubbed LevelCheck) to automatically localize target vertebrae in mobile radiographs using robust 3D-2D image registration to preoperative CT. Methods Twenty consecutive patients undergoing thoracolumbar spine surgery, for whom both a preoperative CT scan and an intraoperative mobile radiograph were available, were retrospectively analyzed. A board-certified neuroradiologist determined the “true” vertebra levels in each radiograph. Registration of the preoperative CT to the intraoperative radiograph was calculated via LevelCheck, and projection distance errors were analyzed. Five hundred random initializations were performed for each patient, and algorithm settings (viz., the number of robust multi-starts, ranging from 50 to 200) were varied to evaluate the tradeoff between registration error and computation time. Failure mode analysis was performed by individually analyzing unsuccessful registrations (>5 mm distance error) observed with 50 multi-starts. Results At 200 robust multi-starts (computation time of ∼26 seconds), the registration accuracy was 100% across all 10,000 trials. As the number of multi-starts (and computation time) decreased, the registration remained fairly robust, down to 99.3% registration accuracy at 50 multi-starts (computation time

  16. Improvements in dark water, low light-level AOD retrievals in MISR operational algorithm

    NASA Astrophysics Data System (ADS)

    Witek, M. L.; Diner, D. J.; Garay, M. J.; Xu, F.

    2015-12-01

    Satellite remote sensing of aerosols is taking bold steps towards higher spatial resolutions, as evidenced by the newly released MODIS 3 km product and the soon to be released MISR 4.4 km product. Finer horizontal resolution allows for a better aerosol characterization in proximity to clouds—which is important for studying indirect aerosol effects—but also poses additional challenges due to various cloud artifact effects. It is therefore imperative to refine satellite algorithms to correctly interpret aerosol behavior in the proximity of clouds. For instance, MISR aerosol optical depth (AOD) retrievals frequently overestimate AODs in pristine oceanic areas, in particular close to Antarctica, as evidenced by comparison with Maritime Aerosol Network (MAN) observations. We trace the origin of this overestimation to stray light, or veiling light, being scattered more or less uniformly over the camera's field of view and reducing the contrast of the primary image. We found that the MISR-MODIS radiance difference in dark areas correlates with average scene brightness within the whole MISR camera field of view. A simple, single parameter model is proposed to effect the corrections. Collocated MISR/MODIS pixels are used to fit the parameter in the MISR nadir camera. For the off-nadir cameras two alternative approaches are employed that are based on MISR radiances and radiative transfer model calculations. These two methods are prone to higher uncertainties, but suggest somewhat increasing correction values for the longer focal length cameras. Finally, the empirical corrections applied in the operational MISR retrieval algorithm substantially decrease AODs in analyzed cases, and lead to closer agreement with MAN and MODIS, proving the efficacy of the developed procedure.

  17. On-line algorithm for ground-level ozone prediction with a mobile station

    NASA Astrophysics Data System (ADS)

    Kocijan, Juš; Gradišar, Dejan; Božnar, Marija Zlata; Grašič, Boštjan; Mlakar, Primož

    2016-04-01

    It is important to be able to predict high concentrations of tropospheric ozone and to inform the population about any violations of air-quality standards, as defined by international regulations. Although first-principle models that cover large geographical regions and different atmospheric layers are improving constantly, they typically still only cover geographical regions with a relatively low resolution. Such model predictions can be problematic for micro-locations in complex terrain, i.e., terrain with a large geographical diversity, or in urban terrain. For such micro-locations, statistical models can be utilised. This paper presents a modelling and prediction algorithm that can be used in, or in accordance with, a mobile air-quality measurement station. Such a mobile station would enable the set-up of a statistical model and relatively rapid access to the model's predictions for a specific geographical micro-location without a large quantity of historical measurements. Uncertainty information about the model's predictions is also usually required. In addition, such a model can adapt to long-term changes, such as climate change. In the paper we propose Gaussian-process models for the described modelling and prediction. In particular, we selected evolving Gaussian-process models that are updated on-line with the incoming measurement data. The proposed algorithm for the mobile air-quality measurement and forecasting station is evaluated on measurements from five locations in Slovenia with different topographical and geographical properties. The obtained evaluation results confirm the feasibility of the concept.
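
    A sketch of an evolving Gaussian-process predictor, under the assumption that "evolving" can be approximated by re-fitting on a sliding window of recent measurements, is shown below; the kernel, window length, and synthetic data are illustrative only.

      # Sliding-window Gaussian-process sketch: refit on the latest measurements as new
      # data arrive and report one-step-ahead predictions with uncertainty.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(9)
      n, window = 160, 60
      X = rng.uniform(0, 1, size=(n, 3))                       # e.g. meteorological predictors
      ozone = 60 + 40 * X[:, 0] - 20 * X[:, 1] + 5 * rng.standard_normal(n)

      kernel = 1.0 * RBF(length_scale=0.3) + WhiteKernel(noise_level=1.0)

      predictions, stds = [], []
      for t in range(window, n):
          gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
          gp.fit(X[t - window:t], ozone[t - window:t])          # refit on the latest window
          mu, sd = gp.predict(X[t:t + 1], return_std=True)      # one-step-ahead prediction
          predictions.append(mu[0])
          stds.append(sd[0])                                    # uncertainty information

      err = np.abs(np.array(predictions) - ozone[window:])
      print("mean absolute error: %.2f, mean predictive std: %.2f" % (err.mean(), np.mean(stds)))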

  18. Mathematical model and calculation algorithm of micro and meso levels of separation process of gaseous mixtures in molecular sieves

    SciTech Connect

    Umarova, Zhanat; Botayeva, Saule; Yegenova, Aliya; Usenova, Aisaule

    2015-05-15

    In this article, the main thermodynamic aspects of modeling diffusion transfer in molecular sieves are formulated. The dissipation function is used as the basic notion. A differential equation connecting the volume flow with the change in concentration of the catchable component has been derived. As a result, expressions for the change in concentration of the catchable component and for the membrane detection coefficient have been obtained. In addition, a systems approach to describing the process of gas separation in ultraporous membranes has been implemented, and the micro and meso levels of mathematical modeling have been distinguished. The non-ideality of the separated system is primarily taken into consideration at the micro level, and the departure from Fick's law of diffusion is taken into account. A method for calculating selectivity that considers the fractal structure of the membranes has been developed at the meso level. The calculation algorithm and its software implementation are presented.

  19. Mathematical model and calculation algorithm of micro and meso levels of separation process of gaseous mixtures in molecular sieves

    NASA Astrophysics Data System (ADS)

    Umarova, Zhanat; Botayeva, Saule; Yegenova, Aliya; Usenova, Aisaule

    2015-05-01

    In this article, the main thermodynamic aspects of modeling diffusion transfer in molecular sieves are formulated. The dissipation function is used as the basic notion. A differential equation connecting the volume flow with the change in concentration of the catchable component has been derived. As a result, expressions for the change in concentration of the catchable component and for the membrane detection coefficient have been obtained. In addition, a systems approach to describing the process of gas separation in ultraporous membranes has been implemented, and the micro and meso levels of mathematical modeling have been distinguished. The non-ideality of the separated system is primarily taken into consideration at the micro level, and the departure from Fick's law of diffusion is taken into account. A method for calculating selectivity that considers the fractal structure of the membranes has been developed at the meso level. The calculation algorithm and its software implementation are presented.

  20. The Level 2 research product algorithms for the Superconducting Submillimeter-Wave Limb-Emission Sounder (SMILES)

    NASA Astrophysics Data System (ADS)

    Baron, P.; Urban, J.; Sagawa, H.; Möller, J.; Murtagh, D. P.; Mendrok, J.; Dupuy, E.; Sato, T. O.; Ochiai, S.; Suzuki, K.; Manabe, T.; Nishibori, T.; Kikuchi, K.; Sato, R.; Takayanagi, M.; Murayama, Y.; Shiotani, M.; Kasai, Y.

    2011-06-01

    This paper describes the algorithms of the level-2 research (L2r) processing chain developed for the Superconducting Submillimeter-Wave Limb-Emission Sounder (SMILES). The chain has been developed in parallel with the operational chain to conduct research on calibration and retrieval algorithms. L2r chain products are available to the scientific community. The objective of version 2 is the retrieval of the vertical distribution of trace gases in the altitude range of 18-90 km. A theoretical error analysis is conducted to estimate the retrieval feasibility of key parameters of the processing: line-of-sight elevation tangent altitudes (or angles), temperature and O3 profiles. The line-of-sight tangent altitudes are retrieved between 20 and 50 km from the strong ozone (O3) line at 625.371 GHz, with low correlation with the retrieved O3 volume-mixing ratio and temperature profiles. Neglecting the non-linearity of the radiometric gain in the calibration procedure is the main systematic error; it is large for the retrieved temperature (between 5 and 10 K). Therefore, atmospheric pressure cannot be derived from the retrieved temperature, and, consequently, in the altitude range where the line-of-sight tangent altitudes are retrieved, the retrieved trace gas profiles are found to be better represented on pressure levels than on altitude levels. The error analysis for the retrieved HOCl profile demonstrates that the best results for inverting weak lines can be obtained by using narrow spectral windows. Future versions of the L2r algorithms will improve the temperature/pressure retrievals and also provide information in the upper tropospheric/lower stratospheric region (e.g., water vapor, ice content, O3) and on stratospheric and mesospheric line-of-sight winds.

  1. ISC-GEM: Global Instrumental Earthquake Catalogue (1900-2009), III. Re-computed MS and mb, proxy MW, final magnitude composition and completeness assessment

    NASA Astrophysics Data System (ADS)

    Di Giacomo, Domenico; Bondár, István; Storchak, Dmitry A.; Engdahl, E. Robert; Bormann, Peter; Harris, James

    2015-02-01

    This paper outlines the re-computation and compilation of the magnitudes now contained in the final ISC-GEM Reference Global Instrumental Earthquake Catalogue (1900-2009). The catalogue is available via the ISC website (http://www.isc.ac.uk/iscgem/). The available re-computed MS and mb provided an ideal basis for deriving new conversion relationships to moment magnitude MW. Therefore, rather than using previously published regression models, we derived new empirical relationships using both generalized orthogonal linear and exponential non-linear models to obtain MW proxies from MS and mb. The new models were tested against true values of MW, and the newly derived exponential models were then preferred to the linear ones in computing MW proxies. For the final magnitude composition of the ISC-GEM catalogue, we preferred directly measured MW values as published by the Global CMT project for the period 1976-2009 (plus intermediate-depth earthquakes between 1962 and 1975). In addition, over 1000 publications have been examined to obtain direct seismic moment M0 and, therefore, also MW estimates for 967 large earthquakes during 1900-1978 (Lee and Engdahl, 2015) by various alternative methods to the current GCMT procedure. In all other instances we computed MW proxy values by converting our re-computed MS and mb values into MW, using the newly derived non-linear regression models. The final magnitude composition is an improvement in terms of magnitude homogeneity compared to previous catalogues. The magnitude completeness is not homogeneous over the 110 years covered by the ISC-GEM catalogue. Therefore, seismicity rate estimates may be strongly affected without a careful time window selection. In particular, the ISC-GEM catalogue appears to be complete down to MW 5.6 starting from 1964, whereas for the early instrumental period the completeness varies from ∼7.5 to 6.2. Further time and resources would be necessary to homogenize the magnitude of completeness over the
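
    As an illustration of the conversion step, the sketch below fits a generic exponential model to synthetic MS-MW pairs with SciPy; the functional form, the synthetic data and the resulting coefficients are assumptions made for demonstration only and are not the published ISC-GEM regressions.

      # Sketch: fitting a non-linear (exponential) MS -> MW proxy relationship.
      import numpy as np
      from scipy.optimize import curve_fit

      def exp_model(ms, a, b, c):
          # Generic exponential form; not the published ISC-GEM model.
          return a * np.exp(b * ms) + c

      rng = np.random.default_rng(1)
      ms = rng.uniform(5.0, 8.5, size=300)              # synthetic MS values
      mw = 0.7 * np.exp(0.25 * ms) + 2.0 + rng.normal(scale=0.1, size=ms.size)

      params, cov = curve_fit(exp_model, ms, mw, p0=(1.0, 0.2, 2.0))
      mw_proxy = exp_model(ms, *params)                 # MW proxies from MS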

  2. Mathematical and system level HW description DSP algorithms modeling investigation in an experimental 100G optical coherent system

    NASA Astrophysics Data System (ADS)

    Ribeiro, Vitor B.; Silva, Flávio A.; Oliveira, Julio C. R. F.; Franz, Lucas V.; Schneider, Eduardo O.; Moretti, Cleber; Ranzini, Stenio M.

    2013-01-01

    Today's and next-generation optical coherent systems rely more and more on DSP algorithms to improve capacity, spectral efficiency and fiber impairment mitigation. The amount of signal processing is substantial, and because of that ASICs are preferred in order to comply with the cost, power consumption and size required by OIF 100G optical module standards. One important step in the ASIC development process is the validation of the DSP algorithms' mathematical models in a high-level language that considers HW characteristics and constraints. In this work we present, compare and evaluate on experimental data the mathematical model developed in Matlab and the SystemC model developed in C++. The DSP algorithm functionalities implemented were orthonormalization, CD equalizer, clock recovery, dynamic equalizer, frequency offset and phase estimation. The SystemC model considers clock signals, reset/enable structures, parallelization, finite fixed-point operations and structures that are closer to the ASIC HW implementation; due to these restrictions its performance is not as good as that of the mathematical model. The DSP algorithm models are evaluated in two 112 Gbit/s DP-QPSK experimental scenarios. In the first scenario the models are evaluated in back-to-back with ASE noise loading; in the second scenario the models are compared in a 226 km optical fiber recirculation loop, with 80x112 Gbit/s DP-QPSK channels (8.96 Tbit/s). In the back-to-back experiment the OSNR penalty from the mathematical model to the SystemC model is only 1.0 dB, and in the recirculation loop the maximum reach is 2,600 km and 2,200 km for the Matlab and SystemC models, respectively.

  3. Improving protein identification from peptide mass fingerprinting through a parameterized multi-level scoring algorithm and an optimized peak detection.

    PubMed

    Gras, R; Müller, M; Gasteiger, E; Gay, S; Binz, P A; Bienvenut, W; Hoogland, C; Sanchez, J C; Bairoch, A; Hochstrasser, D F; Appel, R D

    1999-12-01

    We have developed a new algorithm to identify proteins by means of peptide mass fingerprinting. Starting from the matrix-assisted laser desorption/ionization-time-of-flight (MALDI-TOF) spectra and environmental data such as species, isoelectric point and molecular weight, as well as chemical modifications or number of missed cleavages of a protein, the program performs a fully automated identification of the protein. The first step is a peak detection algorithm, which allows precise and fast determination of peptide masses, even if the peaks are of low intensity or they overlap. In the second step the masses and environmental data are used by the identification algorithm to search protein sequence databases (SWISS-PROT and/or TrEMBL) for protein entries that match the input data. Consequently, a list of candidate proteins is selected from the database, and a score calculation provides a ranking according to the quality of the match. To define the most discriminating scoring calculation we analyzed the respective role of each parameter in two directions. The first one is based on filtering and exploratory effects, while the second direction focuses on the levels where the parameters intervene in the identification process. Thus, according to our analysis, all input parameters contribute to the score, albeit with different weights. Since it is difficult to estimate the weights in advance, they have been computed with a genetic algorithm, using a training set of 91 protein spectra with their environmental data. We tested the resulting scoring calculation on a test set of ten proteins and compared the identification results with those of other peptide mass fingerprinting programs. PMID:10612280
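
    The core matching step can be illustrated with a toy sketch: observed peak masses are compared against the theoretical peptide masses of a candidate database entry within a mass tolerance, and a plain hit fraction stands in for the paper's parameterized multi-level score. The tolerance and all masses below are illustrative assumptions.

      # Sketch: naive peptide-mass-fingerprinting match score (hit fraction only).
      import numpy as np

      def match_score(observed_mz, theoretical_mz, tol_ppm=50.0):
          observed = np.asarray(observed_mz, dtype=float)
          hits = 0
          for m in theoretical_mz:
              tol = m * tol_ppm * 1e-6                 # absolute tolerance in Da
              if np.any(np.abs(observed - m) <= tol):
                  hits += 1
          return hits / len(theoretical_mz)            # fraction of peptides matched

      observed = [842.51, 1045.56, 1479.79, 2211.10]   # hypothetical peak list
      candidate = [842.50, 1045.55, 1300.70, 1479.80]  # hypothetical digest masses
      print(match_score(observed, candidate))          # 0.75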

  4. Iterative algorithm for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution

    NASA Astrophysics Data System (ADS)

    Quan, Haiyang; Wu, Fan; Hou, Xi

    2015-10-01

    A new method for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution is proposed. It is based on a basic iterative scheme and accelerates the Gauss-Seidel method by introducing an acceleration parameter. This modified Successive Over-Relaxation (SOR) method is effective for solving the rotationally asymmetric components with pixel-level spatial resolution, without the use of a fitting procedure. Compared to the Jacobi and Gauss-Seidel methods, the modified SOR method with an optimal relaxation factor converges much faster and saves computational cost and memory space without reducing accuracy. This has been confirmed by real experimental results.
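
    The flavour of the iteration can be sketched as follows, with a plain SOR sweep over a synthetic Poisson-type pixel system standing in for the paper's modified scheme; the relaxation factor, grid size and right-hand side are illustrative assumptions.

      # Sketch: Gauss-Seidel sweep with over-relaxation (SOR) on a pixel grid.
      import numpy as np

      def sor(rhs, omega=1.8, iters=200):
          phi = np.zeros_like(rhs)
          ny, nx = rhs.shape
          for _ in range(iters):
              for j in range(1, ny - 1):
                  for i in range(1, nx - 1):
                      gs = 0.25 * (phi[j + 1, i] + phi[j - 1, i] +
                                   phi[j, i + 1] + phi[j, i - 1] - rhs[j, i])
                      # Over-relaxed update: omega=1 recovers plain Gauss-Seidel.
                      phi[j, i] = (1.0 - omega) * phi[j, i] + omega * gs
          return phi

      rhs = np.random.default_rng(2).normal(size=(64, 64))   # synthetic system
      surface = sor(rhs)                                      # recovered map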

  5. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used on 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques, among which the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC- and AC-Matrix, or low- and high-frequency matrix, respectively; (2) apply a second-level DCT on the DC-Matrix to generate two arrays, namely the nonzero-array and the zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities by using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC-values are combined with the decoded AC-coefficients in one matrix, followed by an inverse two-level DCT with two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through the 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
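
    A rough sketch of the first transform stage is given below, assuming the PyWavelets and SciPy libraries; it covers only the two-level DWT and the DCT of the low-frequency sub-band, not the Minimize-Matrix-Size, arithmetic-coding or FMS decoding stages of the proposed method.

      # Sketch: two-level DWT followed by a DCT on the approximation (DC) matrix.
      import numpy as np
      import pywt
      from scipy.fft import dctn, idctn

      image = np.random.default_rng(3).random((256, 256))     # stand-in image

      coeffs = pywt.wavedec2(image, wavelet='db2', level=2)   # two-level DWT
      approx, details_l2, details_l1 = coeffs[0], coeffs[1], coeffs[2]

      dc_matrix = dctn(approx, norm='ortho')                  # DCT of the DC-Matrix
      approx_rec = idctn(dc_matrix, norm='ortho')             # inverse DCT on decode

      image_rec = pywt.waverec2([approx_rec, details_l2, details_l1], wavelet='db2')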

  6. The Level 2 research product algorithms for the Superconducting Submillimeter-Wave Limb-Emission Sounder (SMILES)

    NASA Astrophysics Data System (ADS)

    Baron, P.; Urban, J.; Sagawa, H.; Möller, J.; Murtagh, D. P.; Mendrok, J.; Dupuy, E.; Sato, T. O.; Ochiai, S.; Suzuki, K.; Manabe, T.; Nishibori, T.; Kikuchi, K.; Sato, R.; Takayanagi, M.; Murayama, Y.; Shiotani, M.; Kasai, Y.

    2011-10-01

    This paper describes the algorithms of the level-2 research (L2r) processing chain developed for the Superconducting Submillimeter-Wave Limb-Emission Sounder (SMILES). The chain has been developed in parallel with the operational chain to conduct research on calibration and retrieval algorithms. L2r chain products are available to the scientific community. The objective of version 2 is the retrieval of the vertical distribution of trace gases in the altitude range of 18-90 km. A theoretical error analysis is conducted to estimate the retrieval feasibility of key parameters of the processing: line-of-sight elevation tangent altitudes (or angles), temperature and ozone profiles. While pointing information is often retrieved from molecular oxygen lines, there is no oxygen line in the SMILES spectra, so the strong ozone line at 625.371 GHz has been chosen. The pointing parameters and the ozone profiles are retrieved from the line wings, which are measured with a high signal-to-noise ratio, whereas the temperature profile is retrieved from the optically thick line center. The main systematic component of the retrieval error was found to be the neglect of the non-linearity of the radiometric gain in the calibration procedure. This causes a temperature retrieval error of 5-10 K. Because of these large temperature errors, it is not possible to construct a reliable hydrostatic pressure profile. However, as a consequence of the retrieval of pointing parameters, pressure-induced errors are significantly reduced if the retrieved trace gas profiles are represented on pressure levels instead of geometric altitude levels. Further, various setups of trace gas retrievals have been tested. The error analysis for the retrieved HOCl profile demonstrates that the best results for inverting weak lines can be obtained by using narrow spectral windows.

  7. Application of Genetic Algorithm in the Modeling of Leaf Chlorophyll Level Based on Vis/Nir Reflection Spectroscopy

    NASA Astrophysics Data System (ADS)

    Yang, Haiqing; Yang, Haiqing; He, Yong

    In order to detect leaf chlorophyll level nondestructively and instantly, the VIS/NIR reflection spectroscopy technique was examined. In the test, 70 leaf samples were collected for model calibration and another 50 for model verification. Each leaf sample was optically measured by a USB4000 modular spectrometer. From observation of the spectral curves, the spectral range between 650 nm and 750 nm was found to be significant for mathematical modeling of the leaf chlorophyll level. A SPAD-502 meter was used for the chemometric measurement of the leaf chlorophyll value. In the test, it was found necessary to take leaf thickness into consideration. The procedure for shaping the prediction model is as follows: first, a leaf chlorophyll level prediction equation was created with uncertain parameters; second, a genetic algorithm was programmed in Visual Basic 6.0 for parameter optimization. As a result of the calculation, the optimal spectral range was narrowed to between 683.24 nm and 733.91 nm. Compared with R2 = 0.2309 for the calibration set and R2 = 0.5675 for the verification set, the effect of the optimization on the spectral modeling is significant: the R2 of the calibration set and the verification set improved to as high as 0.8658 and 0.9161, respectively. The test showed that it is practical to use a VIS/NIR reflection spectrometer for the quantitative determination of leaf chlorophyll level.
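
    The role of the genetic algorithm can be sketched as follows: candidate spectral windows are encoded as start/end band indices and evolved toward the window whose mean reflectance best correlates with the SPAD readings. The spectra, the SPAD values and the simple R-squared fitness are illustrative assumptions, not the paper's calibration model.

      # Sketch: GA search for the spectral window that best predicts SPAD values.
      import numpy as np

      rng = np.random.default_rng(4)
      wavelengths = np.linspace(650, 750, 200)
      spectra = rng.random((70, 200))                    # 70 calibration leaves
      spad = 30 + 20 * spectra[:, 60:110].mean(axis=1) + rng.normal(0, 1, 70)

      def fitness(start, end):
          if end - start < 5:
              return -np.inf                             # reject degenerate windows
          x = spectra[:, start:end].mean(axis=1)
          r = np.corrcoef(x, spad)[0, 1]
          return r * r                                   # R^2 of a 1-D calibration

      pop = np.sort(rng.integers(0, 200, size=(30, 2)), axis=1)
      for _ in range(50):                                # generations
          scores = np.array([fitness(a, b) for a, b in pop])
          parents = pop[np.argsort(scores)[-10:]]        # keep the 10 fittest
          picks = rng.integers(0, 10, size=(20, 2))
          children = parents[picks, [0, 1]]              # per-gene crossover
          children = np.clip(children + rng.integers(-5, 6, children.shape), 0, 199)
          pop = np.vstack([parents, np.sort(children, axis=1)])

      best_start, best_end = pop[np.argmax([fitness(a, b) for a, b in pop])]
      print(wavelengths[best_start], wavelengths[best_end - 1])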

  8. Document-level classification of CT pulmonary angiography reports based on an extension of the ConText algorithm.

    PubMed

    Chapman, Brian E; Lee, Sean; Kang, Hyunseok Peter; Chapman, Wendy W

    2011-10-01

    In this paper we describe an application called peFinder for document-level classification of CT pulmonary angiography reports. peFinder is based on a generalized version of the ConText algorithm, a simple text processing algorithm for identifying features in clinical report documents. peFinder was used to answer questions about the disease state (pulmonary emboli present or absent), the certainty state of the diagnosis (uncertainty present or absent), the temporal state of an identified pulmonary embolus (acute or chronic), and the technical quality state of the exam (diagnostic or not diagnostic). Gold standard answers for each question were determined from the consensus classifications of three human annotators. peFinder results were compared to naive Bayes' classifiers using unigrams and bigrams. The sensitivities (and positive predictive values) for peFinder were 0.98(0.83), 0.86(0.96), 0.94(0.93), and 0.60(0.90) for disease state, quality state, certainty state, and temporal state respectively, compared to 0.68(0.77), 0.67(0.87), 0.62(0.82), and 0.04(0.25) for the naive Bayes' classifier using unigrams, and 0.75(0.79), 0.52(0.69), 0.59(0.84), and 0.04(0.25) for the naive Bayes' classifier using bigrams. PMID:21459155
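
    The ConText idea that peFinder generalises can be illustrated with a minimal sketch: a finding term is flagged as absent or uncertain when a trigger phrase appears in the preceding context. The trigger lists, scope length and example sentences below are tiny illustrative assumptions, not peFinder's actual lexicons or scoping rules.

      # Sketch: ConText-style negation/uncertainty tagging for one finding term.
      NEGATION = {"no ", "without ", "negative for "}
      UNCERTAINTY = {"possible ", "cannot exclude ", "may represent "}

      def classify_finding(sentence, finding="pulmonary embol"):
          text = sentence.lower()
          pos = text.find(finding)
          if pos < 0:
              return "not mentioned"
          scope = text[max(0, pos - 40):pos]        # ~40 characters of prior context
          if any(trigger in scope for trigger in NEGATION):
              return "absent"
          if any(trigger in scope for trigger in UNCERTAINTY):
              return "uncertain"
          return "present"

      print(classify_finding("No evidence of pulmonary embolism."))              # absent
      print(classify_finding("Findings may represent acute pulmonary emboli."))  # uncertain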

  9. Low-level radio-frequency interference detection algorithm based on European Centre for Medium-Range Weather Forecasting for Soil Moisture and Ocean Salinity

    NASA Astrophysics Data System (ADS)

    Lu, Hailiang; Li, Qingxia; Li, Yan; Li, Yinan; Li, Hao

    2015-01-01

    At present, the Soil Moisture and Ocean Salinity (SMOS) mission is severely affected by radio frequency interference (RFI), and the detection of low-level RFI-contaminated brightness temperatures (BTs) is still a challenge for SMOS. A low-level RFI detection algorithm is proposed, based on the soil surface temperature products provided by the European Centre for Medium-Range Weather Forecasting. The algorithm is analyzed in terms of RFI-flagged snapshots, RFI-flagged probability, and localization accuracy. The performance of the algorithm is demonstrated on SMOS data. The results show that this algorithm can detect and flag more low-level RFI-contaminated BTs and shows better performance.

  10. A cascadic monotonic time-discretized algorithm for finite-level quantum control computation

    NASA Astrophysics Data System (ADS)

    Ditz, P.; Borzi`, A.

    2008-03-01

    A computer package (CNMS) is presented aimed at the solution of finite-level quantum optimal control problems. This package is based on a recently developed computational strategy known as monotonic schemes. Quantum optimal control problems arise in particular in quantum optics where the optimization of a control representing laser pulses is required. The purpose of the external control field is to channel the system's wavefunction between given states in its most efficient way. Physically motivated constraints, such as limited laser resources, are accommodated through appropriately chosen cost functionals. Program summary: Program title: CNMS. Catalogue identifier: ADEB_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADEB_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 770. No. of bytes in distributed program, including test data, etc.: 7098. Distribution format: tar.gz. Programming language: MATLAB 6. Computer: AMD Athlon 64 × 2 Dual, 2.21 GHz, 1.5 GB RAM. Operating system: Microsoft Windows XP. Word size: 32. Classification: 4.9. Nature of problem: Quantum control. Solution method: Iterative. Running time: 60-600 sec.

  11. Gray level co-occurrence and random forest algorithm-based gender determination with maxillary tooth plaster images.

    PubMed

    Akkoç, Betül; Arslan, Ahmet; Kök, Hatice

    2016-06-01

    Gender is one of the intrinsic properties of identity; knowing it narrows the search cluster and thereby enhances performance when a search is performed. Teeth have a durable and resistant structure, and as such are important sources of identification in disasters (accidents, fires, etc.). In this study, gender determination is accomplished from maxillary tooth plaster models of 40 people (20 males and 20 females). The images of the tooth plaster models are taken with a lighting mechanism set-up. A gray level co-occurrence matrix is formed from the segmented image, and pertinent features extracted from the matrix are classified via a Random Forest (RF) algorithm. Automatic gender determination achieves a 90% success rate, yielding a system applicable to determining gender from maxillary tooth plaster images. PMID:27104495
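
    A compact sketch of such a pipeline is shown below, assuming scikit-image (0.19+ function names) and scikit-learn; the images and labels are synthetic placeholders rather than the study's plaster-model photographs, and the feature set is a typical GLCM selection rather than the exact one used by the authors.

      # Sketch: GLCM texture features + Random Forest gender classification.
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.ensemble import RandomForestClassifier

      def glcm_features(image_u8):
          glcm = graycomatrix(image_u8, distances=[1], angles=[0, np.pi / 2],
                              levels=256, symmetric=True, normed=True)
          props = ("contrast", "homogeneity", "energy", "correlation")
          return np.hstack([graycoprops(glcm, p).ravel() for p in props])

      rng = np.random.default_rng(5)
      images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)  # 40 casts
      labels = np.repeat([0, 1], 20)                                    # 0 = F, 1 = M

      X = np.array([glcm_features(img) for img in images])
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)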

  12. Testing Nelder-Mead based repulsion algorithms for multiple roots of nonlinear systems via a two-level factorial design of experiments.

    PubMed

    Ramadas, Gisela C V; Rocha, Ana Maria A C; Fernandes, Edite M G P

    2015-01-01

    This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm. PMID:25875591

  13. Testing Nelder-Mead Based Repulsion Algorithms for Multiple Roots of Nonlinear Systems via a Two-Level Factorial Design of Experiments

    PubMed Central

    Fernandes, Edite M. G. P.

    2015-01-01

    This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as ‘erf’, is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm. PMID:25875591

  14. The differential algebra based multiple level fast multipole algorithm for 3D space charge field calculation and photoemission simulation

    SciTech Connect

    None, None

    2015-09-28

    Coulomb interaction between charged particles inside a bunch is one of the most important collective effects in beam dynamics, becoming even more significant as the energy of the particle beam is lowered to accommodate analytical and low-Z material imaging purposes such as in the time-resolved Ultrafast Electron Microscope (UEM) development currently underway at Michigan State University. In addition, space charge effects are the key limiting factor in the development of ultrafast atomic resolution electron imaging and diffraction technologies and are also correlated with an irreversible growth in rms beam emittance due to fluctuating components of the nonlinear electron dynamics. In the short pulse regime used in the UEM, space charge effects also lead to virtual cathode formation in which the negative charge of the electrons emitted at earlier times, combined with the attractive surface field, hinders further emission of particles and causes a degradation of the pulse properties. Space charge and virtual cathode effects and their remediation are core issues for the development of the next generation of high-brightness UEMs. Since the analytical models are only applicable for special cases, numerical simulations, in addition to experiments, are usually necessary to accurately understand the space charge effect. In this paper we will introduce a grid-free differential algebra based multiple level fast multipole algorithm, which calculates the 3D space charge field for n charged particles in arbitrary distribution with an efficiency of O(n), and its implementation in a simulation code for space-charge-dominated photoemission processes.

  15. The differential algebra based multiple level fast multipole algorithm for 3D space charge field calculation and photoemission simulation

    DOE PAGESBeta

    None, None

    2015-09-28

    Coulomb interaction between charged particles inside a bunch is one of the most important collective effects in beam dynamics, becoming even more significant as the energy of the particle beam is lowered to accommodate analytical and low-Z material imaging purposes such as in the time-resolved Ultrafast Electron Microscope (UEM) development currently underway at Michigan State University. In addition, space charge effects are the key limiting factor in the development of ultrafast atomic resolution electron imaging and diffraction technologies and are also correlated with an irreversible growth in rms beam emittance due to fluctuating components of the nonlinear electron dynamics. In the short pulse regime used in the UEM, space charge effects also lead to virtual cathode formation in which the negative charge of the electrons emitted at earlier times, combined with the attractive surface field, hinders further emission of particles and causes a degradation of the pulse properties. Space charge and virtual cathode effects and their remediation are core issues for the development of the next generation of high-brightness UEMs. Since the analytical models are only applicable for special cases, numerical simulations, in addition to experiments, are usually necessary to accurately understand the space charge effect. In this paper we will introduce a grid-free differential algebra based multiple level fast multipole algorithm, which calculates the 3D space charge field for n charged particles in arbitrary distribution with an efficiency of O(n), and its implementation in a simulation code for space-charge-dominated photoemission processes.

  16. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  17. Gray level co-occurrence matrix algorithm as pattern recognition biosensor for oxidopamine-induced changes in lymphocyte chromatin architecture.

    PubMed

    Pantic, Igor; Dimitrijevic, Draga; Nesic, Dejan; Petrovic, Danica

    2016-10-01

    We demonstrate that a proapoptotic chemical agent, oxidopamine, induces dose-dependent changes in chromatin textural patterns which can be quantified using the Gray level co-occurrence matrix (GLCM) method. Peripheral blood (heparin-pretreated) samples were treated with oxidopamine (6-OHDA, 6-hydroxydopamine) to achieve effective concentrations of 100, 200 and 300 µM. The samples were smeared on microscope slides and fixed in methanol. The smears were stained using a modification of the Feulgen method for DNA visualization. For each stained smear, a sample of 30 lymphocyte chromatin structures was visualized and analyzed. In this way, textural parameters for a total of 120 nuclei micrographs were calculated. For each chromatin structure, five different GLCM features were calculated: angular second moment, GLCM entropy, inverse difference moment, GLCM correlation, and GLCM variance. Oxidopamine induced a rise in the values of GLCM entropy and variance, and a reduction in angular second moment, correlation, and inverse difference moment. The trends for the GLCM parameter changes were found to be highly significant (p<0.001). These results indicate that the GLCM mathematical algorithm might be successfully used in the detection and evaluation of discrete early apoptotic structural changes in Feulgen-stained chromatin of peripheral blood lymphocytes that are not detectable using conventional microscopy/cell biology techniques. PMID:27424557

  18. Assessment on the classification of landslide risk level using Genetic Algorithm of Operation Tree in central Taiwan

    NASA Astrophysics Data System (ADS)

    Wei, Chiang; Yeh, Hui-Chung; Chen, Yen-Chang

    2015-04-01

    This study assessed the classification of landslide areas by the Genetic Algorithm of Operation Tree (GAOT) in the Chen-Yu-Lan River upstream watershed of the National Taiwan University Experimental Forest (NTUEF) after Typhoon Morakot in 2009, using remote sensing and geological data. Landslides covering 624.5 ha, accounting for 1.9% of the total area, were delineated with thresholds on slope (22°) and area size (1 hectare); 48 landslide sites were located in the upstream Chen-Yu-Lan watershed using FORMOSAT-II satellite imagery, aerial photos and related GIS coverages. The five risk levels of these landslide areas were classified by area, elevation, slope order, aspect, erosion order and geological factor order using the Simplicity Method suggested in the Technical Regulations for Soil and Water Conservation of Taiwan. If all the landslide sites are considered, the classification accuracy using GAOT is 97.9%, superior to the K-means, Ward method, Shared Nearest Neighbor method, Maximum Likelihood Classifier and Bayesian Classifier; if 36 sites are used as training samples and the remaining 12 sites are tested, the accuracy still reaches 81.3%. More geological data, anthropogenic influences and hydrological factors may be necessary to clarify the landslide areas, and the results will benefit future remediation and management by the authorities.

  19. A Proposed Extension to the Soil Moisture and Ocean Salinity Level 2 Algorithm for Mixed Forest and Moderate Vegetation Pixels

    NASA Technical Reports Server (NTRS)

    Panciera, Rocco; Walker, Jeffrey P.; Kalma, Jetse; Kim, Edward

    2011-01-01

    The Soil Moisture and Ocean Salinity (SMOS) mission, launched in November 2009, provides global maps of soil moisture and ocean salinity by measuring the L-band (1.4 GHz) emission of the Earth's surface with a spatial resolution of 40-50 km. Uncertainty in the retrieval of soil moisture over large heterogeneous areas such as SMOS pixels is expected, due to the non-linearity of the relationship between soil moisture and the microwave emission. The current baseline soil moisture retrieval algorithm adopted by SMOS and implemented in the SMOS Level 2 (SMOS L2) processor partially accounts for the sub-pixel heterogeneity of the land surface, by modelling the individual contributions of different pixel fractions to the overall pixel emission. This retrieval approach is tested in this study using airborne L-band data over an area the size of a SMOS pixel characterised by a mix of Eucalypt forest and moderate vegetation types (grassland and crops), with the objective of assessing its ability to correct for the soil moisture retrieval error induced by the land surface heterogeneity. A preliminary analysis using a traditional uniform pixel retrieval approach shows that the sub-pixel heterogeneity of land cover type causes significant errors in soil moisture retrieval (7.7% v/v RMSE, 2% v/v bias) in pixels characterised by a significant amount of forest (40-60%). Although the retrieval approach adopted by SMOS partially reduces this error, it is affected by errors beyond the SMOS target accuracy, presenting in particular a strong dry bias when a fraction of the pixel is occupied by forest (4.1% v/v RMSE, -3.1% v/v bias). An extension to the SMOS approach is proposed that accounts for the heterogeneity of vegetation optical depth within the SMOS pixel. The proposed approach is shown to significantly reduce the error in retrieved soil moisture (2.8% v/v RMSE, -0.3% v/v bias) in pixels characterised by a critical amount of forest (40-60%), at the limited cost of only a crude estimate of the

  20. SeaWiFS technical report series. Volume 32: Level-3 SeaWiFS data products. Spatial and temporal binning algorithms

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Acker, James G. (Editor); Campbell, Janet W.; Blaisdell, John M.; Darzi, Michael

    1995-01-01

    The level-3 data products from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) are statistical data sets derived from level-2 data. Each data set will be based on a fixed global grid of equal-area bins that are approximately 9 x 9 sq km. Statistics available for each bin include the sum and sum of squares of the natural logarithm of derived level-2 geophysical variables where sums are accumulated over a binning period. Operationally, products with binning periods of 1 day, 8 days, 1 month, and 1 year will be produced and archived. From these accumulated values and for each bin, estimates of the mean, standard deviation, median, and mode may be derived for each geophysical variable. This report contains two major parts: the first (Section 2) is intended as a users' guide for level-3 SeaWiFS data products. It contains an overview of level-0 to level-3 data processing, a discussion of important statistical considerations when using level-3 data, and details of how to use the level-3 data. The second part (Section 3) presents a comparative statistical study of several binning algorithms based on CZCS and moored fluorometer data. The operational binning algorithms were selected based on the results of this study.
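
    The statistical side of the binning scheme can be sketched as follows: for each bin, the count, the sum and the sum of squares of the log-transformed variable are accumulated, and the log-space mean and standard deviation are then recovered from those sums. The data and the flat bin layout below are synthetic stand-ins for the 9 x 9 sq km global grid.

      # Sketch: recovering per-bin statistics from accumulated sums of ln(value).
      import numpy as np

      def bin_statistics(values, bin_index, n_bins):
          logs = np.log(values)
          count = np.bincount(bin_index, minlength=n_bins)
          s1 = np.bincount(bin_index, weights=logs, minlength=n_bins)
          s2 = np.bincount(bin_index, weights=logs ** 2, minlength=n_bins)
          with np.errstate(invalid="ignore", divide="ignore"):
              mean = s1 / count                      # mean of ln(value) per bin
              var = s2 / count - mean ** 2           # variance of ln(value) per bin
          return mean, np.sqrt(np.clip(var, 0.0, None))

      rng = np.random.default_rng(6)
      chlorophyll = rng.lognormal(mean=0.0, sigma=0.5, size=1000)   # level-2 values
      bins = rng.integers(0, 10, size=1000)                         # bin of each value
      mean_log, std_log = bin_statistics(chlorophyll, bins, 10)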

  1. Detecting the 11 March 2011 Tohoku tsunami arrival on sea-level records in the Pacific Ocean: application and performance of the Tsunami Early Detection Algorithm (TEDA)

    NASA Astrophysics Data System (ADS)

    Bressan, L.; Tinti, S.

    2012-05-01

    Real-time detection of a tsunami on instrumental sea-level records is an important task for a Tsunami Warning System (TWS), and under alert conditions for an ongoing tsunami it is often performed by visual inspection in operational warning centres. In this paper we stress the importance of automatic detection algorithms and apply the TEDA (Tsunami Early Detection Algorithm) to identify the arrivals of the 2011 Tohoku tsunami in a real-time virtual exercise. TEDA is designed to work at station level, that is, on sea-level data from a single station, and was calibrated on data from the Adak Island, Alaska, USA, tide-gauge station. Using the parameter configuration devised for the Adak station, TEDA has been applied to 123 coastal sea-level records from the coasts of the Pacific Ocean, which enabled us to evaluate the efficiency and sensitivity of the algorithm over a wide range of background conditions and signal-to-noise ratios. The result is that TEDA is able to quickly detect the majority of the tsunami signals and therefore proves to have the potential to be a valid tool in operational TWS practice.
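
    As a generic illustration of single-station detection (a simplified stand-in, not the actual TEDA criteria described in the paper), the sketch below flags an arrival when a short-term average of the detided signal exceeds a multiple of the long-term background level; the record, window lengths and threshold are synthetic assumptions.

      # Sketch: short-term/long-term average trigger on a 1 Hz sea-level record.
      import numpy as np

      def detect(signal, short=10, long_=600, ratio=4.0):
          amplitude = np.abs(signal - np.mean(signal))
          for t in range(long_, len(signal)):
              sta = amplitude[t - short:t].mean()      # short-term average
              lta = amplitude[t - long_:t].mean()      # long-term background
              if lta > 0 and sta / lta > ratio:
                  return t                             # index of first detection
          return None

      rng = np.random.default_rng(7)
      record = rng.normal(scale=0.02, size=3600)                         # noise (m)
      record[2000:2300] += 0.5 * np.sin(np.linspace(0, 3 * np.pi, 300))  # arrival
      print(detect(record))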

  2. GOME level 1-to-2 data processor version 3.0: a major upgrade of the GOME/ERS-2 total ozone retrieval algorithm.

    PubMed

    Spurr, Robert; Loyola, Diego; Thomas, Werner; Balzer, Wolfgang; Mikusch, Eberhard; Aberle, Bernd; Slijkhuis, Sander; Ruppert, Thomas; van Roozendael, Michel; Lambert, Jean-Christopher; Soebijanta, Trisnanto

    2005-11-20

    The global ozone monitoring experiment (GOME) was launched in April 1995, and the GOME data processor (GDP) retrieval algorithm has processed operational total ozone amounts since July 1995. GDP level 1-to-2 is based on the two-step differential optical absorption spectroscopy (DOAS) approach, involving slant column fitting followed by air mass factor (AMF) conversions to vertical column amounts. We present a major upgrade of this algorithm to version 3.0. GDP 3.0 was implemented in July 2002, and the 9-year GOME data record from July 1995 to December 2004 has been processed using this algorithm. The key component in GDP 3.0 is an iterative approach to AMF calculation, in which AMFs and corresponding vertical column densities are adjusted to reflect the true ozone distribution as represented by the fitted DOAS effective slant column. A neural network ensemble is used to optimize the fast and accurate parametrization of AMFs. We describe results of a recent validation exercise for the operational version of the total ozone algorithm; in particular, seasonal and meridian errors are reduced by a factor of 2. On a global basis, GDP 3.0 ozone total column results lie between -2% and +4% of ground-based values for moderate solar zenith angles lower than 70 degrees. A larger variability of about +5% and -8% is observed for higher solar zenith angles up to 90 degrees. PMID:16318193

  3. GAGA: A New Algorithm for Genomic Inference of Geographic Ancestry Reveals Fine Level Population Substructure in Europeans

    PubMed Central

    Lao, Oscar; Liu, Fan; Wollstein, Andreas; Kayser, Manfred

    2014-01-01

    Attempts to detect genetic population substructure in humans are troubled by the fact that the vast majority of the total amount of observed genetic variation is present within populations rather than between populations. Here we introduce a new algorithm for transforming a genetic distance matrix that reduces the within-population variation considerably. Extensive computer simulations revealed that the transformed matrix captured the genetic population differentiation better than the original one which was based on the T1 statistic. In an empirical genomic data set comprising 2,457 individuals from 23 different European subpopulations, the proportion of individuals that were determined as a genetic neighbour to another individual from the same sampling location increased from 25% with the original matrix to 52% with the transformed matrix. Similarly, the percentage of genetic variation explained between populations by means of Analysis of Molecular Variance (AMOVA) increased from 1.62% to 7.98%. Furthermore, the first two dimensions of a classical multidimensional scaling (MDS) using the transformed matrix explained 15% of the variance, compared to 0.7% obtained with the original matrix. Application of MDS with Mclust, SPA with Mclust, and GemTools algorithms to the same dataset also showed that the transformed matrix gave a better association of the genetic clusters with the sampling locations, and particularly so when it was used in the AMOVA framework with a genetic algorithm. Overall, the new matrix transformation introduced here substantially reduces the within population genetic differentiation, and can be broadly applied to methods such as AMOVA to enhance their sensitivity to reveal population substructure. We herewith provide a publically available (http://www.erasmusmc.nl/fmb/resources/GAGA) model-free method for improved genetic population substructure detection that can be applied to human as well as any other species data in future studies relevant to

  4. Intermediate-level computer-vision-processing algorithm development for the content-addressable-array parallel processor. Quarterly status report No. 3 for period ending 29 November 1986

    SciTech Connect

    Not Available

    1986-12-15

    During this quarter a set of seven benchmark problems was developed and analyzed for the IUA. These included the Hough Transform, Convex Hull, Voronoi Diagram, Minimal Spanning Tree, visibility of vertices in a projected 3-dimensional model, subgraph isomorphism, and the minimum-cost path between points in a weighted graph. These problems are commonly considered intermediate-level processing in many vision research groups. Parallel implementations of UMass intermediate-level processing algorithms, such as Boldt's line merging and Anandan's motion analysis, continued to be developed. A commercial processor, the TMS320C25, was chosen as the Intermediate Communications and Associative Processor (ICAP) processing element. The TMS320C25 has the advantage of being a five-million-instruction-per-second signal-processing unit with a fast multiplier and software support for fast floating-point operations. It also has a built-in 5 Mb/s serial port that will interface well with the intermediate-level communications network. Also being explored is a set of group-theoretic network topologies with respect to the communication needs of intermediate-level processing. This has required the analysis of the classes of communication needed in each of the algorithms implemented.

  5. An efficient, interface-preserving level set redistancing algorithm and its application to interfacial incompressible fluid flow

    SciTech Connect

    Sussman, M. (Dept. of Mathematics); Fatemi, E.

    1999-04-01

    In Sussman, Smereka, and Osher, a numerical scheme was presented for computing incompressible air-water flows using the level set method. Crucial to the above method was a new iteration method for maintaining the level set function as the signed distance from the zero level set. In this paper the authors implement a constraint along with higher order difference schemes in order to make the iteration method more accurate and efficient. Accuracy is measured in terms of the new computed signed distance function and the original level set function having the same zero level set. The authors apply the redistancing scheme to incompressible flows with noticeably better resolved results at reduced cost. They validate the results with experiment and theory. They show that the distance level set scheme with the added constraint competes well with available interface tracking schemes for basic advection of an interface. They perform basic accuracy checks and more stringent tests involving complicated interfacial structures. As with all level set schemes, the method is easy to implement.
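
    A bare-bones sketch of the redistancing iteration is given below: phi is evolved by d(phi)/d(tau) = sign(phi0) * (1 - |grad phi|) with a first-order Godunov upwind gradient, so that it relaxes toward the signed distance to the original zero level set. It deliberately omits the interface-preserving constraint and the higher-order differences introduced in the paper, and uses periodic boundaries for brevity.

      # Sketch: first-order level-set reinitialization to a signed distance function.
      import numpy as np

      def redistance(phi0, dx=1.0, dt=0.3, iters=200):
          phi = phi0.astype(float).copy()
          s = phi0 / np.sqrt(phi0 ** 2 + dx ** 2)           # smoothed sign(phi0)
          for _ in range(iters):
              dxm = (phi - np.roll(phi, 1, axis=1)) / dx    # backward x-difference
              dxp = (np.roll(phi, -1, axis=1) - phi) / dx   # forward x-difference
              dym = (phi - np.roll(phi, 1, axis=0)) / dx
              dyp = (np.roll(phi, -1, axis=0) - phi) / dx
              gp = np.sqrt(np.maximum(np.maximum(dxm, 0) ** 2, np.minimum(dxp, 0) ** 2) +
                           np.maximum(np.maximum(dym, 0) ** 2, np.minimum(dyp, 0) ** 2))
              gm = np.sqrt(np.maximum(np.minimum(dxm, 0) ** 2, np.maximum(dxp, 0) ** 2) +
                           np.maximum(np.minimum(dym, 0) ** 2, np.maximum(dyp, 0) ** 2))
              grad = np.where(phi0 > 0, gp, np.where(phi0 < 0, gm, 0.0))
              phi = phi - dt * s * (grad - 1.0)             # relax |grad phi| toward 1
          return phi

      # Example: a poorly scaled function whose zero contour is a circle of radius 10.
      y, x = np.mgrid[0:64, 0:64]
      phi_init = (x - 32.0) ** 2 + (y - 32.0) ** 2 - 100.0  # not a distance function
      phi_sdf = redistance(phi_init)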

  6. Therapeutic algorithms for the management of sexually transmitted diseases at the peripheral level in Côte d'Ivoire: assessment of efficacy and cost.

    PubMed Central

    La Ruche, G.; Lorougnon, F.; Digbeu, N.

    1995-01-01

    In the acquired immunodeficiency syndrome (AIDS) era, adequate management of sexually transmitted diseases (STDs) is a primary concern in Africa. Assessed in this study is the clinical efficacy and feasibility of WHO-recommended therapeutic algorithms for genital discharges and ulcers, diagnosed without laboratory tests, for use at the primary health care level. Drugs were sold on a cost-recovery basis and included intramuscular ceftriaxone and oral ciprofloxacin for single-dose therapy of gonorrhoea and chancroid. During April 1993 in 10 peripheral health care centres in Abidjan, Côte d'Ivoire, a total of 207 patients were followed up, including 89 cases of male urethritis, 92 cases of vaginal discharges and 26 cases of genital ulcers; clinical success, assessed 7 days after the onset of therapy, was, respectively, 92%, 87%, and 100%. Less than 10% of the 207 patients were referred to the next care level, an acceptable rate from a public health point of view. Medical adherence to the algorithms was excellent for urethral discharges and genital ulcers but poor for vaginal discharges, partly because of intentional therapeutic modifications, without detriment to success. For drugs, the average cost per cure was 1546 francs CFA (US$ 5.60) (maximum, 2980 francs CFA (US$ 10.70). Effective and affordable treatments for STDs are necessary for their realistic case management in Africa. PMID:7614662

  7. Semiautomated hybrid algorithm for estimation of three-dimensional liver surface in CT using dynamic cellular automata and level-sets

    PubMed Central

    Dakua, Sarada Prasad; Abinahed, Julien; Al-Ansari, Abdulla

    2015-01-01

    Liver segmentation remains a major challenge, largely due to its intense complexity with surrounding anatomical structures (stomach, kidney, and heart), high noise level and lack of contrast in pathological computed tomography (CT) data. We present an approach to reconstructing the liver surface in low contrast CT. The main contributions are: (1) a stochastic resonance-based methodology in discrete cosine transform domain is developed to enhance the contrast of pathological liver images, (2) a new formulation is proposed to prevent the object boundary, resulting from the cellular automata method, from leaking into the surrounding areas of similar intensity, and (3) a level-set method is suggested to generate intermediate segmentation contours from two segmented slices distantly located in a subject sequence. We have tested the algorithm on real datasets obtained from two sources, Hamad General Hospital and medical image computing and computer-assisted interventions grand challenge workshop. Various parameters in the algorithm, such as w, Δt, z, α, μ, α1, and α2, play imperative roles, thus their values are precisely selected. Both qualitative and quantitative evaluation performed on liver data show promising segmentation accuracy when compared with ground truth data reflecting the potential of the proposed method. PMID:26158101

  8. Adaptive re-tracking algorithm for retrieval of water level variations and wave heights from satellite altimetry data for middle-sized inland water bodies

    NASA Astrophysics Data System (ADS)

    Troitskaya, Yuliya; Lebedev, Sergey; Soustova, Irina; Rybushkina, Galina; Papko, Vladislav; Baidakov, Georgy; Panyutin, Andrey

    One of the recent applications of satellite altimetry, originally designed for measurements of the sea level [1], is associated with remote investigation of the water level of inland waters: lakes, rivers, reservoirs [2-7]. The altimetry data re-tracking algorithms developed for open ocean conditions (e.g. Ocean-1,2) [1] often cannot be used in these cases, since the radar return is significantly contaminated by reflection from the land. The problem of minimization of errors in the water level retrieval for inland waters from altimetry measurements can be resolved by re-tracking satellite altimetry data. Recently, special re-tracking algorithms have been actively developed for re-processing altimetry data in the coastal zone, where reflection from land strongly affects echo shapes; methods such as threshold re-tracking, beta re-tracking and improved threshold re-tracking were developed in [9-11]. The latest development in this field is the PISTACH product [12], in which retracking is based on the classification of typical forms of telemetric waveforms in coastal zones and inland water bodies. In this paper a novel method of regional adaptive re-tracking is considered, based on constructing a theoretical model describing the formation of telemetric waveforms by reflection from a piecewise-constant model surface corresponding to the geography of the region. It was proposed in [13, 14], where the algorithm for assessing water level in inland water bodies and in the coastal zone of the ocean with an error of about 10-15 cm was constructed. The algorithm includes four consecutive steps: - constructing a local piecewise model of a reflecting surface in the neighbourhood of the reservoir; - solving a direct problem by calculating the reflected waveforms within the framework of the model; - imposing restrictions and validity criteria for the algorithm based on waveform modelling; - solving the inverse problem by retrieving a tracking point

  9. Semioptimal practicable algorithmic cooling

    SciTech Connect

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-15

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  10. Optimization of water-level monitoring networks in the eastern Snake River Plain aquifer using a kriging-based genetic algorithm method

    USGS Publications Warehouse

    Fisher, Jason C.

    2013-01-01

    Long-term groundwater monitoring networks can provide essential information for the planning and management of water resources. Budget constraints in water resource management agencies often mean a reduction in the number of observation wells included in a monitoring network. A network design tool, distributed as an R package, was developed to determine which wells to exclude from a monitoring network because they add little or no beneficial information. A kriging-based genetic algorithm method was used to optimize the monitoring network. The algorithm was used to find the set of wells whose removal leads to the smallest increase in the weighted sum of the (1) mean standard error at all nodes in the kriging grid where the water table is estimated, (2) root-mean-squared-error between the measured and estimated water-level elevation at the removed sites, (3) mean standard deviation of measurements across time at the removed sites, and (4) mean measurement error of wells in the reduced network. The solution to the optimization problem (the best wells to retain in the monitoring network) depends on the total number of wells removed; this number is a management decision. The network design tool was applied to optimize two observation well networks monitoring the water table of the eastern Snake River Plain aquifer, Idaho; these networks include the 2008 Federal-State Cooperative water-level monitoring network (Co-op network) with 166 observation wells, and the 2008 U.S. Geological Survey-Idaho National Laboratory water-level monitoring network (USGS-INL network) with 171 wells. Each water-level monitoring network was optimized five times: by removing (1) 10, (2) 20, (3) 40, (4) 60, and (5) 80 observation wells from the original network. An examination of the trade-offs associated with changes in the number of wells to remove indicates that 20 wells can be removed from the Co-op network with a relatively small degradation of the estimated water table map, and 40 wells

  11. Is increasing complexity of algorithms the price for higher accuracy? virtual comparison of three algorithms for tertiary level management of chronic cough in people living with HIV in a low-income country

    PubMed Central

    2012-01-01

    Background: The algorithmic approach to guidelines has been introduced and promoted on a large scale since the 1970s. This study aims at comparing the performance of three algorithms for the management of chronic cough in patients with HIV infection, and at reassessing the current position of algorithmic guidelines in clinical decision making through an analysis of accuracy, harm and complexity. Methods: Data were collected at the University Hospital of Kigali (CHUK) in a total of 201 HIV-positive hospitalised patients with chronic cough. We simulated management of each patient following the three algorithms. The first was locally tailored by clinicians from CHUK, the second and third were drawn from publications by Médecins sans Frontières (MSF) and the World Health Organisation (WHO). Semantic analysis techniques known as Clinical Algorithm Nosology were used to compare them in terms of complexity and similarity. For each of them, we assessed the sensitivity, delay to diagnosis and hypothetical harm of false positives and false negatives. Results: The principal diagnoses were tuberculosis (21%) and pneumocystosis (19%). Sensitivity, representing the proportion of correct diagnoses made by each algorithm, was 95.7%, 88% and 70% for CHUK, MSF and WHO, respectively. Mean time to appropriate management was 1.86 days for CHUK and 3.46 for the MSF algorithm. The CHUK algorithm was the most complex, followed by MSF and WHO. Total harm was by far the highest for the WHO algorithm, followed by MSF and CHUK. Conclusions: This study confirms our hypothesis that sensitivity and patient safety (i.e. less expected harm) are proportional to the complexity of algorithms, though increased complexity may make them difficult to use in practice. PMID:22260242

  12. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  13. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    SciTech Connect

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi

    2010-05-15

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as "gold standard". Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time
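
    A stand-in sketch using scikit-image's morphological geodesic active contour gives the flavour of the refinement stage; the input slice, the disk initialisation and every parameter value below are illustrative assumptions (and the scikit-image routine is a morphological approximation, not the authors' fast-marching/level-set implementation). Parameter names assume scikit-image 0.19 or later.

      # Sketch: geodesic-active-contour style segmentation of a single CT slice.
      import numpy as np
      from skimage.segmentation import (morphological_geodesic_active_contour,
                                        inverse_gaussian_gradient, disk_level_set)

      slice_img = np.random.default_rng(8).random((128, 128))   # stand-in CT slice
      speed = inverse_gaussian_gradient(slice_img, alpha=100.0, sigma=2.0)
      init = disk_level_set(slice_img.shape, center=(64, 64), radius=30)

      mask = morphological_geodesic_active_contour(
          speed, num_iter=100, init_level_set=init, smoothing=2, balloon=1)
      area_voxels = mask.sum()     # per-slice area; summed over slices for a volume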

  14. Comparison of Monofractal, Multifractal and gray level Co-occurrence matrix algorithms in analysis of Breast tumor microscopic images for prognosis of distant metastasis risk.

    PubMed

    Rajković, Nemanja; Kolarević, Daniela; Kanjer, Ksenija; Milošević, Nebojša T; Nikolić-Vukosavljević, Dragica; Radulovic, Marko

    2016-10-01

    Breast cancer prognosis is a subject undergoing intense study due to its high clinical relevance for effective therapeutic management and a great patient interest in disease progression. Prognostic value of fractal and gray level co-occurrence matrix texture analysis algorithms has been previously established on tumour histology images, but without any direct performance comparison. Therefore, this study was designed to compare the prognostic power of the monofractal, multifractal and co-occurrence algorithms on the same set of images. The investigation was retrospective, with 51 patients selected on account of non-metastatic IBC diagnosis, stage IIIB. Image analysis was performed on digital images of primary tumour tissue sections stained with haematoxylin/eosin. Bootstrap-corrected Cox proportional hazards regression P-values indicated a significant association with metastasis outcome of at least one of the features within each group. AUC values were far better for co-occurrence (0.66-0.77) than for fractal features (0.60-0.64). Correction by the split-sample cross-validation likewise indicated the generalizability only for the co-occurrence features, with their classification accuracies ranging between 67 and 72%, while accuracies of monofractal and multifractal features were reduced to a nearly random 52-55%. These findings indicate for the first time that the prognostic value of texture analysis of tumour histology is less dependent on the morphological complexity of the image as measured by fractal analysis, but predominantly on the spatial distribution of the gray pixel intensities as calculated by the co-occurrence features. PMID:27549346
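
    The gray level co-occurrence features compared in this record can be computed from a grayscale image with standard tooling. The sketch below uses the recent scikit-image API (older releases spell it greycomatrix); the distances, angles, and quantization level are illustrative assumptions, not the study's settings.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older releases

    def glcm_features(gray_img, levels=32):
        # Quantize to a small number of gray levels to keep the co-occurrence matrix dense.
        q = np.floor(gray_img.astype(float) / gray_img.max() * (levels - 1)).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1, 2], angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                            levels=levels, symmetric=True, normed=True)
        # Average each classic Haralick-style descriptor over distances and angles.
        return {prop: float(graycoprops(glcm, prop).mean())
                for prop in ("contrast", "homogeneity", "energy", "correlation")}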

  15. Impacts of 21st century sea-level rise on a Danish major city - an assessment based on fine-resolution digital topography and a new flooding algorithm

    NASA Astrophysics Data System (ADS)

    Erenskjold Moeslund, Jesper; Klith Bøcher, Peder; Svenning, Jens-Christian; Mølhave, Thomas; Arge, Lars

    2009-11-01

    This study examines the potential impact of 21st century sea-level rise on Aarhus, the second largest city in Denmark, emphasizing the economic risk to the city's real estate. Furthermore, it assesses which adaptation measures can be taken to prevent flooding in areas particularly at risk from flooding. We combine a new national Digital Elevation Model in very fine resolution (~2 meter), a new highly computationally efficient flooding algorithm that accurately models the influence of barriers, and geospatial data on real-estate values to assess the economic real-estate risk posed by future sea-level rise to Aarhus. Under the A2 and A1FI (IPCC) climate scenarios we show that relatively large residential areas in the northern part of the city as well as areas around the river running through the city are likely to become flooded in the event of extreme, but realistic weather events. In addition, most of the large Aarhus harbour would also risk flooding. As much of the area at risk represents high-value real estate, it seems clear that proactive measures other than simple abandonment should be taken in order to avoid heavy economic losses. Among the different possibilities for dealing with an increased sea level, the strategic placement of flood-gates at key potential water-inflow routes and the construction or elevation of existing dikes seem to be the most convenient, most socially acceptable, and maybe also the cheapest solution. Finally, we suggest that high-detail flooding models similar to those produced in this study will become an important tool for a climate-change-integrated planning of future city development as well as for the development of evacuation plans.
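
    The key property of the flooding algorithm described here is that a cell floods only if it lies below the water level and is hydrologically connected to the sea, so barriers such as dikes are respected. A minimal sketch of that idea on a gridded DEM is given below (plain NumPy breadth-first search; it illustrates the principle rather than the authors' efficient implementation, and the argument names are placeholders).

    from collections import deque
    import numpy as np

    def flood_mask(dem, water_level, sea_mask):
        """Cells below water_level that are 4-connected to the sea; NaN cells never flood."""
        below = dem < water_level
        flooded = np.zeros(dem.shape, dtype=bool)
        queue = deque(zip(*np.nonzero(sea_mask & below)))   # start from sea cells below the level
        for r, c in queue:
            flooded[r, c] = True
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < dem.shape[0] and 0 <= nc < dem.shape[1]
                        and below[nr, nc] and not flooded[nr, nc]):
                    flooded[nr, nc] = True          # water spreads only through connected low cells
                    queue.append((nr, nc))
        return flooded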

  16. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.

  17. An Effective Hybrid Self-Adapting Differential Evolution Algorithm for the Joint Replenishment and Location-Inventory Problem in a Three-Level Supply Chain

    PubMed Central

    Chen, Tao; Yan, Fang-Ping

    2013-01-01

    The integration of different decisions in the supply chain is a trend, since it can avoid suboptimal decisions. In this paper, we provide an effective intelligent algorithm for a modified joint replenishment and location-inventory problem (JR-LIP). The problem of the JR-LIP is to determine the reasonable number and location of distribution centers (DCs), the assignment policy of customers, and the replenishment policy of DCs such that the overall cost is minimized. However, due to the JR-LIP's difficult mathematical properties, simple and effective solutions for this NP-hard problem have eluded researchers. To find an effective approach for the JR-LIP, a hybrid self-adapting differential evolution algorithm (HSDE) is designed. To verify the effectiveness of the HSDE, two intelligent algorithms that have been proven effective for similar problems, namely the genetic algorithm (GA) and hybrid DE (HDE), are chosen for comparison. Comparative results of benchmark functions and randomly generated JR-LIPs show that HSDE outperforms GA and HDE. Moreover, a sensitivity analysis of cost parameters reveals useful managerial insights. All comparative results show that HSDE is more stable and robust in handling this complex problem, especially for large-scale problems. PMID:24453822
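
    The record does not give the HSDE update rules, so the following is only a generic DE/rand/1/bin sketch of the differential evolution core on which such hybrids are built; the objective, bounds, and control parameters (F, CR) are placeholders.

    import numpy as np

    def differential_evolution(obj, bounds, pop_size=30, F=0.8, CR=0.9, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
        cost = np.array([obj(x) for x in pop])
        for _ in range(iters):
            for i in range(pop_size):
                a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)        # differential mutation
                cross = rng.random(len(lo)) < CR                 # binomial crossover mask
                cross[rng.integers(len(lo))] = True              # keep at least one mutant gene
                trial = np.where(cross, mutant, pop[i])
                f = obj(trial)
                if f <= cost[i]:                                 # greedy one-to-one selection
                    pop[i], cost[i] = trial, f
        return pop[cost.argmin()], float(cost.min())

    # Example: minimize the sphere function in 5 dimensions.
    best_x, best_f = differential_evolution(lambda x: float(np.sum(x**2)), [(-5, 5)] * 5)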

  18. Land use zoning at the county level based on a multi-objective particle swarm optimization algorithm: a case study from Yicheng, China.

    PubMed

    Liu, Yaolin; Wang, Hua; Ji, Yingli; Liu, Zhongqiu; Zhao, Xiang

    2012-08-01

    Comprehensive land-use planning (CLUP) at the county level in China must include land-use zoning. This is specifically stipulated by the China Land Management Law and aims to achieve strict control on the usages of land. The land-use zoning problem is treated as a multi-objective optimization problem (MOOP) in this article, which is different from the traditional treatment. A particle swarm optimization (PSO) based model is applied to the problem and is developed to maximize the attribute differences between land-use zones, the spatial compactness, the degree of spatial harmony and the ecological benefits of the land-use zones. This is subject to some constraints such as: the quantity limitations for varying land-use zones, regulations assigning land units to a certain land-use zone, and the stipulation of a minimum parcel area in a land-use zoning map. In addition, a crossover and mutation operator from a genetic algorithm is adopted to avoid the prematurity of PSO. The results obtained for Yicheng, a county in central China, using different objective weighting schemes, are compared and suggest that: (1) the fundamental demand for attribute difference between land-use zones leads to a mass of fragmentary land-use zones; (2) the spatial pattern of land-use zones is remarkably optimized when a weight is given to the sub-objectives of spatial compactness and the degree of spatial harmony, simultaneously, with a reduction of attribute difference between land-use zones; (3) when a weight is given to the sub-objective of ecological benefits of the land-use zones, the ecological benefits get a slight increase also at the expense of a reduction in attribute difference between land-use zones; (4) the pursuit of spatial harmony or spatial compactness may have a negative effect on each other; (5) an increase in the ecological benefits may improve the spatial compactness and spatial harmony of the land-use zones; (6) adjusting the weights assigned to each sub-objective can
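
    The zoning objectives and constraints above are problem-specific, but the particle swarm core they plug into follows the standard velocity and position updates. A generic continuous PSO sketch is shown below; the inertia, cognitive, and social weights are illustrative, and the discrete zoning encoding and the GA-style crossover/mutation used in the paper are not reproduced.

    import numpy as np

    def pso(obj, bounds, n_particles=40, iters=150, w=0.7, c1=1.5, c2=1.5, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        x = rng.uniform(lo, hi, (n_particles, len(lo)))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([obj(p) for p in x])
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
            x = np.clip(x + v, lo, hi)                                   # position update
            f = np.array([obj(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, float(pbest_f.min())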

  19. Land Use Zoning at the County Level Based on a Multi-Objective Particle Swarm Optimization Algorithm: A Case Study from Yicheng, China

    PubMed Central

    Liu, Yaolin; Wang, Hua; Ji, Yingli; Liu, Zhongqiu; Zhao, Xiang

    2012-01-01

    Comprehensive land-use planning (CLUP) at the county level in China must include land-use zoning. This is specifically stipulated by the China Land Management Law and aims to achieve strict control on the usages of land. The land-use zoning problem is treated as a multi-objective optimization problem (MOOP) in this article, which is different from the traditional treatment. A particle swarm optimization (PSO) based model is applied to the problem and is developed to maximize the attribute differences between land-use zones, the spatial compactness, the degree of spatial harmony and the ecological benefits of the land-use zones. This is subject to some constraints such as: the quantity limitations for varying land-use zones, regulations assigning land units to a certain land-use zone, and the stipulation of a minimum parcel area in a land-use zoning map. In addition, a crossover and mutation operator from a genetic algorithm is adopted to avoid the prematurity of PSO. The results obtained for Yicheng, a county in central China, using different objective weighting schemes, are compared and suggest that: (1) the fundamental demand for attribute difference between land-use zones leads to a mass of fragmentary land-use zones; (2) the spatial pattern of land-use zones is remarkably optimized when a weight is given to the sub-objectives of spatial compactness and the degree of spatial harmony, simultaneously, with a reduction of attribute difference between land-use zones; (3) when a weight is given to the sub-objective of ecological benefits of the land-use zones, the ecological benefits get a slight increase also at the expense of a reduction in attribute difference between land-use zones; (4) the pursuit of spatial harmony or spatial compactness may have a negative effect on each other; (5) an increase in the ecological benefits may improve the spatial compactness and spatial harmony of the land-use zones; (6) adjusting the weights assigned to each sub-objective can

  20. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  1. NOSS Altimeter Detailed Algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Mcmillan, J. D.

    1982-01-01

    The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.

  2. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  3. Pre-Launch Algorithm and Data Format for the Level 1 Calibration Products for the EOS AM-1 Moderate Resolution Imaging Spectroradiometer (MODIS)

    NASA Technical Reports Server (NTRS)

    Guenther, Bruce W.; Godden, Gerald D.; Xiong, Xiao-Xiong; Knight, Edward J.; Qiu, Shi-Yue; Montgomery, Harry; Hopkins, M. M.; Khayat, Mohammad G.; Hao, Zhi-Dong; Smith, David E. (Technical Monitor)

    2000-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) radiometric calibration product is described for the thermal emissive and the reflective solar bands. Specific sensor design characteristics are identified to assist in understanding how the calibration algorithm software product is designed. Both the radiance and reflectance factor software products for the reflective solar bands are described. The product file format is summarized and the MODIS Characterization Support Team (MCST) Homepage location for the current file format is provided.

  4. Tactical Synthesis Of Efficient Global Search Algorithms

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2009-01-01

    Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus, that is suggesting possible ways to attack the problem.

  5. Programming parallel vision algorithms

    SciTech Connect

    Shapiro, L.G.

    1988-01-01

    Computer vision requires the processing of large volumes of data and requires parallel architectures and algorithms to be useful in real-time, industrial applications. The INSIGHT dataflow language was designed to allow encoding of vision algorithms at all levels of the computer vision paradigm. INSIGHT programs, which are relational in nature, can be translated into a graph structure that represents an architecture for solving a particular vision problem or a configuration of a reconfigurable computational network. The authors consider here INSIGHT programs that produce a parallel net architecture for solving low-, mid-, and high-level vision tasks.

  6. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  7. Computer algorithm for coding gain

    NASA Technical Reports Server (NTRS)

    Dodd, E. E.

    1974-01-01

    Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.

  8. Pulse wave velocity as marker of preclinical arterial disease: reference levels in a uruguayan population considering wave detection algorithms, path lengths, aging, and blood pressure.

    PubMed

    Farro, Ignacio; Bia, Daniel; Zócalo, Yanina; Torrado, Juan; Farro, Federico; Florio, Lucía; Olascoaga, Alicia; Alallón, Walter; Lluberas, Ricardo; Armentano, Ricardo L

    2012-01-01

    Carotid-femoral pulse wave velocity (PWV) has emerged as the gold standard for non-invasive evaluation of aortic stiffness; absence of standardized methodologies of study and lack of normal and reference values have limited a wider clinical implementation. This work was carried out in a Uruguayan (South American) population in order to characterize normal, reference, and threshold levels of PWV considering normal age-related changes in PWV and the prevailing blood pressure level during the study. A conservative approach was used, and we excluded symptomatic subjects; subjects with history of cardiovascular (CV) disease, diabetes mellitus or renal failure; subjects with traditional CV risk factors (other than age and gender); asymptomatic subjects with atherosclerotic plaques in carotid arteries; patients taking anti-hypertensives or lipid-lowering medications. The included subjects (n = 429) were categorized according to the age decade and the blood pressure levels (at study time). All subjects represented the "reference population"; the group of subjects with optimal/normal blood pressures levels at study time represented the "normal population." Results. Normal and reference PWV levels were obtained. Differences in PWV levels and aging-associated changes were obtained. The obtained data could be used to define vascular aging and abnormal or disease-related arterial changes. PMID:22666551

  9. Pulse Wave Velocity as Marker of Preclinical Arterial Disease: Reference Levels in a Uruguayan Population Considering Wave Detection Algorithms, Path Lengths, Aging, and Blood Pressure

    PubMed Central

    Farro, Ignacio; Bia, Daniel; Zócalo, Yanina; Torrado, Juan; Farro, Federico; Florio, Lucía; Olascoaga, Alicia; Alallón, Walter; Lluberas, Ricardo; Armentano, Ricardo L.

    2012-01-01

    Carotid-femoral pulse wave velocity (PWV) has emerged as the gold standard for non-invasive evaluation of aortic stiffness; absence of standardized methodologies of study and lack of normal and reference values have limited a wider clinical implementation. This work was carried out in a Uruguayan (South American) population in order to characterize normal, reference, and threshold levels of PWV considering normal age-related changes in PWV and the prevailing blood pressure level during the study. A conservative approach was used, and we excluded symptomatic subjects; subjects with history of cardiovascular (CV) disease, diabetes mellitus or renal failure; subjects with traditional CV risk factors (other than age and gender); asymptomatic subjects with atherosclerotic plaques in carotid arteries; patients taking anti-hypertensives or lipid-lowering medications. The included subjects (n = 429) were categorized according to the age decade and the blood pressure levels (at study time). All subjects represented the “reference population”; the group of subjects with optimal/normal blood pressures levels at study time represented the “normal population.” Results. Normal and reference PWV levels were obtained. Differences in PWV levels and aging-associated changes were obtained. The obtained data could be used to define vascular aging and abnormal or disease-related arterial changes. PMID:22666551

  10. Solar Occultation Retrieval Algorithm Development

    NASA Technical Reports Server (NTRS)

    Lumpe, Jerry D.

    2004-01-01

    This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms, including initial development of generalized forward model algorithms capable of simulating transmission data from the POAM II/III and SAGE II/III instruments. Work in the 2nd quarter will focus on: completion of forward model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual level 1 instrument data for specific occultation events.

  11. Forecasting Caspian Sea level changes using satellite altimetry data (June 1992-December 2013) based on evolutionary support vector regression algorithms and gene expression programming

    NASA Astrophysics Data System (ADS)

    Imani, Moslem; You, Rey-Jer; Kuo, Chung-Yen

    2014-10-01

    Sea level forecasting at various time intervals is of great importance in water supply management. Evolutionary artificial intelligence (AI) approaches have been accepted as an appropriate tool for modeling complex nonlinear phenomena in water bodies. In the study, we investigated the ability of two AI techniques, support vector machine (SVM), which is mathematically well-founded and provides new insights into function approximation, and gene expression programming (GEP), to forecast Caspian Sea level anomalies using satellite altimetry observations from June 1992 to December 2013. SVM demonstrates the best performance in predicting Caspian Sea level anomalies, given the minimum root mean square error (RMSE = 0.035) and maximum coefficient of determination (R2 = 0.96) during the prediction periods. A comparison between the proposed AI approaches and the cascade correlation neural network (CCNN) model also shows the superiority of the GEP and SVM models over the CCNN.
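
    A hedged sketch of the support-vector-regression forecasting setup described here: monthly sea-level anomalies are turned into lagged feature vectors and fit with scikit-learn's SVR. The lag length, kernel, hyperparameters, and recursive multi-step strategy are assumptions for illustration, not the paper's settings.

    import numpy as np
    from sklearn.svm import SVR

    def lagged(series, n_lags=12):
        # Build (previous n_lags values) -> (next value) training pairs.
        X = np.array([series[i - n_lags:i] for i in range(n_lags, len(series))])
        y = series[n_lags:]
        return X, y

    def fit_and_forecast(anomalies, n_lags=12, horizon=6):
        series = np.asarray(anomalies, dtype=float)
        X, y = lagged(series, n_lags)
        model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
        history = list(series[-n_lags:])
        forecasts = []
        for _ in range(horizon):                     # recursive multi-step forecast
            nxt = float(model.predict(np.array(history[-n_lags:]).reshape(1, -1))[0])
            forecasts.append(nxt)
            history.append(nxt)
        return forecasts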

  12. Characterizing stand-level forest canopy cover and height using Landsat time series, samples of airborne LiDAR, and the Random Forest algorithm

    NASA Astrophysics Data System (ADS)

    Ahmed, Oumer S.; Franklin, Steven E.; Wulder, Michael A.; White, Joanne C.

    2015-03-01

    Many forest management activities, including the development of forest inventories, require spatially detailed forest canopy cover and height data. Among the various remote sensing technologies, LiDAR (Light Detection and Ranging) offers the most accurate and consistent means for obtaining reliable canopy structure measurements. A potential solution to reduce the cost of LiDAR data, is to integrate transects (samples) of LiDAR data with frequently acquired and spatially comprehensive optical remotely sensed data. Although multiple regression is commonly used for such modeling, often it does not fully capture the complex relationships between forest structure variables. This study investigates the potential of Random Forest (RF), a machine learning technique, to estimate LiDAR measured canopy structure using a time series of Landsat imagery. The study is implemented over a 2600 ha area of industrially managed coastal temperate forests on Vancouver Island, British Columbia, Canada. We implemented a trajectory-based approach to time series analysis that generates time since disturbance (TSD) and disturbance intensity information for each pixel and we used this information to stratify the forest land base into two strata: mature forests and young forests. Canopy cover and height for three forest classes (i.e. mature, young and mature and young (combined)) were modeled separately using multiple regression and Random Forest (RF) techniques. For all forest classes, the RF models provided improved estimates relative to the multiple regression models. The lowest validation error was obtained for the mature forest strata in a RF model (R2 = 0.88, RMSE = 2.39 m and bias = -0.16 for canopy height; R2 = 0.72, RMSE = 0.068% and bias = -0.0049 for canopy cover). This study demonstrates the value of using disturbance and successional history to inform estimates of canopy structure and obtain improved estimates of forest canopy cover and height using the RF algorithm.
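
    A minimal sketch of the modeling step described in this record: LiDAR-derived canopy height over the sampled transects becomes the response, and per-pixel Landsat-derived predictors (plus time-since-disturbance) become the features for a Random Forest regressor that can then be applied wall-to-wall. Feature contents and hyperparameters are placeholders, not the study's.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score, mean_squared_error

    # X: rows = pixels within the LiDAR transects, columns = Landsat bands/indices + TSD
    # y: LiDAR canopy height (m) for the same pixels
    def fit_canopy_model(X, y, seed=0):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
        rf = RandomForestRegressor(n_estimators=500, random_state=seed, n_jobs=-1)
        rf.fit(X_tr, y_tr)
        pred = rf.predict(X_te)
        rmse = float(np.sqrt(mean_squared_error(y_te, pred)))
        return rf, r2_score(y_te, pred), rmse

    # rf.predict(...) can then be applied to pixels without LiDAR coverage.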

  13. Routing Algorithm Exploits Spatial Relations

    NASA Technical Reports Server (NTRS)

    Okino, Clayton; Jennings, Esther

    2004-01-01

    A recently developed routing algorithm for broadcasting in an ad hoc wireless communication network takes account of, and exploits, the spatial relationships among the locations of nodes, in addition to transmission power levels and distances between the nodes. In contrast, most prior algorithms for discovering routes through ad hoc networks rely heavily on transmission power levels and utilize limited graph-topology techniques that do not involve consideration of the aforesaid spatial relationships. The present algorithm extracts the relevant spatial-relationship information by use of a construct denoted the relative-neighborhood graph (RNG).
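
    The relative-neighborhood graph mentioned here has a simple definition: two nodes are linked only if no third node is closer to both of them than they are to each other. A brute-force sketch (O(n^3), adequate for small node sets; coordinates are illustrative) is:

    import math

    def rng_edges(points):
        """Relative-neighborhood graph edges for a list of (x, y) node positions."""
        d = lambda a, b: math.dist(a, b)
        edges = []
        n = len(points)
        for i in range(n):
            for j in range(i + 1, n):
                dij = d(points[i], points[j])
                # Edge (i, j) is kept unless some node k lies in the "lune" between i and j.
                blocked = any(max(d(points[i], points[k]), d(points[j], points[k])) < dij
                              for k in range(n) if k not in (i, j))
                if not blocked:
                    edges.append((i, j))
        return edges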

  14. In-Trail Procedure (ITP) Algorithm Design

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar A.; Siminiceanu, Radu I.

    2007-01-01

    The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.

  15. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting in the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  16. Developing dataflow algorithms

    SciTech Connect

    Hiromoto, R.E.; Bohm, A.P.W. (Dept. of Computer Science)

    1991-01-01

    Our goal is to study the performance of a collection of numerical algorithms written in Id which is available to users of Motorola's dataflow machine Monsoon. We will study the dataflow performance of these implementations first under the parallel profiling simulator Id World, and second in comparison with actual dataflow execution on the Motorola Monsoon. This approach will allow us to follow the computational and structural details of the parallel algorithms as implemented on dataflow systems. When running our programs on the Id World simulator we will examine the behaviour of algorithms at the dataflow graph level, where each instruction takes one timestep and data becomes available at the next. This implies that important machine level phenomena such as the effect that global communication time may have on the computation are not addressed. These phenomena will be addressed when we run our programs on the Monsoon hardware. Potential ramifications for compilation techniques, functional programming style, and program efficiency are significant to this study. In a later stage of our research we will compare the efficiency of Id programs to programs written in other languages. This comparison will be of a rather qualitative nature as there are too many degrees of freedom in a language implementation for a quantitative comparison to be of interest. We begin our study by examining one routine that exhibits distinct computational characteristics. This routine is the Fast Fourier Transform; its characteristics are computational parallelism and data dependences between the butterfly shuffles.

  17. Localization algorithm for acoustic emission

    NASA Astrophysics Data System (ADS)

    Salinas, V.; Vargas, Y.; Ruzzante, J.; Gaete, L.

    2010-01-01

    In this paper, an iterative algorithm for localization of acoustic emission (AE) sources is presented. The main advantage of the system is that it does not depend on the researcher's skill in setting the signal level used to trigger detection. The system was tested on cylindrical samples with an AE source at a known position; the precision of the source determination was about 2 mm, better than the precision obtained with classic localization algorithms (~1 cm).

  18. CAVITY CONTROL ALGORITHM

    SciTech Connect

    Tomasz Plawski, J. Hovater

    2010-09-01

    A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.

  19. Stability of Bareiss algorithm

    NASA Astrophysics Data System (ADS)

    Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.

    1991-12-01

    In this paper, we present a numerical stability analysis of Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare Bareiss algorithm with Levinson algorithm and conclude that the former has superior numerical properties.
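
    Both Bareiss and Levinson exploit the Toeplitz structure to solve T x = b in O(n^2) operations instead of O(n^3). A quick way to see that structure being used, and to sanity-check a fast Toeplitz solver against a dense solve, is sketched below; SciPy's solve_toeplitz is a Levinson-type solver, and the symmetric positive definite test matrix is an arbitrary example chosen here.

    import numpy as np
    from scipy.linalg import solve_toeplitz, toeplitz

    n = 200
    c = 0.5 ** np.arange(n)                      # first column of a symmetric positive definite Toeplitz matrix
    b = np.random.default_rng(0).standard_normal(n)

    x_fast = solve_toeplitz(c, b)                # O(n^2) solve from the first column only
    x_dense = np.linalg.solve(toeplitz(c), b)    # O(n^3) reference solve on the full matrix
    print(np.max(np.abs(x_fast - x_dense)))      # small difference: the two solves agree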

  20. An onboard star identification algorithm

    NASA Astrophysics Data System (ADS)

    Ha, Kong; Femiano, Michael

    The paper presents the autonomous Initial Stellar Acquisition (ISA) algorithm developed for the X-Ray Timing Explorer for providing the attitude quaternion within the desired accuracy, based on the one-axis attitude knowledge (through the use of the Digital Sun Sensor, CCD Star Trackers, and the onboard star catalog, OSC). Mathematical analysis leads to an accurate measure of the performance of the algorithm as a function of various parameters, such as the probability of a tracked star being in the OSC, the sensor noise level, and the number of stars matched. It is shown that the simplicity, tractability, and robustness of the ISA algorithm, compared to a general three-axis attitude determination algorithm, make it a viable on-board solution.

  1. An onboard star identification algorithm

    NASA Technical Reports Server (NTRS)

    Ha, Kong; Femiano, Michael

    1993-01-01

    The paper presents the autonomous Initial Stellar Acquisition (ISA) algorithm developed for the X-Ray Timing Explorer for providing the attitude quaternion within the desired accuracy, based on the one-axis attitude knowledge (through the use of the Digital Sun Sensor, CCD Star Trackers, and the onboard star catalog, OSC). Mathematical analysis leads to an accurate measure of the performance of the algorithm as a function of various parameters, such as the probability of a tracked star being in the OSC, the sensor noise level, and the number of stars matched. It is shown that the simplicity, tractability, and robustness of the ISA algorithm, compared to a general three-axis attitude determination algorithm, make it a viable on-board solution.

  2. The First Results of Testing Methods and Algorithms for Automatic Real Time Identification of Waveforms Introduction from Local Earthquakes in Increased Level of Man-induced Noises for the Purposes of Ultra-short-term Warning about an Occurred Earthquake

    NASA Astrophysics Data System (ADS)

    Gravirov, V. V.; Kislov, K. V.

    2009-12-01

    The chief hazard posed by earthquakes consists in their suddenness. The number of earthquakes annually recorded is in excess of 100,000; of these, over 1000 are strong ones. Great human losses usually occur because no devices exist for advance warning of earthquakes. It is therefore high time that mobile information automatic systems should be developed for analysis of seismic information at high levels of manmade noise. The systems should be operated in real time with the minimum possible computational delays and be able to make fast decisions. The chief statement of the project is that sufficiently complete information about an earthquake can be obtained in real time by examining its first onset as recorded by a single seismic sensor or a local seismic array. The essential difference from the existing systems consists in the following: analysis of local seismic data at high levels of manmade noise (that is, when the noise level may be above the seismic signal level), as well as self-contained operation. The algorithms developed during the execution of the project will be capable to be used with success for individual personal protection kits and for warning the population in earthquake-prone areas over the world. The system being developed for this project uses P and S waves as well. The difference in the velocities of these seismic waves permits a technique to be developed for identifying a damaging earthquake. Real time analysis of first onsets yields the time that remains before surface waves arrive and the damage potential of these waves. Estimates show that, when the difference between the earthquake epicenter and the monitored site is of order 200 km, the time difference between the arrivals of P waves and surface waves will be about 30 seconds, which is quite sufficient to evacuate people from potentially hazardous space, insertion of moderators at nuclear power stations, pipeline interlocking, transportation stoppage, warnings issued to rescue services
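
    The lead-time argument in this record can be made concrete with a small calculation: using typical textbook crustal velocities (the values below are assumed, not the project's), the gap between the P-wave arrival and the damaging surface waves at roughly 200 km epicentral distance comes out near the quoted ~30 seconds.

    def warning_time(distance_km, vp=6.5, vs=3.7, v_surface=3.1):
        """Seconds of warning after P-wave detection, for assumed wave speeds in km/s."""
        t_p = distance_km / vp
        t_s = distance_km / vs
        t_surf = distance_km / v_surface
        return {"s_minus_p": t_s - t_p, "surface_minus_p": t_surf - t_p}

    print(warning_time(200))   # roughly 23 s to S waves and ~34 s to surface waves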

  3. Distilling the Verification Process for Prognostics Algorithms

    NASA Technical Reports Server (NTRS)

    Roychoudhury, Indranil; Saxena, Abhinav; Celaya, Jose R.; Goebel, Kai

    2013-01-01

    The goal of prognostics and health management (PHM) systems is to ensure system safety, and reduce downtime and maintenance costs. It is important that a PHM system is verified and validated before it can be successfully deployed. Prognostics algorithms are integral parts of PHM systems. This paper investigates a systematic process of verification of such prognostics algorithms. To this end, first, this paper distinguishes between technology maturation and product development. Then, the paper describes the verification process for a prognostics algorithm as it moves up to higher maturity levels. This process is shown to be an iterative process where verification activities are interleaved with validation activities at each maturation level. In this work, we adopt the concept of technology readiness levels (TRLs) to represent the different maturity levels of a prognostics algorithm. It is shown that at each TRL, the verification of a prognostics algorithm depends on verifying the different components of the algorithm according to the requirements laid out by the PHM system that adopts this prognostics algorithm. Finally, using simplified examples, the systematic process for verifying a prognostics algorithm is demonstrated as the prognostics algorithm moves up TRLs.

  4. A Synthesized Heuristic Task Scheduling Algorithm

    PubMed Central

    Dai, Yanyan; Zhang, Xiangli

    2014-01-01

    Aiming at static task scheduling problems in heterogeneous environments, a heuristic task scheduling algorithm named HCPPEFT is proposed. In the task prioritizing phase, the algorithm uses three levels of priority to choose tasks: first, critical tasks have the highest priority; secondly, tasks with a longer path to the exit task are selected; and then the algorithm chooses tasks with fewer predecessors to schedule. In the resource selection phase, the algorithm uses task duplication to reduce the inter-resource communication cost; in addition, forecasting the impact of an assignment for all children of the current task permits better decisions to be made in selecting resources. The proposed algorithm is compared with the STDH, PEFT, and HEFT algorithms through randomly generated graphs and sets of task graphs. The experimental results show that the new algorithm can achieve better scheduling performance. PMID:25254244

  5. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.

  6. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  7. Evolutionary Algorithm for Optimal Vaccination Scheme

    NASA Astrophysics Data System (ADS)

    Parousis-Orthodoxou, K. J.; Vlachos, D. S.

    2014-03-01

    The following work uses the dynamic capabilities of an evolutionary algorithm in order to obtain an optimal immunization strategy in a user-specified network. The resulting algorithm uses a basic genetic algorithm with crossover and mutation techniques in order to locate certain nodes in the input network. These nodes are immunized in an SIR epidemic spreading process, and the performance of each immunization scheme is evaluated by the level of containment it provides for the spreading of the disease.
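
    The fitness evaluation described here, running an SIR spread with a candidate set of immunized nodes and measuring how well the outbreak is contained, can be sketched as follows. The random graph, transmission and recovery rates, and discrete-time update are stand-ins for the user-specified network and are not taken from the cited work.

    import random
    import networkx as nx

    def sir_final_size(G, immunized, beta=0.2, gamma=0.1, rng=random):
        """Fraction of nodes ever infected when `immunized` nodes are removed from play."""
        status = {n: "S" for n in G}
        for n in immunized:
            status[n] = "R"                      # vaccinated nodes start as 'recovered'
        seed_node = rng.choice([n for n in G if status[n] == "S"])
        status[seed_node] = "I"
        infected, ever_infected = {seed_node}, 1
        while infected:
            new_inf, recovered = set(), set()
            for i in infected:
                for nb in G.neighbors(i):
                    if status[nb] == "S" and rng.random() < beta:
                        status[nb] = "I"; new_inf.add(nb); ever_infected += 1
                if rng.random() < gamma:
                    status[i] = "R"; recovered.add(i)
            infected = (infected | new_inf) - recovered
        return ever_infected / G.number_of_nodes()

    G = nx.erdos_renyi_graph(500, 0.02, seed=1)
    print(sir_final_size(G, immunized=set(random.sample(list(G.nodes), 50))))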

  8. Understanding Algorithms in Different Presentations

    ERIC Educational Resources Information Center

    Csernoch, Mária; Biró, Piroska; Abari, Kálmán; Máth, János

    2015-01-01

    Within the framework of the Testing Algorithmic and Application Skills project we tested first year students of Informatics at the beginning of their tertiary education. We were focusing on the students' level of understanding in different programming environments. In the present paper we provide the results from the University of Debrecen, the…

  9. Online Planning Algorithm

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribes available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. Thereby enabling shorter response times and greater autonomy for the system under control.

  10. A parallel algorithm for global routing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall J.; Banerjee, Prithviraj

    1990-01-01

    A Parallel Hierarchical algorithm for Global Routing (PHIGURE) is presented. The router is based on the work of Burstein and Pelavin, but has many extensions for general global routing and parallel execution. Main features of the algorithm include structured hierarchical decomposition into separate independent tasks which are suitable for parallel execution and adaptive simplex solution for adding feedthroughs and adjusting channel heights for row-based layout. Alternative decomposition methods and the various levels of parallelism available in the algorithm are examined closely. The algorithm is described and results are presented for a shared-memory multiprocessor implementation.

  11. Algorithm and program for information processing with the filin apparatus

    NASA Technical Reports Server (NTRS)

    Gurin, L. S.; Morkrov, V. S.; Moskalenko, Y. I.; Tsoy, K. A.

    1979-01-01

    The reduction of spectral radiation data from space sources is described. The algorithm and program for identifying segments of information obtained from the Filin telescope-spectrometer on the Salyut-4 are presented. The information segments represent suspected X-ray sources. The proposed algorithm is an algorithm of the lowest level. Following evaluation, information free of uninformative segments is subject to further processing with algorithms of a higher level. The language used is FORTRAN 4.

  12. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  13. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tesselating interconnection of identical processing elements. This dissertation investigates the problem of providing correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  14. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll alpha concentration, (Chl alpha) and gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.

  15. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.

  16. POSE Algorithms for Automated Docking

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Howard, Richard T.

    2011-01-01

    POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.

  17. A Parallel Rendering Algorithm for MIMD Architectures

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.; Orloff, Tobias

    1991-01-01

    Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.

  18. Algorithms Could Automate Cancer Diagnosis

    NASA Technical Reports Server (NTRS)

    Baky, A. A.; Winkler, D. G.

    1982-01-01

    Five new algorithms are a complete statistical procedure for quantifying cell abnormalities from digitized images. Procedure could be basis for automated detection and diagnosis of cancer. Objective of procedure is to assign each cell an atypia status index (ASI), which quantifies level of abnormality. It is possible that ASI values will be accurate and economical enough to allow diagnoses to be made quickly and accurately by computer processing of laboratory specimens extracted from patients.

  19. Hybrid algorithm for NARX network parameters' determination using differential evolution and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Salami, M. J. E.; Tijani, I. B.; Abdullateef, A. I.; Aibinu, M. A.

    2013-12-01

    A hybrid optimization algorithm using Differential Evolution (DE) and Genetic Algorithm (GA) is proposed in this study to address the problem of network parameter determination associated with the Nonlinear Autoregressive with eXogenous inputs Network (NARX-network). The proposed algorithm involves a two-level optimization scheme to search for both optimal network architecture and weights. The DE at the upper level is formulated as combinatorial optimization to search for the network architecture, while the associated network weights that minimize the prediction error are provided by the GA at the lower level. The performance of the algorithm is evaluated on identification of a laboratory rotary motion system. The system identification results show the effectiveness of the proposed algorithm for nonparametric model development.

  20. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.; Subrahmanyam, P.A.

    1988-12-01

    The authors present a methodology for verifying correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify correctness of systolic algorithms, using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.

  1. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
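
    The basic single-channel veto algorithm that the paper's formalism analyzes can be stated in a few lines: propose the next emission scale from an overestimate of the true emission density, accept it with the ratio of true to overestimated rates, and otherwise continue evolving from the rejected point. A hedged sketch with a toy rate function standing in for a real splitting kernel:

    import math
    import random

    def veto_step(t_start, f, g_max, rng=random):
        """Next emission 'time' t > t_start for rate f(t), using a constant overestimate g_max >= f(t)."""
        t = t_start
        while True:
            t += -math.log(rng.random()) / g_max    # proposal drawn from the overestimate
            if rng.random() < f(t) / g_max:         # accept with the true/overestimate ratio
                return t                            # otherwise: veto and keep evolving from t

    # Example: with f = lambda t: 1.0 / (1.0 + t) and g_max = 1.0, the returned samples follow
    # f(t) * exp(-integral of f from 0 to t), i.e. the Sudakov-suppressed first-emission density.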

  2. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
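
    The first class of subalgorithms described above (shift each key right by a constant, mask out a bit field, and check whether the results are unique) is easy to mimic. The sketch below searches shift and mask-width combinations for a collision-free mapping of a static key set; it illustrates the idea only and is not NASA's synthesizer.

    def find_shift_mask(keys, max_shift=32, max_bits=16):
        """Return (shift, mask) so that (k >> shift) & mask is unique for every key, if one exists."""
        for bits in range(1, max_bits + 1):           # prefer the smallest (densest) mapping
            mask = (1 << bits) - 1
            for shift in range(max_shift + 1):
                mapped = {(k >> shift) & mask for k in keys}
                if len(mapped) == len(keys):          # collision-free: constant-time membership test
                    return shift, mask
        return None

    keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081, 0x92A3]
    print(find_shift_mask(keys))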

  3. Parallel scheduling algorithms

    SciTech Connect

    Dekel, E.; Sahni, S.

    1983-01-01

    Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.

  4. Developmental Algorithms Have Meaning!

    ERIC Educational Resources Information Center

    Green, John

    1997-01-01

    Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…

  5. Algorithm refinement for fluctuating hydrodynamics

    SciTech Connect

    Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.

    2007-07-03

    This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently-developed solver for LLNS, based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.

  6. Optimisation algorithms for microarray biclustering.

    PubMed

    Perrin, Dimitri; Duhamel, Christophe

    2013-01-01

    In providing simultaneous information on expression profiles for thousands of genes, microarray technologies have, in recent years, been largely used to investigate mechanisms of gene expression. Clustering and classification of such data can, indeed, highlight patterns and provide insight on biological processes. A common approach is to consider genes and samples of microarray datasets as nodes in a bipartite graph, where edges are weighted e.g. based on the expression levels. In this paper, using a previously-evaluated weighting scheme, we focus on search algorithms and evaluate, in the context of biclustering, several variations of Genetic Algorithms. We also introduce a new heuristic "Propagate", which consists in recursively evaluating neighbour solutions with one more or one fewer active condition. The results obtained on three well-known datasets show that, for a given weighting scheme, optimal or near-optimal solutions can be identified. PMID:24109756

  7. MUSIC algorithms for rebar detection

    NASA Astrophysics Data System (ADS)

    Solimene, Raffaele; Leone, Giovanni; Dell'Aversano, Angela

    2013-12-01

    The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting data is of a relatively high level. To overcome this drawback, here a new technique is proposed, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve drastically detection performance in realistic scenarios.

  8. Function-Based Algorithms for Biological Sequences

    ERIC Educational Resources Information Center

    Mohanty, Pragyan Sheela P.

    2015-01-01

    Two problems at two different abstraction levels of computational biology are studied. At the molecular level, efficient pattern matching algorithms in DNA sequences are presented. For gene order data, an efficient data structure is presented capable of storing all gene re-orderings in a systematic manner. A common characteristic of presented…

  9. Promoting Understanding of Linear Equations with the Median-Slope Algorithm

    ERIC Educational Resources Information Center

    Edwards, Michael Todd

    2005-01-01

    The preliminary findings resulting when an invented algorithm is used with entry-level students while introducing linear equations are described. As the calculations are accessible, the algorithm is preferable to more rigorous statistical procedures in entry-level classrooms.

  10. Adaptive mesh and algorithm refinement using direct simulation Monte Carlo

    SciTech Connect

    Garcia, A.L.; Bell, J.B.; Crutchfield, W.Y.; Alder, B.J.

    1999-09-01

    Adaptive mesh and algorithm refinement (AMAR) embeds a particle method within a continuum method at the finest level of an adaptive mesh refinement (AMR) hierarchy. The coupling between the particle region and the overlaying continuum grid is algorithmically equivalent to that between the fine and coarse levels of AMR. Direct simulation Monte Carlo (DSMC) is used as the particle algorithm embedded within a Godunov-type compressible Navier-Stokes solver. Several examples are presented and compared with purely continuum calculations.

  11. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    PubMed Central

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We improved the standard bat algorithm; our modifications add some elements from differential evolution and from the artificial bee colony algorithm. Our new proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving quality of results in all cases and significantly improving convergence speed. PMID:25165733

  12. Improved bat algorithm applied to multilevel image thresholding.

    PubMed

    Alihodzic, Adis; Tuba, Milan

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We improved the standard bat algorithm; our modifications add some elements from differential evolution and from the artificial bee colony algorithm. Our new proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving quality of results in all cases and significantly improving convergence speed. PMID:25165733
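
    The two records above describe the search strategy but not the fitness function; a common choice for multilevel thresholding is Otsu's between-class variance generalized to several thresholds. The sketch below shows that objective together with a naive random-search baseline standing in for a bat-algorithm candidate generator; the 256-bin histogram, the function names, and the random search are assumptions for illustration only.

    import numpy as np

    def between_class_variance(hist, thresholds):
        """Otsu-style objective for a set of thresholds over a 256-bin grayscale histogram."""
        p = hist / hist.sum()                          # normalized histogram
        levels = np.arange(len(hist))
        mu_total = (p * levels).sum()
        edges = [0] + sorted(thresholds) + [len(hist)]
        sigma_b = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            w = p[lo:hi].sum()                         # class probability
            if w > 0:
                mu = (p[lo:hi] * levels[lo:hi]).sum() / w
                sigma_b += w * (mu - mu_total) ** 2    # between-class variance contribution
        return sigma_b

    # Naive random-search baseline: a bat-algorithm candidate would be scored the same way.
    rng = np.random.default_rng(0)
    hist = rng.integers(1, 100, size=256).astype(float)
    best = max((tuple(sorted(rng.choice(255, size=3, replace=False) + 1)) for _ in range(200)),
               key=lambda t: between_class_variance(hist, list(t)))
    print("best thresholds:", best)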

  13. GOES-R Algorithm Working Group (AWG)

    NASA Astrophysics Data System (ADS)

    Daniels, Jaime; Goldberg, Mitch; Wolf, Walter; Zhou, Lihang; Lowe, Kenneth

    2009-08-01

    For the next generation of GOES-R instruments to meet stated performance requirements, state-of-the-art algorithms will be needed to convert raw instrument data to calibrated radiances and derived geophysical parameters (atmosphere, land, ocean, and space weather). The GOES-R Program Office (GPO) assigned the NOAA/NESDIS Center for Satellite Research and Applications (STAR) the responsibility for technical leadership and management of GOES-R algorithm development and calibration/validation. STAR responded with the creation of the GOES-R Algorithm Working Group (AWG) to manage and coordinate development and calibration/validation activities for GOES-R proxy data and geophysical product algorithms. The AWG consists of 15 application teams that bring expertise in product algorithms that span atmospheric, land, oceanic, and space weather disciplines. Each AWG team will develop new scientific Level-2 algorithms for GOES-R and will also leverage science developments from other communities (other government agencies, universities and industry), and heritage approaches from current operational GOES and POES product systems. All algorithms will be demonstrated and validated in a scalable operational demonstration environment. All software developed by the AWG will adhere to new standards established within NOAA/NESDIS. The AWG Algorithm Integration Team (AIT) has the responsibility for establishing the system framework, integrating the product software from each team into this framework, enforcing the established software development standards, and preparing system deliveries. The AWG will deliver an Algorithm Theoretical Basis Document (ATBD) for each GOES-R geophysical product as well as Delivered Algorithm Packages (DAPs) to the GPO.

  14. Firefly algorithm with chaos

    NASA Astrophysics Data System (ADS)

    Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.

    2013-01-01

    A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
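
    A minimal sketch of one way chaos can be injected into the firefly algorithm: a logistic map drives the randomization weight while the usual distance-dependent attractiveness moves dimmer fireflies toward brighter ones. The paper compares 12 different maps and attachment points; the logistic map, the parameter values, and the sphere-function demo below are illustrative assumptions only.

    import numpy as np

    def chaotic_firefly(objective, dim=2, n=20, iters=100, beta0=1.0, gamma=1.0, seed=0):
        """Firefly algorithm with a logistic chaotic map modulating the random step size alpha."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5, 5, size=(n, dim))
        f = np.apply_along_axis(objective, 1, x)
        c = 0.7                                        # chaotic state in (0, 1)
        for _ in range(iters):
            c = 4.0 * c * (1.0 - c)                    # logistic map in its chaotic regime
            alpha = 0.5 * c                            # chaos-driven randomization weight
            for i in range(n):
                for j in range(n):
                    if f[j] < f[i]:                    # firefly j is brighter (minimization)
                        r2 = np.sum((x[i] - x[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)   # distance-dependent attractiveness
                        x[i] = x[i] + beta * (x[j] - x[i]) + alpha * rng.normal(size=dim)
                        f[i] = objective(x[i])
        best = int(np.argmin(f))
        return x[best], f[best]

    # Example: minimize the sphere function; the minimum is at the origin.
    print(chaotic_firefly(lambda v: float(np.sum(v ** 2))))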

  15. Cyclic cooling algorithm

    SciTech Connect

    Rempp, Florian; Mahler, Guenter; Michel, Mathias

    2007-09-15

    We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, for an arbitrary number of times on the same set of qbits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qbit may repeatedly be cooled without adding additional qbits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.

  16. Parallel algorithms and architectures

    SciTech Connect

    Albrecht, A.; Jung, H.; Mehlhorn, K.

    1987-01-01

    Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; and RELACS - A recursive layout computing system. Parallel linear conflict-free subtree access.

  17. The Algorithm Selection Problem

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)

    1994-01-01

    Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.

  18. A Simple Calculator Algorithm.

    ERIC Educational Resources Information Center

    Cook, Lyle; McWilliam, James

    1983-01-01

    The problem of finding cube roots when limited to a calculator with only square root capability is discussed. An algorithm is demonstrated and explained which should always produce a good approximation within a few iterations. (MP)
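
    The record does not spell out the iteration, so the sketch below shows one well-known square-root-only scheme with the required behavior: iterating y <- sqrt(sqrt(n*y)) has fixed point y = n^(1/3) and shrinks the error by roughly a factor of 4 per step. Treat it as an illustration of the idea, not necessarily the article's algorithm.

    import math

    def cube_root_sqrt_only(n, iterations=20):
        """Approximate n**(1/3) for n > 0 using only multiplication and square roots.

        Iterating y <- sqrt(sqrt(n * y)) has the fixed point y**4 = n * y,
        i.e. y = n**(1/3); the error shrinks by roughly a factor of 4 per step.
        """
        y = 1.0
        for _ in range(iterations):
            y = math.sqrt(math.sqrt(n * y))
        return y

    print(cube_root_sqrt_only(27.0))   # ~3.0
    print(cube_root_sqrt_only(10.0))   # ~2.15443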

  19. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformation theory of cloud model to depict the qualitative concept: "bats approach their prey." Furthermore, Lévy flight mode and population information communication mechanism of bats are introduced to balance the advantage between exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on functions optimization. PMID:24967425

  20. Line Thinning Algorithm

    NASA Astrophysics Data System (ADS)

    Feigin, G.; Ben-Yosef, N.

    1983-10-01

    A thinning algorithm, of the banana-peel type, is presented. In each iteration pixels are attacked from all directions (there are no sub-iterations), and the deletion criteria depend on the 24 nearest neighbours.

  1. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  2. Algorithmically specialized parallel computers

    SciTech Connect

    Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.

    1985-01-01

    This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.

  3. Algorithmic synthesis using Python compiler

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej

    2015-09-01

    This paper presents a Python to VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible, and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. This can be achieved by using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time consuming. Using a higher level of abstraction and a High-Level Synthesis compiler, implementation time can be reduced. The compiler has been implemented using the Python language. This article describes the design, implementation, and results of the created tools.

  4. OpenEIS Algorithms

    2013-07-29

    The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.

  5. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  6. Birefringent filter design by use of a modified genetic algorithm.

    PubMed

    Wen, Mengtao; Yao, Jianping

    2006-06-10

    A modified genetic algorithm is proposed for the optimization of fiber birefringent filters. The orientation angles and the element lengths are determined by the genetic algorithm to minimize the sidelobe levels of the filters. Unlike the standard genetic algorithm, the proposed algorithm reduces the problem space of the birefringent filter design to achieve faster speed and better performance. The design of 4-, 8-, and 14-section birefringent filters with an improved sidelobe suppression ratio is realized. A 4-section birefringent filter designed with the algorithm is experimentally realized. PMID:16761031

  7. Updated treatment algorithm of pulmonary arterial hypertension.

    PubMed

    Galiè, Nazzareno; Corris, Paul A; Frost, Adaani; Girgis, Reda E; Granton, John; Jing, Zhi Cheng; Klepetko, Walter; McGoon, Michael D; McLaughlin, Vallerie V; Preston, Ioana R; Rubin, Lewis J; Sandoval, Julio; Seeger, Werner; Keogh, Anne

    2013-12-24

    The demands on a pulmonary arterial hypertension (PAH) treatment algorithm are multiple and in some ways conflicting. The treatment algorithm usually includes different types of recommendations with varying degrees of scientific evidence. In addition, the algorithm is required to be comprehensive but not too complex, informative yet simple and straightforward. The types of information in the treatment algorithm are heterogeneous, including clinical, hemodynamic, medical, interventional, pharmacological and regulatory recommendations. Stakeholders (or users) including physicians from various specialties and with variable expertise in PAH, nurses, patients and patients' associations, healthcare providers, regulatory agencies and industry are often interested in the PAH treatment algorithm for different reasons. These are the considerable challenges faced when proposing appropriate updates to the current evidence-based treatment algorithm. The current treatment algorithm may be divided into 3 main areas: 1) general measures, supportive therapy, referral strategy, acute vasoreactivity testing and chronic treatment with calcium channel blockers; 2) initial therapy with approved PAH drugs; and 3) clinical response to the initial therapy, combination therapy, balloon atrial septostomy, and lung transplantation. All three sections will be revisited highlighting information newly available in the past 5 years and proposing updates where appropriate. The European Society of Cardiology grades of recommendation and levels of evidence will be adopted to rank the proposed treatments. PMID:24355643

  8. Passive MMW algorithm performance characterization using MACET

    NASA Astrophysics Data System (ADS)

    Williams, Bradford D.; Watson, John S.; Amphay, Sengvieng A.

    1997-06-01

    As passive millimeter wave sensor technology matures, algorithms which are tailored to exploit the benefits of this technology are being developed. The expedient development of such algorithms requires an understanding of not only the gross phenomenology, but also specific quirks and limitations inherent in sensors and the data gathering methodology specific to this regime. This level of understanding is approached as the technology matures and increasing amounts of data become available for analysis. The Armament Directorate of Wright Laboratory, WL/MN, has spearheaded the advancement of passive millimeter-wave technology in algorithm development tools and modeling capability as well as sensor development. A passive MMW channel is available within WL/MN's popular multi-channel modeling program Irma, and a sample passive MMW algorithm is incorporated into the Modular Algorithm Concept Evaluation Tool, an algorithm development and evaluation system. The Millimeter Wave Analysis of Passive Signatures system provides excellent data collection capability in the 35, 60, and 95 GHz MMW bands. This paper exploits these assets for the study of the PMMW signature of a High Mobility Multi-Purpose Wheeled Vehicle in the three bands mentioned, and the effect of camouflage upon this signature and autonomous target recognition algorithm performance.

  9. Project resource reallocation algorithm

    NASA Technical Reports Server (NTRS)

    Myers, J. E.

    1981-01-01

    A methodology for adjusting baseline cost estimates according to project schedule changes is described. An algorithm which performs a linear expansion or contraction of the baseline project resource distribution in proportion to the project schedule expansion or contraction is presented. Input to the algorithm consists of the deck of cards (PACE input data) prepared for the baseline project schedule as well as a specification of the nature of the baseline schedule change. Output of the algorithm is a new deck of cards with all work breakdown structure block and element of cost estimates redistributed for the new project schedule. This new deck can be processed through PACE to produce a detailed cost estimate for the new schedule.

  10. Optical rate sensor algorithms

    NASA Technical Reports Server (NTRS)

    Uhde-Lacovara, Jo A.

    1989-01-01

    Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples. A VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal to noise ratio of 60 dB.

  11. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in the area where ice temperature is expected to vary considerably such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.

  12. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.

  13. Local multiplicative Schwarz algorithms for convection-diffusion equations

    NASA Technical Reports Server (NTRS)

    Cai, Xiao-Chuan; Sarkis, Marcus

    1995-01-01

    We develop a new class of overlapping Schwarz type algorithms for solving scalar convection-diffusion equations discretized by finite element or finite difference methods. The preconditioners consist of two components, namely, the usual two-level additive Schwarz preconditioner and the sum of some quadratic terms constructed by using products of ordered neighboring subdomain preconditioners. The ordering of the subdomain preconditioners is determined by considering the direction of the flow. We prove that the algorithms are optimal in the sense that the convergence rates are independent of the mesh size, as well as the number of subdomains. We show by numerical examples that the new algorithms are less sensitive to the direction of the flow than the classical multiplicative Schwarz algorithms, and converge faster than the additive Schwarz algorithms. Thus, the new algorithms are more suitable for fluid flow applications than the classical additive or multiplicative Schwarz algorithms.

  14. Machine Protection System algorithm compiler and simulator

    SciTech Connect

    White, G.R.; Sherwin, G.

    1993-04-01

    The Machine Protection System (MPS) component of the SLC's beam selection system, in which integrated current is continuously monitored and limited to safe levels through careful selection and feedback of the beam repetition rate, is described elsewhere in these proceedings. The novel decision making mechanism by which that system can evaluate "safe levels," and choose an appropriate repetition rate in real-time, is described here. The algorithm that this mechanism uses to make its decision is written in text files and expressed in states of the accelerator and its devices, one file per accelerator region. Before being used, a file is "compiled" to a binary format which can be easily processed as a forward-chaining decision tree. It is processed by distributed microcomputers local to the accelerator regions. A parent algorithm evaluates all results, and reports directly to the beam control microprocessor. Operators can test new algorithms, or changes they make to them, with an online graphical MPS simulator.

  15. The CDF LEVEL3 trigger

    SciTech Connect

    Carroll, T.; Joshi, U.; Auchincloss, P.

    1989-04-01

    CDF is currently taking data at a luminosity of 10^30 cm^-2 sec^-1 using a four level event filtering scheme. The fourth level, LEVEL3, uses ACP (Fermilab's Advanced Computer Program) designed 32 bit VME based parallel processors (1) capable of executing algorithms written in FORTRAN. LEVEL3 currently rejects about 50% of the events.

  16. New Effective Multithreaded Matching Algorithms

    SciTech Connect

    Manne, Fredrik; Halappanavar, Mahantesh

    2014-05-19

    Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of less quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
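
    For readers unfamiliar with the 1/2-approximation guarantee, the classic sequential baseline is shown below: scan edges in non-increasing weight order and keep an edge whenever both endpoints are still free. The paper's contribution is a locally greedy variant that parallelizes better; this sketch only illustrates the approximation idea and is not the authors' algorithm.

    def greedy_matching(edges):
        """Classic sequential 1/2-approximation for maximum weight matching:
        scan edges in non-increasing weight order and keep an edge whenever
        both endpoints are still unmatched."""
        matched = set()
        matching = []
        for u, v, w in sorted(edges, key=lambda e: -e[2]):
            if u not in matched and v not in matched:
                matching.append((u, v, w))
                matched.update((u, v))
        return matching

    edges = [("a", "b", 5.0), ("b", "c", 4.0), ("c", "d", 3.0), ("a", "d", 1.0)]
    print(greedy_matching(edges))   # [('a', 'b', 5.0), ('c', 'd', 3.0)], total weight 8.0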

  17. Algorithm for Detecting Significant Locations from Raw GPS Data

    NASA Astrophysics Data System (ADS)

    Kami, Nobuharu; Enomoto, Nobuyuki; Baba, Teruyuki; Yoshikawa, Takashi

    We present a fast algorithm for probabilistically extracting significant locations from raw GPS data based on data point density. Extracting significant locations from raw GPS data is the first essential step of algorithms designed for location-aware applications. Assuming that a location is significant if users spend a certain time around that area, most current algorithms compare spatial/temporal variables, such as stay duration and a roaming diameter, with given fixed thresholds to extract significant locations. However, the appropriate threshold values are not clearly known a priori and algorithms with fixed thresholds are inherently error-prone, especially under high noise levels. Moreover, for N data points, they are generally O(N^2) algorithms since distance computation is required. We developed a fast algorithm for selective data point sampling around significant locations based on density information by constructing random histograms using locality sensitive hashing. Evaluations show competitive performance in detecting significant locations even under high noise levels.
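
    A much-simplified stand-in for the density-based sampling idea: hash each GPS fix to a latitude/longitude grid cell, histogram the cell counts, and keep only points falling in dense cells. The paper instead builds random histograms with locality-sensitive hashing to avoid committing to one fixed grid, so the cell size, threshold, and function name below are purely illustrative assumptions.

    from collections import Counter

    def dense_points(points, cell=0.001, min_count=50):
        """Keep only GPS fixes that fall in grid cells containing at least min_count fixes."""
        key = lambda p: (int(p[0] / cell), int(p[1] / cell))
        counts = Counter(key(p) for p in points)
        return [p for p in points if counts[key(p)] >= min_count]

    # points = [(lat, lon), ...] from a raw trace; the survivors cluster around places
    # where the user actually spent time and can then be grouped into significant
    # locations without any O(N^2) pairwise distance computation.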

  18. Mapped Landmark Algorithm for Precision Landing

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew; Ansar, Adnan; Matthies, Larry

    2007-01-01

    A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel images. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching (see figure), which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
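
    The core of the FFT Map Matching step is correlation computed in the frequency domain. The sketch below shows plain (unnormalized) cross-correlation of a template against a map image via the convolution theorem; the flight software uses normalized correlation and sub-pixel peak refinement, which need additional running-sum terms not shown here, and the array sizes and names are assumptions.

    import numpy as np

    def fft_correlate(image, template):
        """Cross-correlate a template against an image via the convolution theorem
        and return the location of the correlation peak (circular boundary handling)."""
        padded = np.zeros_like(image, dtype=float)
        padded[:template.shape[0], :template.shape[1]] = template
        corr = np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(padded))))
        peak = np.unravel_index(int(np.argmax(corr)), corr.shape)
        return peak, corr

    img = np.random.default_rng(1).random((128, 128))      # stand-in for the surface map
    tpl = img[40:72, 70:102]                               # a known 32x32 patch of that map
    print(fft_correlate(img, tpl)[0])                      # peak at (or very near) (40, 70)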

  19. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform for a high level application (e.g., object recognition). An IVS normally involves algorithms from low level, intermediate level, and high level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  20. Predictive Caching Using the TDAG Algorithm

    NASA Technical Reports Server (NTRS)

    Laird, Philip; Saul, Ronald

    1992-01-01

    We describe how the TDAG algorithm for learning to predict symbol sequences can be used to design a predictive cache store. A model of a two-level mass storage system is developed and used to calculate the performance of the cache under various conditions. Experimental simulations provide good confirmation of the model.

  1. A Monotonically Convergent Algorithm for FACTALS.

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.; And Others

    1993-01-01

    A new procedure is proposed for handling nominal variables in the analysis of variables of mixed measurement levels, and a procedure is developed for handling ordinal variables. Using these procedures, a monotonically convergent algorithm is constructed for the FACTALS method for any mixture of variables. (SLD)

  2. Fractal Landscape Algorithms for Environmental Simulations

    NASA Astrophysics Data System (ADS)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise, Simplex noise, and diamond square algorithms, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from simulations include the geophysical impact of flash floods or drought on a particular region and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth), and simulate planetary landscapes. Hence, it can be used as a tool to assist science education. Algorithms used to generate these natural phenomena provide scientists a different approach in analyzing our world. The random algorithms used in terrain generation not only contribute to generating the terrains themselves, but are also capable of simulating weather patterns.
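
    A compact version of the diamond-square algorithm mentioned above, with corner seeding controlling the overall shape and a roughness factor controlling how quickly the random perturbations decay; parameter names and defaults are illustrative. Production terrain generators typically layer Perlin or Simplex noise fields on top of this heightmap for moisture and temperature.

    import numpy as np

    def diamond_square(n, roughness=0.5, seed=0):
        """Generate a (2**n + 1) x (2**n + 1) heightmap with the diamond-square algorithm."""
        size = 2 ** n + 1
        rng = np.random.default_rng(seed)
        h = np.zeros((size, size))
        h[0, 0], h[0, -1], h[-1, 0], h[-1, -1] = rng.uniform(-1, 1, 4)   # corner seeds
        step, scale = size - 1, 1.0
        while step > 1:
            half = step // 2
            # Diamond step: square centers get the corner average plus a random offset.
            for y in range(half, size, step):
                for x in range(half, size, step):
                    avg = (h[y - half, x - half] + h[y - half, x + half] +
                           h[y + half, x - half] + h[y + half, x + half]) / 4
                    h[y, x] = avg + rng.uniform(-scale, scale)
            # Square step: edge midpoints get the average of their in-bounds neighbours.
            for y in range(0, size, half):
                for x in range((y + half) % step, size, step):
                    nbrs = [h[y + dy, x + dx]
                            for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half))
                            if 0 <= y + dy < size and 0 <= x + dx < size]
                    h[y, x] = sum(nbrs) / len(nbrs) + rng.uniform(-scale, scale)
            step //= 2
            scale *= roughness                         # smaller perturbations at finer scales
        return h

    print(diamond_square(4).shape)   # (17, 17)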

  3. Sampling Within k-Means Algorithm to Cluster Large Datasets

    SciTech Connect

    Bejarano, Jeremy; Bose, Koushiki; Brannan, Tyler; Thomas, Anita; Adragni, Kofi; Neerchal, Nagaraj; Ostrouchov, George

    2011-08-01

    Due to current data collection technology, our ability to gather data has surpassed our ability to analyze it. In particular, k-means, one of the simplest and fastest clustering algorithms, is ill-equipped to handle extremely large datasets on even the most powerful machines. Our new algorithm uses a sample from a dataset to decrease runtime by reducing the amount of data analyzed. We perform a simulation study to compare our sampling based k-means to the standard k-means algorithm by analyzing both the speed and accuracy of the two methods. Results show that our algorithm is significantly more efficient than the existing algorithm with comparable accuracy. Further work on this project might include a more comprehensive study both on more varied test datasets as well as on real weather datasets. This is especially important considering that this preliminary study was performed on rather tame datasets. Also, these datasets should be used to analyze the performance of the algorithm for varied values of k. Lastly, this paper showed that the algorithm was accurate for relatively low sample sizes. We would like to analyze this further to see how accurate the algorithm is for even lower sample sizes. We could find the lowest sample sizes, by manipulating width and confidence level, for which the algorithm would be acceptably accurate. In order for our algorithm to be a success, it needs to meet two benchmarks: match the accuracy of the standard k-means algorithm and significantly reduce runtime. Both goals are accomplished for all six datasets analyzed. However, on datasets of three and four dimensions, as the data becomes more difficult to cluster, both algorithms fail to obtain the correct classifications on some trials. Nevertheless, our algorithm consistently matches the performance of the standard algorithm while becoming remarkably more efficient with time. Therefore, we conclude that analysts can use our algorithm, expecting accurate results in considerably less time.
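
    A sketch of the sampling idea described above: run Lloyd's k-means on a random sample to obtain centroids cheaply, then assign the full dataset to the nearest centroid. The paper additionally sizes the sample from a desired confidence level and half-width; the fixed sampling fraction, initialization, and demo data below are simplifying assumptions.

    import numpy as np

    def sampled_kmeans(X, k, sample_frac=0.1, iters=50, seed=0):
        """Run Lloyd's k-means on a random sample, then label the full dataset."""
        rng = np.random.default_rng(seed)
        m = max(k, int(sample_frac * len(X)))
        sample = X[rng.choice(len(X), size=m, replace=False)]
        centers = sample[rng.choice(m, size=k, replace=False)]       # init from the sample
        for _ in range(iters):
            d = np.linalg.norm(sample[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = sample[labels == j].mean(axis=0)
        full_labels = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2).argmin(axis=1)
        return centers, full_labels

    # Example: three well-separated Gaussian blobs; only 10% of the points train the centroids.
    X = np.vstack([np.random.default_rng(i).normal(loc=3.0 * i, size=(500, 2)) for i in range(3)])
    centers, labels = sampled_kmeans(X, k=3)
    print(np.round(centers, 1))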

  4. Face recognition algorithms surpass humans matching faces over changes in illumination.

    PubMed

    O'Toole, Alice J; Jonathon Phillips, P; Jiang, Fang; Ayyad, Janet; Penard, Nils; Abdi, Hervé

    2007-09-01

    There has been significant progress in improving the performance of computer-based face recognition algorithms over the last decade. Although algorithms have been tested and compared extensively with each other, there has been remarkably little work comparing the accuracy of computer-based face recognition systems with humans. We compared seven state-of-the-art face recognition algorithms with humans on a face-matching task. Humans and algorithms determined whether pairs of face images, taken under different illumination conditions, were pictures of the same person or of different people. Three algorithms surpassed human performance matching face pairs prescreened to be "difficult" and six algorithms surpassed humans on "easy" face pairs. Although illumination variation continues to challenge face recognition algorithms, current algorithms compete favorably with humans. The superior performance of the best algorithms over humans, in light of the absolute performance levels of the algorithms, underscores the need to compare algorithms with the best current control--humans. PMID:17627050

  5. Robotic Follow Algorithm

    2005-03-30

    The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and can be used with thermal or visual tracking as well as other tracking methods such as radio frequency tags.

  6. Data Structures and Algorithms.

    ERIC Educational Resources Information Center

    Wirth, Niklaus

    1984-01-01

    Built-in data structures are the registers and memory words where binary values are stored; hard-wired algorithms are the fixed rules, embodied in electronic logic circuits, by which stored data are interpreted as instructions to be executed. Various topics related to these two basic elements of every computer program are discussed. (JN)

  7. General cardinality genetic algorithms

    PubMed

    Koehler; Bhattacharyya; Vose

    1997-01-01

    A complete generalization of the Vose genetic algorithm model from the binary to higher cardinality case is provided. Boolean AND and EXCLUSIVE-OR operators are replaced by multiplication and addition over rings of integers. Walsh matrices are generalized with finite Fourier transforms for higher cardinality usage. Comparison of results to the binary case are provided. PMID:10021767

  8. The Lure of Algorithms

    ERIC Educational Resources Information Center

    Drake, Michael

    2011-01-01

    One debate that periodically arises in mathematics education is the issue of how to teach calculation more effectively. "Modern" approaches seem to initially favour mental calculation, informal methods, and the development of understanding before introducing written forms, while traditionalists tend to champion particular algorithms. The debate is…

  9. The Xmath Integration Algorithm

    ERIC Educational Resources Information Center

    Bringslid, Odd

    2009-01-01

    The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…

  10. Toward Developing an Unbiased Scoring Algorithm for "NASA" and Similar Ranking Tasks.

    ERIC Educational Resources Information Center

    Lane, Irving M.; And Others

    1981-01-01

    Presents both logical and empirical evidence to illustrate that the conventional scoring algorithm for ranking tasks significantly underestimates the initial level of group ability and that Slevin's alternative scoring algorithm significantly overestimates the initial level of ability. Presents a modification of Slevin's algorithm which authors…

  11. The evaluation of the OSGLR algorithm for restructurable controls

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.

    1986-01-01

    The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques and the OSGLR algorithm in particular is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.

  12. Mapping robust parallel multigrid algorithms to scalable memory architectures

    NASA Technical Reports Server (NTRS)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.

  13. Mapping robust parallel multigrid algorithms to scalable memory architectures

    NASA Technical Reports Server (NTRS)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than line relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. The parallel implementation of a V-cycle multiple semi-coarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers is addressed. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. A mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited is described. The result is a robust and effective multigrid algorithm for distributed-memory machines.

  14. The SISCone and anti-k_t jet algorithms

    SciTech Connect

    Soyez,G.

    2008-04-07

    We illustrate how the midpoint and iterative cone (with progressive removal) algorithms fail to satisfy the fundamental requirements of infrared and collinear safety, causing divergences in the perturbative expansion. We introduce SISCone and the anti-k_t algorithms as respective replacements that do not have those failures, without any cost at the experimental level.

  15. A general algorithm for the construction of contour plots

    NASA Technical Reports Server (NTRS)

    Johnson, W.; Silva, F.

    1981-01-01

    An algorithm is described that performs the task of drawing equal level contours on a plane, which requires interpolation in two dimensions based on data prescribed at points distributed irregularly over the plane. The approach is described in detail. The computer program that implements the algorithm is documented and listed.

  16. Traffic Noise Ground Attenuation Algorithm Evaluation

    NASA Astrophysics Data System (ADS)

    Herman, Lloyd Allen

    The Federal Highway Administration traffic noise prediction program, STAMINA 2.0, was evaluated for its accuracy. In addition, the ground attenuation algorithm used in the Ontario ORNAMENT method was evaluated to determine its potential to improve these predictions. Field measurements of sound levels were made at 41 sites on I-440 in Nashville, Tennessee in order to both study noise barrier effectiveness and to evaluate STAMINA 2.0 and the performance of the ORNAMENT ground attenuation algorithm. The measurement sites, which contain large variations in terrain, included several cross sections. Further, all sites contain some type of barrier, natural or constructed, which could more fully expose the strength and weaknesses of the ground attenuation algorithms. The noise barrier evaluation was accomplished in accordance with American National Standard Methods for Determination of Insertion Loss of Outdoor Noise Barriers which resulted in an evaluation of this standard. The entire 7.2 mile length of I-440 was modeled using STAMINA 2.0. A multiple run procedure was developed to emulate the results that would be obtained if the ORNAMENT algorithm was incorporated into STAMINA 2.0. Finally, the predicted noise levels based on STAMINA 2.0 and STAMINA with the ORNAMENT ground attenuation algorithm were compared with each other and with the field measurements. It was found that STAMINA 2.0 overpredicted noise levels by an average of over 2 dB for the receivers on I-440, whereas, the STAMINA with ORNAMENT ground attenuation algorithm overpredicted noise levels by an average of less than 0.5 dB. The mean errors for the two predictions were found to be statistically different from each other, and the mean error for the prediction with the ORNAMENT ground attenuation algorithm was not found to be statistically different from zero. The STAMINA 2.0 program predicts little, if any, ground attenuation for receivers at typical first-row distances from highways where noise barriers

  17. Applying a Genetic Algorithm to Reconfigurable Hardware

    NASA Technical Reports Server (NTRS)

    Wells, B. Earl; Weir, John; Trevino, Luis; Patrick, Clint; Steincamp, Jim

    2004-01-01

    This paper investigates the feasibility of applying genetic algorithms to solve optimization problems that are implemented entirely in reconfigurable hardware. The paper highlights the performance/design space trade-offs that must be understood to effectively implement a standard genetic algorithm within a modern Field Programmable Gate Array, FPGA, reconfigurable hardware environment and presents a case-study where this stochastic search technique is applied to standard test-case problems taken from the technical literature. In this research, the targeted FPGA-based platform and high-level design environment was the Starbridge Hypercomputing platform, which incorporates multiple Xilinx Virtex II FPGAs, and the Viva™ graphical hardware description language.

  18. Reactive Collision Avoidance Algorithm

    NASA Technical Reports Server (NTRS)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation that passive algorithms cannot handle. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on

  19. Wire Detection Algorithms for Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia I.

    2002-01-01

    In this research we addressed the problem of obstacle detection for low altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcrafts. Since they are very thin, their detection early enough so that the pilot has enough time to take evasive action is difficult, as their images can be less than one or two pixels wide. Two approaches were explored for this purpose. The first approach involved a technique for sub-pixel edge detection and subsequent post processing, in order to reduce the false alarms. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer generated wire images. The performance of the algorithm was evaluated both, at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post processing is performed to remove false alarms due to clutter. The second approach involved the use of an example-based learning scheme namely, Support Vector Machines. The purpose of this approach was to explore the feasibility of an example-based learning based approach for the task of detecting wires from their images. Support Vector Machines (SVMs) have emerged as a promising pattern classification tool and have been used in various applications. It was found that this approach is not suitable for very thin wires and of course, not suitable at all for sub-pixel thick wires. High dimensionality of the data as such does not present a major problem for SVMs. However it is desirable to have a large number of training examples especially for high dimensional data. The main difficulty in using SVMs (or any other example-based learning

  20. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1987-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor, that support these conclusions are detailed.
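
    For concreteness, a minimal centralized (linear-depth) barrier of the kind compared in the paper is sketched below: each arriving thread increments a shared counter and the last arrival flips a sense flag to release the others, so the barrier can be reused across phases. The logarithmic tree-structured barriers discussed above distribute this counter over a tree; the Python threading implementation here is only an illustration, not the paper's code.

    import threading

    class CentralBarrier:
        """Centralized sense-reversing barrier: reusable across synchronization phases."""
        def __init__(self, n):
            self.n = n
            self.count = 0
            self.sense = False
            self.cond = threading.Condition()

        def wait(self):
            with self.cond:
                my_sense = not self.sense
                self.count += 1
                if self.count == self.n:            # last arrival: reset and release everyone
                    self.count = 0
                    self.sense = my_sense
                    self.cond.notify_all()
                else:
                    self.cond.wait_for(lambda: self.sense == my_sense)

    def worker(barrier):
        for _ in range(3):                           # three synchronized phases
            # ... fixed-length work for this phase would go here ...
            barrier.wait()

    b = CentralBarrier(4)
    threads = [threading.Thread(target=worker, args=(b,)) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()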

  1. Algorithms, games, and evolution

    PubMed Central

    Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh

    2014-01-01

    Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: “What algorithm could possibly achieve all this in a mere three and a half billion years?” In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution. PMID:24979793
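
    For readers unfamiliar with MWUA, the fragment below is a generic multiplicative-weights update over a fixed set of strategies: each weight is scaled down in proportion to the loss its strategy incurred and the weights are renormalized each round. It illustrates only the update rule itself, not the population-genetics correspondence established in the paper, and the learning rate eta is an arbitrary choice.

      import math

      def mwua(losses, eta=0.1):
          """Multiplicative weights update over a fixed set of strategies.
          `losses` is a list of per-round loss vectors, one entry per strategy."""
          n = len(losses[0])
          w = [1.0] * n
          for round_losses in losses:
              # scale each weight down in proportion to the loss it incurred
              w = [wi * math.exp(-eta * li) for wi, li in zip(w, round_losses)]
              total = sum(w)
              w = [wi / total for wi in w]    # renormalize to a probability distribution
          return w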

  2. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1989-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth is presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments that support these conclusions, performed on an eighteen-processor Flex/32 shared memory multiprocessor, are detailed.

  3. The Relegation Algorithm

    NASA Astrophysics Data System (ADS)

    Deprit, André; Palacián, Jesús; Deprit, Etienne

    2001-03-01

    The relegation algorithm extends the method of normalization by Lie transformations. Given a Hamiltonian that is a power series ℋ = ℋ₀ + ɛℋ₁ + ... of a small parameter ɛ, normalization constructs a map which converts the principal part ℋ₀ into an integral of the transformed system — relegation does the same for an arbitrary function ℋ[G]. If the Lie derivative induced by ℋ[G] is semi-simple, a double recursion produces the generator of the relegating transformation. The relegation algorithm is illustrated with an elementary example borrowed from galactic dynamics; the exercise serves as a standard against which to test software implementations. Relegation is also applied to the more substantial example of a Keplerian system perturbed by radiation pressure emanating from a rotating source.

  4. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    Genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is the best for the problem. We also stress the need for such a preprocessor both for quality (error) and for cost (complexity) of producing the solution. The preprocessor includes, as its first step, making use of all the information such as that of the nature/character of the function/system, search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightway through a GA without having/using the information/knowledge of the character of the system, we would consciously do a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
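
    The sketch below is a deliberately small real-coded GA for a one-dimensional unconstrained problem; the population size, crossover and mutation probabilities, and selection scheme are exactly the kinds of parameters the proposed preprocessor would choose per problem. It is an illustrative toy, not the GA included in the paper.

      import random

      def genetic_minimize(f, bounds, pop_size=50, generations=100,
                           crossover_p=0.9, mutation_p=0.1):
          """Minimal real-coded genetic algorithm for unconstrained minimization."""
          lo, hi = bounds
          pop = [random.uniform(lo, hi) for _ in range(pop_size)]
          for _ in range(generations):
              parents = sorted(pop, key=f)[:pop_size // 2]   # truncation selection
              children = []
              while len(children) < pop_size:
                  a, b = random.sample(parents, 2)
                  child = (a + b) / 2 if random.random() < crossover_p else a   # arithmetic crossover
                  if random.random() < mutation_p:
                      child += random.gauss(0.0, (hi - lo) * 0.05)              # Gaussian mutation
                  children.append(min(max(child, lo), hi))                      # keep inside the search space
              pop = children
          return min(pop, key=f)

      # example: minimize a simple quadratic on [-10, 10]
      best = genetic_minimize(lambda x: (x - 3.0) ** 2, (-10.0, 10.0))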

  5. MOPITT V7 Level 1 & Level 2 Release Announcement

    Atmospheric Science Data Center

    2016-08-02

    MOPITT V7 Level 1 & Level 2 Release Announcement Wednesday, August 10, 2016 ... Infrared Radiances) •   MOP01    - MOPITT Level 1 Radiances   Several significant retrieval algorithm and product ... Featured improvements in the V7 retrieval products include (1) the representation of changing atmospheric concentrations of N2O, (2) ...

  6. Algorithmic cooling in liquid-state nuclear magnetic resonance

    NASA Astrophysics Data System (ADS)

    Atia, Yosi; Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2016-01-01

    Algorithmic cooling is a method that employs thermalization to increase qubit purification level; namely, it reduces the qubit system's entropy. We utilized gradient ascent pulse engineering, an optimal control algorithm, to implement algorithmic cooling in liquid-state nuclear magnetic resonance. Various cooling algorithms were applied to the three qubits of 13C2-trichloroethylene, cooling the system beyond Shannon's entropy bound in several different ways. In particular, in one experiment a carbon qubit was cooled by a factor of 4.61. This work is a step towards potentially integrating tools of NMR quantum computing into in vivo magnetic-resonance spectroscopy.

  7. An efficient algorithm for function optimization: modified stem cells algorithm

    NASA Astrophysics Data System (ADS)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions to linear and non-linear problems near to the optimum for many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).

  8. Algorithm Visualization System for Teaching Spatial Data Algorithms

    ERIC Educational Resources Information Center

    Nikander, Jussi; Helminen, Juha; Korhonen, Ari

    2010-01-01

    TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…

  9. SPA: Solar Position Algorithm

    NASA Astrophysics Data System (ADS)

    Reda, Ibrahim; Andreas, Afshin

    2015-04-01

    The Solar Position Algorithm (SPA) calculates the solar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of +/- 0.0003 degrees based on the date, time, and location on Earth. SPA is implemented in C; in addition to being available for download, an online calculator using this code is available at http://www.nrel.gov/midc/solpos/spa.html.
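
    The relation below is the standard textbook expression for the solar zenith angle from latitude, declination, and hour angle; it only illustrates the underlying geometry and is far less accurate than the SPA itself, which applies much more elaborate corrections to reach the stated +/- 0.0003 degree uncertainty.

      import math

      def approx_solar_zenith(lat_deg, declination_deg, hour_angle_deg):
          """Textbook approximation: cos(theta_z) = sin(phi)sin(delta)
          + cos(phi)cos(delta)cos(h).  Illustrative only -- not the SPA."""
          phi = math.radians(lat_deg)
          delta = math.radians(declination_deg)
          h = math.radians(hour_angle_deg)
          cos_z = (math.sin(phi) * math.sin(delta)
                   + math.cos(phi) * math.cos(delta) * math.cos(h))
          return math.degrees(math.acos(max(-1.0, min(1.0, cos_z))))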

  10. Quantum defragmentation algorithm

    SciTech Connect

    Burgarth, Daniel; Giovannetti, Vittorio

    2010-08-15

    In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independently from the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.

  11. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  12. Sarsat location algorithms

    NASA Astrophysics Data System (ADS)

    Nardi, Jerry

    The Satellite Aided Search and Rescue (Sarsat) is designed to detect and locate distress beacons using satellite receivers. Algorithms used for calculating the positions of 406 MHz beacons and 121.5/243 MHz beacons are presented. The techniques for matching, resolving and averaging calculated locations from multiple satellite passes are also described along with results pertaining to single pass and multiple pass location estimate accuracy.

  13. Algorithms for builder guidelines

    SciTech Connect

    Balcomb, J.D.; Lekov, A.B.

    1989-06-01

    The Builder Guidelines are designed to make simple, appropriate guidelines available to builders for their specific localities. Builders may select from passive solar and conservation strategies with different performance potentials. They can then compare the calculated results for their particular house design with a typical house in the same location. Algorithms used to develop the Builder Guidelines are described. The main algorithms used are the monthly solar ratio (SLR) method for winter heating, the diurnal heat capacity (DHC) method for temperature swing, and a new simplified calculation method (McCool) for summer cooling. This paper applies the algorithms to estimate the performance potential of passive solar strategies, and the annual heating and cooling loads of various combinations of conservation and passive solar strategies. The basis of the McCool method is described. All three methods are implemented in a microcomputer program used to generate the guideline numbers. Guidelines for Denver, Colorado, are used to illustrate the results. The structure of the guidelines and worksheet booklets are also presented. 5 refs., 3 tabs.

  14. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  15. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  16. Algorithm for navigated ESS.

    PubMed

    Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L

    2013-12-01

    ENT navigation has given new opportunities in performing Endoscopic Sinus Surgery (ESS) and improving surgical outcome of the patients' treatment. ESS assisted by a navigation system could be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that the NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and thus not used on a daily basis. This paper presents an algorithm for use of a navigation system for basic ESS in the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system. They are as follows: 1) nasal vestibule unit, 2) OMC unit, 3) anterior ethmoid unit, 4) posterior ethmoid unit, and 5) sphenoid unit. Each unit has a shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining one of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically. PMID:24260766

  17. Evaluating super resolution algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun

    2011-01-01

    This study intends to establish a sound testing and evaluation methodology based upon the human visual characteristics for appreciating the image restoration accuracy; in addition to comparing the subjective results with predictions by some objective evaluation methods. In total, six different super resolution (SR) algorithms - such as iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), a non-uniform interpolation, and frequency domain approach - were selected. The performance comparison between the SR algorithms in terms of their restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the paired comparison method that involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods are implemented. Consequently, POCS and a non-uniform interpolation outperformed the others for an ideal situation, while restoration based methods appear more accurate to the HR image in a real world case where prior information about the blur kernel remains unknown. However, the noise-added-image could not be restored successfully by any of those methods. The latest International Commission on Illumination (CIE) standard color difference equation CIEDE2000 was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of those SR algorithms.

  18. [Algorithm for treating preoperative anemia].

    PubMed

    Bisbe Vives, E; Basora Macaya, M

    2015-06-01

    Hemoglobin optimization and treatment of preoperative anemia in surgery with a moderate to high risk of surgical bleeding reduces the rate of transfusions and improves hemoglobin levels at discharge and can also improve postoperative outcomes. To this end, we need to schedule preoperative visits sufficiently in advance to treat the anemia. The treatment algorithm we propose comes with a simple checklist to determine whether we should refer the patient to a specialist or if we can treat the patient during the same visit. With the blood count test and additional tests for iron metabolism, inflammation parameter and glomerular filtration rate, we can decide whether to start the treatment with intravenous iron alone or erythropoietin with or without iron. With significant anemia, a visit after 15 days might be necessary to observe the response and supplement the treatment if required. The hemoglobin objective will depend on the type of surgery and the patient's characteristics. PMID:26320341

  19. Improved Global Ocean Color Using Polymer Algorithm

    NASA Astrophysics Data System (ADS)

    Steinmetz, Francois; Ramon, Didier; Deschamps, Pierre-Yves; Stum, Jacques

    2010-12-01

    A global ocean color product has been developed based on the use of the POLYMER algorithm to correct atmospheric scattering and sun glint and to process the data to a Level 2 ocean color product. Thanks to the use of this algorithm, the coverage and accuracy of the MERIS ocean color product have been significantly improved when compared to the standard product, therefore increasing its usefulness for global ocean monitoring applications like GLOBCOLOUR. We will present the latest developments of the algorithm, its first application to MODIS data and its validation against in-situ data from the MERMAID database. Examples will be shown of global NRT chlorophyll maps produced by CLS with POLYMER for operational applications like fishing or oil and gas industry, as well as its use by Scripps for a NASA study of the Beaufort and Chukchi seas.

  20. Factorization using the quadratic sieve algorithm

    SciTech Connect

    Davis, J.A.; Holdridge, D.B.

    1983-12-01

    Since the cryptosecurity of the RSA two key cryptoalgorithm is no greater than the difficulty of factoring the modulus (product of two secret primes), a code that implements the Quadratic Sieve factorization algorithm on the CRAY I computer has been developed at the Sandia National Laboratories to determine as sharply as possible the current state-of-the-art in factoring. Because all viable attacks on RSA thus far proposed are equivalent to factorization of the modulus, sharper bounds on the computational difficulty of factoring permit improved estimates for the size of RSA parameters needed for given levels of cryptosecurity. Analysis of the Quadratic Sieve indicates that it may be faster than any previously published general purpose algorithm for factoring large integers. The high speed of the CRAY I coupled with the capability of the CRAY to pipeline certain vectorized operations make this algorithm (and code) the front runner in current factoring techniques.
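
    The toy sketch below illustrates only the congruence-of-squares idea at the heart of the quadratic sieve: collect values x for which Q(x) = x^2 - n factors completely over a small factor base, then combine relations whose exponent vectors are even to obtain X^2 ≡ Y^2 (mod n) and a factor via gcd(X - Y, n). A real implementation sieves rather than trial-divides and solves a large linear system over GF(2); the smoothness bound and search span here are arbitrary and suitable only for tiny n.

      import math
      from itertools import combinations

      def legendre(a, p):
          return pow(a, (p - 1) // 2, p)

      def toy_quadratic_sieve(n, bound=30, span=200):
          """Brute-force congruence-of-squares toy; not the CRAY-scale sieve."""
          primes = [p for p in range(2, bound)
                    if all(p % q for q in range(2, int(p ** 0.5) + 1))]
          base = [p for p in primes if p == 2 or legendre(n, p) == 1]
          relations = []                               # (x, Q(x), exponent vector)
          x0 = math.isqrt(n) + 1
          for x in range(x0, x0 + span):
              q, exps = x * x - n, [0] * len(base)
              for i, p in enumerate(base):
                  while q % p == 0:
                      q //= p
                      exps[i] += 1
              if q == 1:                               # Q(x) is smooth over the base
                  relations.append((x, x * x - n, exps))
          for r in range(1, len(relations) + 1):       # brute-force GF(2) combination
              for subset in combinations(relations, r):
                  if all(sum(e[i] for _, _, e in subset) % 2 == 0 for i in range(len(base))):
                      X = math.prod(x for x, _, _ in subset) % n
                      Y = math.isqrt(math.prod(q for _, q, _ in subset)) % n
                      d = math.gcd(X - Y, n)
                      if 1 < d < n:
                          return d, n // d
          return None

      print(toy_quadratic_sieve(8051))   # 8051 = 83 * 97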

  1. Factorization using the quadratic sieve algorithm

    SciTech Connect

    Davis, J.A.; Holdridge, D.B.

    1983-01-01

    Since the cryptosecurity of the RSA two key cryptoalgorithm is no greater than the difficulty of factoring the modulus (product of two secret primes), a code that implements the Quadratic Sieve factorization algorithm on the CRAY I computer has been developed at the Sandia National Laboratories to determine as sharply as possible the current state-of-the-art in factoring. Because all viable attacks on RSA thus far proposed are equivalent to factorization of the modulus, sharper bounds on the computational difficulty of factoring permit improved estimates for the size of RSA parameters needed for given levels of cryptosecurity. Analysis of the Quadratic Sieve indicates that it may be faster than any previously published general purpose algorithm for factoring large integers. The high speed of the CRAY I coupled with the capability of the CRAY to pipeline certain vectorized operations make this algorithm (and code) the front runner in current factoring techniques.

  2. New convergence estimates for multigrid algorithms

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.

    1987-10-01

    In this paper, new convergence estimates are proved for both symmetric and nonsymmetric multigrid algorithms applied to symmetric positive definite problems. Our theory relates the convergence of multigrid algorithms to a "regularity and approximation" parameter α ∈ (0, 1) and the number of relaxations m. We show that for the symmetric and nonsymmetric ν-cycles, the multigrid iteration converges for any positive m at a rate which deteriorates no worse than 1 − c·j^(−(1−α)/α), where j is the number of grid levels. We then define a generalized ν-cycle algorithm which involves exponentially increasing (for example, doubling) the number of smoothings on successively coarser grids. We show that the resulting symmetric and nonsymmetric multigrid iterations converge for any α with rates that are independent of the mesh size. The theory is presented in an abstract setting which can be applied to finite element multigrid and finite difference multigrid methods.

  3. Linear-scaling and parallelisable algorithms for stochastic quantum chemistry

    NASA Astrophysics Data System (ADS)

    Booth, George H.; Smart, Simon D.; Alavi, Ali

    2014-07-01

    For many decades, quantum chemical method development has been dominated by algorithms which involve increasingly complex series of tensor contractions over one-electron orbital spaces. Procedures for their derivation and implementation have evolved to require the minimum amount of logic and rely heavily on computationally efficient library-based matrix algebra and optimised paging schemes. In this regard, the recent development of exact stochastic quantum chemical algorithms to reduce computational scaling and memory overhead requires a contrasting algorithmic philosophy, but one which when implemented efficiently can achieve higher accuracy/cost ratios with small random errors. Additionally, they can exploit the continuing trend for massive parallelisation which hinders the progress of deterministic high-level quantum chemical algorithms. In the Quantum Monte Carlo community, stochastic algorithms are ubiquitous but the discrete Fock space of quantum chemical methods is often unfamiliar, and the methods introduce new concepts required for algorithmic efficiency. In this paper, we explore these concepts and detail an algorithm used for Full Configuration Interaction Quantum Monte Carlo (FCIQMC), which is implemented and available in MOLPRO and as a standalone code, and is designed for high-level parallelism and linear-scaling with walker number. Many of the algorithms are also in use in, or can be transferred to, other stochastic quantum chemical methods and implementations. We apply these algorithms to the strongly correlated chromium dimer to demonstrate their efficiency and parallelism.

  4. SLAP lesions: a treatment algorithm.

    PubMed

    Brockmeyer, Matthias; Tompkins, Marc; Kohn, Dieter M; Lorbach, Olaf

    2016-02-01

    Tears of the superior labrum involving the biceps anchor are a common entity, especially in athletes, and may highly impair shoulder function. If conservative treatment fails, successful arthroscopic repair of symptomatic SLAP lesions has been described in the literature particularly for young athletes. However, the results in throwing athletes are less successful with a significant amount of patients who will not regain their pre-injury level of performance. The clinical results of SLAP repairs in middle-aged and older patients are mixed, with worse results and higher revision rates as compared to younger patients. In this population, tenotomy or tenodesis of the biceps tendon is a viable alternative to SLAP repairs in order to improve clinical outcomes. The present article introduces a treatment algorithm for SLAP lesions based upon the recent literature as well as the authors' clinical experience. The type of lesion, age of patient, concomitant lesions, and functional requirements, as well as sport activity level of the patient, need to be considered. Moreover, normal variations and degenerative changes in the SLAP complex have to be distinguished from "true" SLAP lesions in order to improve results and avoid overtreatment. The suggestion for a treatment algorithm includes: type I: conservative treatment or arthroscopic debridement, type II: SLAP repair or biceps tenotomy/tenodesis, type III: resection of the instable bucket-handle tear, type IV: SLAP repair (biceps tenotomy/tenodesis if >50 % of biceps tendon is affected), type V: Bankart repair and SLAP repair, type VI: resection of the flap and SLAP repair, and type VII: refixation of the anterosuperior labrum and SLAP repair. PMID:26818554

  5. A Danger-Theory-Based Immune Network Optimization Algorithm

    PubMed Central

    Li, Tao; Xiao, Xin; Shi, Yuanquan

    2013-01-01

    Existing artificial immune optimization algorithms reflect a number of shortcomings, such as premature convergence and poor local search ability. This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet. The danger theory emphasizes that danger signals generated from changes of environments will guide different levels of immune responses, and the areas around danger signals are called danger zones. By defining the danger zone to calculate danger signals for each antibody, the algorithm adjusts antibodies' concentrations through its own danger signals and then triggers immune responses of self-regulation. So the population diversity can be maintained. Experimental results show that the algorithm has more advantages in the solution quality and diversity of the population. Compared with influential optimization algorithms, CLONALG, opt-aiNet, and dopt-aiNet, the algorithm has smaller error values and higher success rates and can find solutions to meet the accuracies within the specified function evaluation times. PMID:23483853

  6. An improved NAS-RIF algorithm for blind image restoration

    NASA Astrophysics Data System (ADS)

    Liu, Ning; Jiang, Yanbin; Lou, Shuntian

    2007-01-01

    Image restoration is widely applied in many areas, but when operating on images with different scales for the representation of pixel intensity levels or low SNR, the traditional restoration algorithm lacks validity, inducing noise amplification and ringing artifacts and exhibiting poor convergence. In this paper, an improved NAS-RIF algorithm is proposed to overcome the shortcomings of the traditional algorithm. The improved algorithm proposes a new cost function which adds a space-adaptive regularization term and a disunity gain of the adaptive filter. In determining the support region, a pre-segmentation is used to form it close to the object in the image. Compared with the traditional algorithm, simulations show that the improved algorithm achieves better convergence and noise resistance and provides a better estimate of the original image.

  7. A set-membership approach to blind channel equalization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Yue-ming

    2013-03-01

    The constant modulus algorithm (CMA) has low computational complexity but exhibits slow convergence and possible convergence to local minima; the affine projection version of the CMA (AP-CMA) alleviates the speed limitations of the CMA. However, the computational complexity has been a weak point in the implementation of AP-CMA. To reduce the computational complexity of the adaptive filtering algorithm, a new AP-CMA algorithm based on set membership (SM-AP-CMA) is proposed. The new algorithm combines a bounded error specification on the adaptive filter with the concept of data-reusing. Simulations confirmed that the convergence rate of the proposed algorithm is significantly faster; meanwhile, the excess mean square error can be maintained at a relatively low level, with a substantial reduction in the number of updates compared with its conventional counterpart.
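
    For context, the fragment below is a plain stochastic-gradient CMA equalizer update (the baseline that affine-projection and set-membership variants accelerate); the tap count, step size, and modulus target are illustrative assumptions, and the set-membership machinery of the proposed algorithm is not reproduced here.

      import numpy as np

      def cma_equalize(x, num_taps=11, mu=1e-3, r2=1.0):
          """Plain CMA baseline: x is the received complex baseband sequence,
          r2 is the target squared modulus of the equalizer output."""
          w = np.zeros(num_taps, dtype=complex)
          w[num_taps // 2] = 1.0                     # centre-spike initialisation
          out = np.zeros(len(x), dtype=complex)
          for k in range(num_taps, len(x)):
              xk = x[k - num_taps:k][::-1]           # regressor, most recent sample first
              y = np.vdot(w, xk)                     # equalizer output w^H x
              e = y * (np.abs(y) ** 2 - r2)          # constant-modulus error term
              w = w - mu * e.conj() * xk             # stochastic-gradient update
              out[k] = y
          return w, out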

  8. A controllable sensor management algorithm capable of learning

    NASA Astrophysics Data System (ADS)

    Osadciw, Lisa A.; Veeramacheneni, Kalyan K.

    2005-03-01

    Sensor management technology progress is challenged by the geographic space it spans, the heterogeneity of the sensors, and the real-time timeframes within which plans controlling the assets are executed. This paper presents a new sensor management paradigm and demonstrates its application in a sensor management algorithm designed for a biometric access control system. This approach consists of an artificial intelligence (AI) algorithm focused on uncertainty measures, which makes the high level decisions to reduce uncertainties and interfaces with the user, integrated cohesively with a bottom up evolutionary algorithm, which optimizes the sensor network's operation as determined by the AI algorithm. The sensor management algorithm presented is composed of a Bayesian network, the AI algorithm component, and a swarm optimization algorithm, the evolutionary algorithm. Thus, the algorithm can change its own performance goals in real-time and will modify its own decisions based on observed measures within the sensor network. The definition of the measures as well as the Bayesian network determine the robustness of the algorithm and its utility in reacting dynamically to changes in the global system.

  9. An Algorithm for Autonomous Formation Obstacle Avoidance

    NASA Astrophysics Data System (ADS)

    Cruz, Yunior I.

    The level of human interaction with Unmanned Aerial Systems varies greatly from remotely piloted aircraft to fully autonomous systems. In the latter end of the spectrum, the challenge lies in designing effective algorithms to dictate the behavior of the autonomous agents. A swarm of autonomous Unmanned Aerial Vehicles requires collision avoidance and formation flight algorithms to negotiate environmental challenges it may encounter during the execution of its mission, which may include obstacles and chokepoints. In this work, a simple algorithm is developed to allow a formation of autonomous vehicles to perform point to point navigation while avoiding obstacles and navigating through chokepoints. Emphasis is placed on maintaining formation structures. Rather than breaking formation and individually navigating around the obstacle or through the chokepoint, vehicles are required to assemble into appropriately sized/shaped sub-formations, bifurcate around the obstacle or negotiate the chokepoint, and reassemble into the original formation at the far side of the obstruction. The algorithm receives vehicle and environmental properties as inputs and outputs trajectories for each vehicle from start to the desired ending location. Simulation results show that the algorithm safely routes all vehicles past the obstruction while adhering to the aforementioned requirements. The formation adapts and successfully negotiates the obstacles and chokepoints in its path while maintaining proper vehicle separation.

  10. Connected-Health Algorithm: Development and Evaluation.

    PubMed

    Vlahu-Gjorgievska, Elena; Koceski, Saso; Kulev, Igor; Trajkovik, Vladimir

    2016-04-01

    Nowadays, there is growing interest in the adoption of novel ICT technologies in the field of medical monitoring and personal health care systems. This paper proposes the design of a connected-health algorithm inspired by the social computing paradigm. The purpose of the algorithm is to give a recommendation for performing a specific activity that will improve the user's health, based on his health condition and a set of knowledge derived from the history of the user and of users with similar attitudes. The algorithm could help users gain greater confidence in choosing physical activities that will improve their health. The proposed algorithm has been experimentally validated using real data collected from a community of 1000 active users. The results showed that the recommended physical activity, which contributed towards a weight loss of at least 0.5 kg, is found in the first half of the ordered list of recommendations generated by the algorithm with probability > 0.6 at a 1% level of significance. PMID:26922593

  11. Design of robust systolic algorithms

    SciTech Connect

    Varman, P.J.; Fussell, D.S.

    1983-01-01

    A primary reason for the susceptibility of systolic algorithms to faults is their strong dependence on the interconnection between the processors in a systolic array. A technique to transform any linear systolic algorithm into an equivalent pipelined algorithm that executes on arbitrary trees is presented. 5 references.

  12. High-performance combinatorial algorithms

    SciTech Connect

    Pinar, Ali

    2003-10-31

    Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.

  13. Multipartite entanglement in quantum algorithms

    SciTech Connect

    Bruss, D.; Macchiavello, C.

    2011-05-15

    We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyze the multipartite entanglement properties in the Deutsch-Jozsa, Grover, and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.

  14. Algorithm for Constructing Contour Plots

    NASA Technical Reports Server (NTRS)

    Johnson, W.; Silva, F.

    1984-01-01

    General computer algorithm developed for construction of contour plots. Algorithm accepts as input data values at set of points irregularly distributed over plane. Algorithm based on interpolation scheme: points in plane connected by straight-line segments to form set of triangles. Program written in FORTRAN IV.

  15. Polynomial Algorithms for Item Matching.

    ERIC Educational Resources Information Center

    Armstrong, Ronald D.; Jones, Douglas H.

    1992-01-01

    Polynomial algorithms are presented that are used to solve selected problems in test theory, and computational results from sample problems with several hundred decision variables are provided that demonstrate the benefits of these algorithms. The algorithms are based on optimization theory in networks (graphs). (SLD)

  16. Verifying a Computer Algorithm Mathematically.

    ERIC Educational Resources Information Center

    Olson, Alton T.

    1986-01-01

    Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
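
    A minimal Python rendering of the half-interval search idea is given below (the article presents and verifies its own program listing, which is not reproduced here): the root is bracketed by a sign change and the interval is halved until it is smaller than a tolerance.

      def half_interval_root(f, a, b, tol=1e-10, max_iter=200):
          """Half-interval (bisection) search for a root of f on [a, b].
          Requires f(a) and f(b) to have opposite signs; the bracketing
          interval is halved each iteration, so the error halves each step."""
          fa, fb = f(a), f(b)
          if fa * fb > 0:
              raise ValueError("f(a) and f(b) must bracket a root")
          for _ in range(max_iter):
              m = (a + b) / 2.0
              fm = f(m)
              if fm == 0 or (b - a) / 2.0 < tol:
                  return m
              if fa * fm < 0:               # root lies in the left half
                  b, fb = m, fm
              else:                         # root lies in the right half
                  a, fa = m, fm
          return (a + b) / 2.0

      # example: root of x^3 - 2x - 5 near x = 2.0945
      print(half_interval_root(lambda x: x**3 - 2*x - 5, 2.0, 3.0))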

  17. Improved multiprocessor garbage collection algorithms

    SciTech Connect

    Newman, I.A.; Stallard, R.P.; Woodward, M.C.

    1983-01-01

    Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.

  18. Efficient multicomponent fuel algorithm

    NASA Astrophysics Data System (ADS)

    Torres, D. J.; O'Rourke, P. J.; Amsden, A. A.

    2003-03-01

    We derive equations for multicomponent fuel evaporation in airborne fuel droplets and wall films, and implement the model into KIVA-3V. Temporal and spatial variations in liquid droplet composition and temperature are not modelled but solved for by discretizing the interior of the droplet in an implicit and computationally efficient way. We find that an interior discretization is necessary to correctly compute the evolution of the droplet composition. The details of the one-dimensional numerical algorithm are described. Numerical simulations of multicomponent evaporation are performed for single droplets and compared to experimental data.

  19. A new minimax algorithm

    NASA Technical Reports Server (NTRS)

    Vardi, A.

    1984-01-01

    The representation min t s.t. f_i(x) − t ≤ 0 for all i is examined. An active set strategy is designed with three classes of functions: active, semi-active, and non-active. This technique will help in preventing zigzagging, which often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. Also, a trust region strategy is used in which at each iteration there is a sphere around the current point in which the local approximation of the function is trusted. The algorithm is implemented into a successful computer program. Numerical results are provided.

  20. Algorithm For Automatic Road Recognition On Digitized Map Images

    NASA Astrophysics Data System (ADS)

    Zhu, Zhipu; Kim, Yongmin

    1989-09-01

    This paper presents an algorithm to detect road lines on digitized map images. This algorithm detects road lines based on object shape (line thickness) and gray level values. The road detection process is accomplished in two steps: road line extraction and road tracking. The road line extraction consists of level slicing, morphological filtering, and connected component analysis. The road tracking routine is capable of connecting broken road lines caused by the overlapping of text labels. The algorithm has been implemented on an IBM PC/AT-based image processing system and applied to various map images.

  1. Join-Graph Propagation Algorithms

    PubMed Central

    Mateescu, Robert; Kask, Kalev; Gogate, Vibhav; Dechter, Rina

    2010-01-01

    The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes. PMID:20740057

  2. Constructive neural network learning algorithms

    SciTech Connect

    Parekh, R.; Yang, Jihoon; Honavar, V.

    1996-12-31

    Constructive Algorithms offer an approach for incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad-hoc a-priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., pocket algorithm, thermal perceptron, and barycentric correction procedure). Several constructive algorithms including tower, pyramid, tiling, upstart, and perceptron cascade have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.

  3. Algorithms for physical segregation of coal

    NASA Astrophysics Data System (ADS)

    Ganguli, Rajive

    The capability for on-line measurement of the quality characteristics of conveyed coal now enables mine operators to take advantage of the inherent heterogeneity of those streams and split them into wash and no-wash stocks. Relative to processing the entire stream, this reduces the amount of coal that must be washed at the mine and thereby reduces processing costs, recovery losses, and refuse generation levels. In this dissertation, two classes of segregation algorithms, using time series models and moving windows are developed and demonstrated using field and simulated data. In all of the developed segregation algorithms, a "cut-off" ash value was computed for coal scanned on the running conveyor belt by the ash analyzer. It determined if the coal was sent to the wash pile or to the nowash pile. Forecasts from time series models, at various lead times ahead, were used in one class of the developed algorithms, to determine the cut-off ash levels. The time series models were updated from time to time to reflect changes in process. Statistical Process Control (SPC) techniques were used to determine if an update was necessary at a given time. When an update was deemed necessary, optimization techniques were used to determine the next best set of model parameters. In the other class of segregation algorithms, "few" of the immediate past observations were used to determine the cut-off ash value. These "few" observations were called the window width . The window width was kept constant in some variants of this class of algorithms. The other variants of this class were an improvement over the fixed window width algorithms. Here, the window widths were varied rather than kept constant. In these cases, SPC was used to determine the window width at any instant. Statistics of the empirical distribution and the normal distribution were used in computation of the cut-off ash value in all the variants of this class of algorithms. The good performance of the developed algorithms
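
    The fragment below is a hypothetical illustration of the fixed-window variant described above: the cut-off ash value is taken as an empirical quantile of the most recent on-line ash readings, and each new reading is routed to the wash or no-wash pile by comparison with that cut-off. The window length, target wash fraction, and start-up rule are invented for the example and do not come from the dissertation.

      from collections import deque

      def route_coal(ash_stream, window=50, wash_fraction=0.4):
          """Moving-window cut-off: readings above the cut-off go to the wash
          pile so that roughly `wash_fraction` of the stream is diverted."""
          recent = deque(maxlen=window)
          decisions = []
          for ash in ash_stream:
              if len(recent) < 5:                    # not enough history yet
                  cutoff = ash                       # arbitrary start-up choice
              else:
                  ordered = sorted(recent)
                  k = int((1.0 - wash_fraction) * (len(ordered) - 1))
                  cutoff = ordered[k]                # empirical quantile of the window
              decisions.append("wash" if ash > cutoff else "no-wash")
              recent.append(ash)
          return decisions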

  4. Fast algorithms for transport models. Final report

    SciTech Connect

    Manteuffel, T.A.

    1994-10-01

    This project has developed a multigrid in space algorithm for the solution of the S{sub N} equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space which is accurate in the thick diffusion limit. It uses a red/black two-cell {mu}-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid in space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state-of-the-art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel in angle algorithm was developed. A parallel version of the multilevel in angle algorithm has also been developed. Upon first glance, the shifted transport sweep has limited parallelism. Once the right-hand-side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial value ODE`s. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel in angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M)).

  5. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
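
    As a small illustration of the binarization step described above, the helper below codes a wind direction as onshore (1) or offshore (0) relative to an assumed shore-normal direction; the 90-degree threshold and the shore-normal parameter are assumptions made for the example, and the gridding of off-grid station data is not shown.

      def binarize_wind(direction_deg, shore_normal_deg):
          """Illustrative binarization in the spirit of the CEM inputs:
          a wind within 90 degrees of the assumed onshore normal is coded 1
          (onshore), otherwise 0 (offshore)."""
          diff = (direction_deg - shore_normal_deg + 180.0) % 360.0 - 180.0
          return 1 if abs(diff) < 90.0 else 0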

  6. NEKF IMM tracking algorithm

    NASA Astrophysics Data System (ADS)

    Owen, Mark W.; Stubberud, Allen R.

    2003-12-01

    Highly maneuvering threats are a major concern for the Navy and the DoD and the technology discussed in this paper is intended to help address this issue. A neural extended Kalman filter algorithm has been embedded in an interacting multiple model architecture for target tracking. The neural extended Kalman filter algorithm is used to improve motion model prediction during maneuvers. With a better target motion model, noise reduction can be achieved through a maneuver. Unlike the interacting multiple model architecture which uses a high process noise model to hold a target through a maneuver with poor velocity and acceleration estimates, a neural extended Kalman filter is used to predict corrections to the velocity and acceleration states of a target through a maneuver. The neural extended Kalman filter estimates the weights of a neural network, which in turn are used to modify the state estimate predictions of the filter as measurements are processed. The neural network training is performed on-line as data is processed. In this paper, the simulation results of a tracking problem using a neural extended Kalman filter embedded in an interacting multiple model tracking architecture are shown. Preliminary results on the 2nd Benchmark Problem are also given.

  7. NEKF IMM tracking algorithm

    NASA Astrophysics Data System (ADS)

    Owen, Mark W.; Stubberud, Allen R.

    2004-01-01

    Highly maneuvering threats are a major concern for the Navy and the DoD and the technology discussed in this paper is intended to help address this issue. A neural extended Kalman filter algorithm has been embedded in an interacting multiple model architecture for target tracking. The neural extended Kalman filter algorithm is used to improve motion model prediction during maneuvers. With a better target motion model, noise reduction can be achieved through a maneuver. Unlike the interacting multiple model architecture which uses a high process noise model to hold a target through a maneuver with poor velocity and acceleration estimates, a neural extended Kalman filter is used to predict corrections to the velocity and acceleration states of a target through a maneuver. The neural extended Kalman filter estimates the weights of a neural network, which in turn are used to modify the state estimate predictions of the filter as measurements are processed. The neural network training is performed on-line as data is processed. In this paper, the simulation results of a tracking problem using a neural extended Kalman filter embedded in an interacting multiple model tracking architecture are shown. Preliminary results on the 2nd Benchmark Problem are also given.

  8. Bit-level systolic arrays

    SciTech Connect

    De Groot, A.J.

    1989-01-01

    In this dissertation the author considered the design of bit-level systolic arrays where the basic computational unit consists of a simple one-bit logic unit, so that the systolic process is carried out at the level of individual bits. In order to pursue the foregoing research, several areas have been studied. First, the concept of systolic processing has been investigated. Several important algorithms were investigated and put into systolic form using graph-theoretic methods. The bit-level, word-level and block-level systolic arrays which have been designed for these algorithms exhibit linear speedup with respect to the number of processors and exhibit efficiency close to 100%, even with low interprocessor communication bandwidth. Block-level systolic arrays deal with blocks of data with block-level operations and communications. Block-level systolic arrays improve cell efficiency and are more efficient than their word-level counterparts. A comparison of bit-level, word-level and block-level systolic arrays was performed. In order to verify the foregoing theory and analysis, a systolic processor called the SPRINT was developed to provide an environment where bit-level, word-level and block-level systolic algorithms could be confirmed by direct implementation rather than by computer simulation. The SPRINT is a supercomputer-class, 64-element multiprocessor with a reconfigurable interconnection network. The theory has been confirmed by the execution on the SPRINT of the bit-level, word-level, and block-level systolic algorithms presented in the dissertation.

  9. The algorithmic origins of life

    PubMed Central

    Walker, Sara Imari; Davies, Paul C. W.

    2013-01-01

    Although it has been notoriously difficult to pin down precisely what is it that makes life so distinctive and remarkable, there is general agreement that its informational aspect is one key property, perhaps the key property. The unique informational narrative of living systems suggests that life may be characterized by context-dependent causal influences, and, in particular, that top-down (or downward) causation—where higher levels influence and constrain the dynamics of lower levels in organizational hierarchies—may be a major contributor to the hierarchal structure of living systems. Here, we propose that the emergence of life may correspond to a physical transition associated with a shift in the causal structure, where information gains direct and context-dependent causal efficacy over the matter in which it is instantiated. Such a transition may be akin to more traditional physical transitions (e.g. thermodynamic phase transitions), with the crucial distinction that determining which phase (non-life or life) a given system is in requires dynamical information and therefore can only be inferred by identifying causal architecture. We discuss some novel research directions based on this hypothesis, including potential measures of such a transition that may be amenable to laboratory study, and how the proposed mechanism corresponds to the onset of the unique mode of (algorithmic) information processing characteristic of living systems. PMID:23235265

  10. Automated Vectorization of Decision-Based Algorithms

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision- based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.

  11. A Polynomial Time, Numerically Stable Integer Relation Algorithm

    NASA Technical Reports Server (NTRS)

    Ferguson, Helaman R. P.; Bailey, Daivd H.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Let x = (x1, x2, ..., xn) be a vector of real numbers. x is said to possess an integer relation if there exist integers ai, not all zero, such that a1x1 + a2x2 + ... + anxn = 0. Beginning in 1977 several algorithms (with proofs) have been discovered to recover the ai given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically. It often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.

  12. Control Algorithms For Liquid-Cooled Garments

    NASA Technical Reports Server (NTRS)

    Drew, B.; Harner, K.; Hodgson, E.; Homa, J.; Jennings, D.; Yanosy, J.

    1988-01-01

    Three algorithms developed for control of cooling in protective garments. Metabolic rate inferred from temperatures of cooling liquid outlet and inlet, suitably filtered to account for thermal lag of human body. Temperature at inlet adjusted to value giving maximum comfort at inferred metabolic rate. Applicable to space suits, used for automatic control of cooling in suits worn by workers in radioactive, polluted, or otherwise hazardous environments. More effective than manual control, subject to frequent, overcompensated adjustments as level of activity varies.
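
    A hedged sketch of the control idea: infer metabolic rate from the filtered inlet/outlet temperature difference and map it to an inlet set point. The flow rate, specific heat, filter constant, and comfort schedule below are illustrative assumptions, not values from the NASA work.

```python
def update_inlet_setpoint(t_in_c, t_out_c, state, flow_kg_s=0.03, cp_j_per_kg_k=4186.0, alpha=0.05):
    """Return a new coolant inlet-temperature set point (deg C)."""
    q_removed_w = flow_kg_s * cp_j_per_kg_k * (t_out_c - t_in_c)     # heat picked up by the loop
    # low-pass filter the estimate to account for the thermal lag of the body
    state["met_rate_w"] += alpha * (q_removed_w - state["met_rate_w"])
    met = state["met_rate_w"]
    # illustrative comfort schedule: cooler inlet at higher inferred workload
    if met < 150.0:
        return 25.0
    if met < 350.0:
        return 20.0
    return 15.0

state = {"met_rate_w": 100.0}
print(update_inlet_setpoint(18.0, 23.0, state), round(state["met_rate_w"], 1))
```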

  13. Study of image matching algorithm and sub-pixel fitting algorithm in target tracking

    NASA Astrophysics Data System (ADS)

    Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu

    2015-03-01

    Image correlation matching is a tracking method that searches for the region most similar to a target template, based on a correlation measure between two images. Because there is no need to segment the image and the computation required is small, image correlation matching is a basic method of target tracking. This paper mainly studies a gray-scale image matching algorithm whose precision is at the sub-pixel level. The matching algorithm used in this paper is the SAD (Sum of Absolute Differences) method, which excels in real-time systems because of its low computational complexity. The SAD method is introduced first, along with the most frequently used sub-pixel fitting algorithms. These fitting algorithms cannot be used in real-time systems because they are too complex; however, target tracking often requires high real-time performance. Based on this consideration, we put forward a fitting algorithm, named the paraboloidal fitting algorithm, that is simple and easily realized in a real-time system. The result of this algorithm is compared with that of a surface fitting algorithm through image matching simulation. By comparison, the precision difference between these two algorithms is small, less than 0.01 pixel. To study the influence of target rotation on the precision of image matching, a camera rotation experiment was carried out. The detector used in the camera is a CMOS detector. It was fixed to an arc pendulum table, and pictures were taken with the camera rotated to different angles. A subarea in the original picture was chosen as the template, and the best matching spot was searched for using the image matching algorithm mentioned above. The result shows that the matching error grows as the target rotation angle increases, in an approximately linear relation. Finally, the influence of noise on matching precision was researched. Gaussian noise and salt-and-pepper noise were added to the image respectively, and the image
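
    A simplified sketch of SAD matching with parabolic sub-pixel refinement; for brevity it fits one parabola per axis rather than the paper's full paraboloidal surface, so it is an approximation of the described method.

```python
import numpy as np

def sad_match_subpixel(image, template):
    """Return the (row, col) of the best match at sub-pixel precision."""
    image = image.astype(float)
    template = template.astype(float)
    h, w = template.shape
    sad = np.empty((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for y in range(sad.shape[0]):
        for x in range(sad.shape[1]):
            sad[y, x] = np.abs(image[y:y + h, x:x + w] - template).sum()
    y0, x0 = np.unravel_index(sad.argmin(), sad.shape)

    def parabola_offset(c_minus, c_center, c_plus):
        # vertex of the parabola through three neighboring SAD values
        denom = c_minus - 2.0 * c_center + c_plus
        return 0.0 if denom == 0 else 0.5 * (c_minus - c_plus) / denom

    dy = parabola_offset(sad[y0 - 1, x0], sad[y0, x0], sad[y0 + 1, x0]) if 0 < y0 < sad.shape[0] - 1 else 0.0
    dx = parabola_offset(sad[y0, x0 - 1], sad[y0, x0], sad[y0, x0 + 1]) if 0 < x0 < sad.shape[1] - 1 else 0.0
    return y0 + dy, x0 + dx
```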

  14. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  15. Efficient Kriging Algorithms

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess

    2011-01-01

    More efficient versions of an interpolation method, called kriging, have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best unbiased linear estimator and is suitable for interpolation of scattered data points. Kriging has long been used in the geostatistics and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in any missing data from any single location. To arrive at the faster algorithms, a sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
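
    A minimal ordinary-kriging sketch with a Gaussian covariance model and a direct dense solve; the accelerations described above (sparse SYMMLQ, covariance tapering, FMM, nearest-neighbor search) are omitted, and the covariance parameters are illustrative.

```python
import numpy as np

def ordinary_kriging(points, values, query, sill=1.0, length=1.0):
    """Interpolate scattered data at one query location."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-(d / length) ** 2)

    n = len(points)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(points, points)
    K[n, n] = 0.0                       # unbiasedness (Lagrange) row/column
    rhs = np.ones(n + 1)
    rhs[:n] = cov(points, query[None, :])[:, 0]
    weights = np.linalg.solve(K, rhs)   # kriging weights plus Lagrange multiplier
    return weights[:n] @ values

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
print(ordinary_kriging(pts, vals, np.array([0.5, 0.5])))   # roughly the mean of the corners
```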

  16. Fighting Censorship with Algorithms

    NASA Astrophysics Data System (ADS)

    Mahdian, Mohammad

    In countries such as China or Iran where Internet censorship is prevalent, users usually rely on proxies or anonymizers to freely access the web. The obvious difficulty with this approach is that once the address of a proxy or an anonymizer is announced for use to the public, the authorities can easily filter all traffic to that address. This poses a challenge as to how proxy addresses can be announced to users without leaking too much information to the censorship authorities. In this paper, we formulate this question as an interesting algorithmic problem. We study this problem in a static and a dynamic model, and give almost tight bounds on the number of proxy servers required to give access to n people, k of whom are adversaries. We will also discuss how trust networks can be used in this context.

  17. Trial encoding algorithms ensemble.

    PubMed

    Cheng, Lipin Bill; Yeh, Ren Jye

    2013-01-01

    This paper proposes trial algorithms for some basic components in cryptography and lossless bit compression. The symmetric encryption is accomplished by mixing up randomizations and scrambling, with hashing of the key playing an essential role. The digital signature is adapted from the Hill cipher, with the verification key matrices incorporating un-invertible parts to hide the signature matrix. The hash is a straight running summation (addition chain) of data bytes plus some randomization; one simplified version can serve as a burst-error-correcting code. The lossless bit compressor is Shannon-Fano coding, which is less optimal than the later Huffman and arithmetic coding but can be conveniently implemented without a tree structure and improved with byte concatenation. PMID:27057475
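
    For reference, a generic textbook Shannon-Fano coder (a sketch of the standard technique, not the authors' byte-concatenation variant): symbols are sorted by weight and recursively split into two groups of roughly equal total weight, appending '0' or '1' at each split.

```python
from collections import Counter

def shannon_fano(weights):
    """weights: dict symbol -> frequency; returns dict symbol -> bit string."""
    codes = {}

    def split(items, prefix):
        if len(items) == 1:
            codes[items[0][0]] = prefix or "0"
            return
        total = sum(w for _, w in items)
        running, cut = 0, 1
        for i, (_, w) in enumerate(items[:-1], start=1):
            running += w
            cut = i
            if running >= total / 2:    # split where the cumulative weight reaches half
                break
        split(items[:cut], prefix + "0")
        split(items[cut:], prefix + "1")

    split(sorted(weights.items(), key=lambda kv: kv[1], reverse=True), "")
    return codes

print(shannon_fano(Counter("abracadabra")))
```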

  18. Multisensor data fusion algorithm development

    SciTech Connect

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
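
    A very small sketch of the wavelet fusion idea using the PyWavelets package (a library assumption; the report's own implementation and the Landsat/SPOT preprocessing are not reproduced): keep the approximation coefficients of the multispectral band and substitute the detail coefficients of the higher-resolution panchromatic image.

```python
import numpy as np
import pywt

def wavelet_fuse(ms_band, pan, wavelet="haar"):
    """Single-level fusion of two co-registered, equally sized images."""
    approx_ms, _ = pywt.dwt2(ms_band, wavelet)     # spectral (low-frequency) content
    _, details_pan = pywt.dwt2(pan, wavelet)       # spatial (high-frequency) detail
    return pywt.idwt2((approx_ms, details_pan), wavelet)

ms = np.random.rand(128, 128)
pan = np.random.rand(128, 128)
print(wavelet_fuse(ms, pan).shape)                 # (128, 128)
```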

  19. Implicit level set algorithms for modelling hydraulic fracture propagation.

    PubMed

    Peirce, A

    2016-10-13

    Hydraulic fractures are tensile cracks that propagate in pre-stressed solid media due to the injection of a viscous fluid. Developing numerical schemes to model the propagation of these fractures is particularly challenging due to the degenerate, hypersingular nature of the coupled integro-partial differential equations. These equations typically involve a singular free boundary whose velocity can only be determined by evaluating a distinguished limit. This review paper describes a class of numerical schemes that have been developed to use the multiscale asymptotic behaviour typically encountered near the fracture boundary as multiple physical processes compete to determine the evolution of the fracture. The fundamental concepts of locating the free boundary using the tip asymptotics and imposing the tip asymptotic behaviour in a weak form are illustrated in two quite different formulations of the governing equations. These formulations are the displacement discontinuity boundary integral method and the extended finite-element method. Practical issues are also discussed, including new models for proppant transport able to capture 'tip screen-out'; efficient numerical schemes to solve the coupled nonlinear equations; and fast methods to solve resulting linear systems. Numerical examples are provided to illustrate the performance of the numerical schemes. We conclude the paper with open questions for further research. This article is part of the themed issue 'Energy and the subsurface'. PMID:27597787

  20. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in the subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis of accuracy or computational efficiency. The prominence of the key signal features required for the proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the
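
    A sketch of the classical MPD loop described above (none of the MPD++ enhancements, such as correlation thresholds, coarse-fine grids, or multiple-atom extraction, is included); dictionary atoms are assumed to be unit-norm rows.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """dictionary: (n_atoms, n_samples) array whose rows are unit-norm atoms."""
    residual = signal.astype(float).copy()
    picks = []
    for _ in range(n_iter):
        correlations = dictionary @ residual           # cross-correlate with every atom
        k = int(np.argmax(np.abs(correlations)))       # best-fit atom
        coeff = correlations[k]
        residual -= coeff * dictionary[k]              # subtract and continue on the residual
        picks.append((k, coeff))
    return picks, residual
```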

  1. ICESat-2 / ATLAS Flight Science Receiver Algorithms

    NASA Astrophysics Data System (ADS)

    Mcgarry, J.; Carabajal, C. C.; Degnan, J. J.; Mallama, A.; Palm, S. P.; Ricklefs, R.; Saba, J. L.

    2013-12-01

    NASA's Advanced Topographic Laser Altimeter System (ATLAS) will be the single instrument on the ICESat-2 spacecraft which is expected to launch in 2016 with a 3 year mission lifetime. The ICESat-2 orbital altitude will be 500 km with a 92 degree inclination and 91-day repeat tracks. ATLAS is a single photon detection system transmitting at 532nm with a laser repetition rate of 10 kHz and a 6 spot pattern on the Earth's surface. Without some method of eliminating solar background noise in near real-time, the volume of ATLAS telemetry would far exceed the normal X-band downlink capability. To reduce the data volume to an acceptable level a set of onboard Receiver Algorithms has been developed. These Algorithms limit the daily data volume by distinguishing surface echoes from the background noise and allow the instrument to telemeter only a small vertical region about the signal. This is accomplished through the use of an onboard Digital Elevation Model (DEM), signal processing techniques, and an onboard relief map. Similar to what was flown on the ATLAS predecessor GLAS (Geoscience Laser Altimeter System) the DEM provides minimum and maximum heights for each 1 degree x 1 degree tile on the Earth. This information allows the onboard algorithm to limit its signal search to the region between minimum and maximum heights (plus some margin for errors). The understanding that the surface echoes will tend to clump while noise will be randomly distributed led us to histogram the received event times. The selection of the signal locations is based on those histogram bins with statistically significant counts. Once the signal location has been established the onboard Digital Relief Map (DRM) is used to determine the vertical width of the telemetry band about the signal. The ATLAS Receiver Algorithms are nearing completion of the development phase and are currently being tested using a Monte Carlo Software Simulator that models the instrument, the orbit and the environment
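
    A toy illustration of the histogram-based signal finding (the flight Receiver Algorithms, DEM/DRM handling, and telemetry-band selection are far more involved): photon heights within the DEM bounds are histogrammed, and bins whose counts are significant against a Poisson background estimate are flagged as candidate surface returns.

```python
import numpy as np

def find_signal_bins(photon_heights_m, h_min_m, h_max_m, bin_m=10.0, n_sigma=3.0):
    """Return centers of height bins with statistically significant photon counts."""
    h = photon_heights_m[(photon_heights_m >= h_min_m) & (photon_heights_m <= h_max_m)]
    edges = np.arange(h_min_m, h_max_m + bin_m, bin_m)
    counts, edges = np.histogram(h, bins=edges)
    background = np.median(counts)                         # robust noise-rate estimate
    threshold = background + n_sigma * np.sqrt(max(background, 1.0))
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[counts > threshold]
```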

  2. Ozone Uncertainties Study Algorithm (OUSA)

    NASA Technical Reports Server (NTRS)

    Bahethi, O. P.

    1982-01-01

    An algorithm to carry out sensitivity, uncertainty, and overall imprecision studies of a set of input parameters for a one-dimensional steady ozone photochemistry model is described. This algorithm can be used to evaluate steady-state perturbations due to point-source or distributed ejection of H2O, CLX, and NOx, besides varying the incident solar flux. This algorithm is operational on the IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).

  3. Ozone Uncertainties Study Algorithm (OUSA)

    NASA Astrophysics Data System (ADS)

    Bahethi, O. P.

    An algorithm to carry out sensitivity, uncertainty, and overall imprecision studies of a set of input parameters for a one-dimensional steady ozone photochemistry model is described. This algorithm can be used to evaluate steady-state perturbations due to point-source or distributed ejection of H2O, CLX, and NOx, besides varying the incident solar flux. This algorithm is operational on the IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).

  4. Convergence behavior of a new DSMC algorithm.

    SciTech Connect

    Gallis, Michail A.; Rader, Daniel John; Torczynski, John Robert; Bird, Graeme A.

    2008-10-01

    The convergence rate of a new direct simulation Monte Carlo (DSMC) method, termed 'sophisticated DSMC', is investigated for one-dimensional Fourier flow. An argon-like hard-sphere gas at 273.15 K and 266.644 Pa is confined between two parallel, fully accommodating walls 1 mm apart that have unequal temperatures. The simulations are performed using a one-dimensional implementation of the sophisticated DSMC algorithm. In harmony with previous work, the primary convergence metric studied is the ratio of the DSMC-calculated thermal conductivity to its corresponding infinite-approximation Chapman-Enskog theoretical value. As discretization errors are reduced, the sophisticated DSMC algorithm is shown to approach the theoretical values to high precision. The convergence behavior of sophisticated DSMC is compared to that of original DSMC. The convergence of the new algorithm in a three-dimensional implementation is also characterized. Implementations using transient adaptive sub-cells and virtual sub-cells are compared. The new algorithm is shown to significantly reduce the computational resources required for a DSMC simulation to achieve a particular level of accuracy, thus improving the efficiency of the method by a factor of 2.

  5. Obstacle Detection Algorithms for Rotorcraft Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia I.; Huang, Ying; Narasimhamurthy, Anand; Pande, Nitin; Ahumada, Albert (Technical Monitor)

    2001-01-01

    In this research we addressed the problem of obstacle detection for low-altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, detecting them early enough for the pilot to take evasive action is difficult, as their images can be less than one or two pixels wide. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer-generated wire images. The performance of the algorithm was evaluated both at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post-processing is performed to remove false alarms due to clutter.

  6. Messy genetic algorithms: Recent developments

    SciTech Connect

    Kargupta, H.

    1996-09-01

    Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier works in messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA)--an O(Λ^κ (ℓ^2 + κ)) sample complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering no higher than order-κ relations) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.

  7. Preconditioned quantum linear system algorithm.

    PubMed

    Clader, B D; Jacobs, B C; Sprouse, C R

    2013-06-21

    We describe a quantum algorithm that generalizes the quantum linear system algorithm [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)] to arbitrary problem specifications. We develop a state preparation routine that can initialize generic states, show how simple ancilla measurements can be used to calculate many quantities of interest, and integrate a quantum-compatible preconditioner that greatly expands the number of problems that can achieve exponential speedup over classical linear systems solvers. To demonstrate the algorithm's applicability, we show how it can be used to compute the electromagnetic scattering cross section of an arbitrary target exponentially faster than the best classical algorithm. PMID:23829722

  8. Variable Selection using MM Algorithms

    PubMed Central

    Hunter, David R.; Li, Runze

    2009-01-01

    Variable selection is fundamental to high-dimensional statistical modeling. Many variable selection techniques may be implemented by maximum penalized likelihood using various penalty functions. Optimizing the penalized likelihood function is often challenging because it may be nondifferentiable and/or nonconcave. This article proposes a new class of algorithms for finding a maximizer of the penalized likelihood for a broad class of penalty functions. These algorithms operate by perturbing the penalty function slightly to render it differentiable, then optimizing this differentiable function using a minorize-maximize (MM) algorithm. MM algorithms are useful extensions of the well-known class of EM algorithms, a fact that allows us to analyze the local and global convergence of the proposed algorithm using some of the techniques employed for EM algorithms. In particular, we prove that when our MM algorithms converge, they must converge to a desirable point; we also discuss conditions under which this convergence may be guaranteed. We exploit the Newton-Raphson-like aspect of these algorithms to propose a sandwich estimator for the standard errors of the estimators. Our method performs well in numerical tests. PMID:19458786

  9. Simple-random-sampling-based multiclass text classification algorithm.

    PubMed

    Liu, Wuying; Wang, Lin; Yi, Mianzhu

    2014-01-01

    Multiclass text classification (MTC) is a challenging issue and the corresponding MTC algorithms can be used in many applications. The space-time overhead of such algorithms is a concern in the era of big data. Through the investigation of the token frequency distribution in a Chinese web document collection, this paper reexamines the power law and proposes a simple-random-sampling-based MTC (SRSMTC) algorithm. Supported by a token-level memory to store labeled documents, the SRSMTC algorithm uses a text retrieval approach to solve text classification problems. The experimental results on the TanCorp data set show that the SRSMTC algorithm can achieve state-of-the-art performance at greatly reduced space-time requirements. PMID:24778587

  10. A Simplified Pattern Match Algorithm for Star Identification

    NASA Technical Reports Server (NTRS)

    Lee, Michael H.

    1996-01-01

    A true pattern-matching star identification algorithm, similar in concept to the Van Bezooijen algorithm, is implemented using an iterative approach. This approach allows for a more compact and simple implementation which can easily be adapted to be either an all-sky, no a priori algorithm or a follow-on to a direct match algorithm to distinguish between ambiguous matches. Some simple analysis is shown to indicate the likelihood of mis-identifications. The performance of the algorithm for the all-sky, no a priori situation is detailed assuming the SKYMAP star catalog describes the true sky. The impact of errors and omissions in the SKYMAP catalog on performance is investigated. In addition, differing levels of noise in the star observations are assumed and results shown. The implications for possible implementation on-board spacecraft are discussed.

  11. Simple-Random-Sampling-Based Multiclass Text Classification Algorithm

    PubMed Central

    Liu, Wuying; Wang, Lin; Yi, Mianzhu

    2014-01-01

    Multiclass text classification (MTC) is a challenging issue and the corresponding MTC algorithms can be used in many applications. The space-time overhead of such algorithms is a concern in the era of big data. Through the investigation of the token frequency distribution in a Chinese web document collection, this paper reexamines the power law and proposes a simple-random-sampling-based MTC (SRSMTC) algorithm. Supported by a token-level memory to store labeled documents, the SRSMTC algorithm uses a text retrieval approach to solve text classification problems. The experimental results on the TanCorp data set show that the SRSMTC algorithm can achieve state-of-the-art performance at greatly reduced space-time requirements. PMID:24778587

  12. Research on Routing Selection Algorithm Based on Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna

    The genetic algorithm is a random search and optimization method based on natural selection and the hereditary mechanism of living beings. In recent years, because of its potential for solving complicated problems and its successful application in industrial projects, the genetic algorithm has received wide attention from domestic and international scholars. Routing selection communication has been defined as a standard communication model of IP version 6. This paper proposes a service model of routing selection communication, and designs and implements a new routing selection algorithm based on a genetic algorithm. The experimental simulation results show that this algorithm can obtain better solutions in less time and with a more balanced network load, which enhances the search ratio and the availability of network resources, and improves the quality of service.

  13. Algorithm for Autonomous Landing

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2011-01-01

    Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and it can be used to avoid obstacles as well as to facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with the height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.

  14. Advanced software algorithms

    SciTech Connect

    Berry, K.; Dayton, S.

    1996-10-28

    Citibank was using a data collection system to create a one-time-only mailing history on prospective credit card customers; the system was becoming dated in its time-to-market requirements and as such was in need of performance improvements. To compound problems with the existing system, assurance of the quality of the data matching process was manpower intensive and needed to be automated. Analysis, design, and prototyping capabilities involving information technology were areas of expertise provided by the DOE-LMES Data Systems Research and Development (DSRD) program. The goal of this project was for DSRD to analyze the current Citibank credit card offering system and to suggest and prototype technology improvements that would result in faster processing with quality as good as the current system. Technologies investigated include: a high-speed network of reduced instruction set computing (RISC) processors for loosely coupled parallel processing; tightly coupled, high-performance parallel processing; higher-order computer languages such as C; fuzzy matching algorithms applied to very large data files; relational database management systems; and advanced programming techniques.

  15. A new algorithm for 3D reconstruction from support functions.

    PubMed

    Gardner, Richard J; Kiderlen, Markus

    2009-03-01

    We introduce a new algorithm for reconstructing an unknown shape from a finite number of noisy measurements of its support function. The algorithm, based on a least squares procedure, is very easy to program in standard software such as Matlab, and it works for both 2D and 3D reconstructions (in fact, in principle, in any dimension). Reconstructions may be obtained without any pre- or post-processing steps and with no restriction on the sets of measurement directions except their number, a limitation dictated only by computing time. An algorithm due to Prince and Willsky was implemented earlier for 2D reconstructions, and we compare the performance of their algorithm and ours. But our algorithm is the first that works for 3D reconstructions with the freedom stated in the previous paragraph. Moreover, under mild conditions, theory guarantees that outputs of the new algorithm will converge to the input shape as the number of measurements increases. In addition we offer a linear program version of the new algorithm that is much faster and better, or at least comparable, in performance at low levels of noise and reasonably small numbers of measurements. Another modification of the algorithm, suitable for use in a "focus of attention" scheme, is also described. PMID:19147881

  16. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential and important process that separates an image into regions with similar characteristics or features, transforming the image for better analysis and evaluation. An important benefit of segmentation is the identification of the region of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport and medical images. It scans all pixels in the image and clusters each pixel according to the upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm is performed by merging pixels with similar neighbors based on an identified threshold. Such an approach leads to weak reliability and poor shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. This function uses the gray value of the image's pixels and their variance; pixel levels above the threshold are converted into intensity values between 0 and 1, and other values are converted to an intensity value of zero. The proposed enhanced Fast Scanning algorithm is applied to images of public and private transportation in Iraq. Evaluation is then made by comparing the images produced by the proposed algorithm and the standard Fast Scanning algorithm. The results show that the proposed algorithm is faster than the standard Fast Scanning algorithm.

  17. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  18. Cascade Error Projection Learning Algorithm

    NASA Technical Reports Server (NTRS)

    Duong, T. A.; Stubberud, A. R.; Daud, T.

    1995-01-01

    A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.

  19. Efficient scalable algorithms for hierarchically semiseparable matrices

    SciTech Connect

    Wang, Shen; Xia, Jianlin; Situ, Yingchong; Hoop, Maarten V. de

    2011-09-14

    Hierarchically semiseparable (HSS) matrix algorithms are emerging techniques in constructing superfast direct solvers for both dense and sparse linear systems. Here, we develop a set of novel parallel algorithms for the key HSS operations that are used for solving large linear systems. These include the parallel rank-revealing QR factorization, the HSS constructions with hierarchical compression, the ULV HSS factorization, and the HSS solutions. The HSS-tree-based parallelism is fully exploited at the coarse level. The BLACS and ScaLAPACK libraries are used to facilitate the parallel dense kernel operations at the fine-grained level. We have applied our new parallel HSS-embedded multifrontal solver to the anisotropic Helmholtz equations for seismic imaging, and were able to solve a linear system with 6.4 billion unknowns using 4096 processors, in about 20 minutes. The classical multifrontal solver simply failed due to the high demand on memory. To our knowledge, this is the first successful demonstration of employing the HSS algorithms in solving truly large-scale real-world problems. Our parallel strategies can be easily adapted to the parallelization of other rank structured methods.

  20. An adaptive mesh refinement algorithm for the discrete ordinates method

    SciTech Connect

    Jessee, J.P.; Fiveland, W.A.; Howell, L.H.; Colella, P.; Pember, R.B.

    1996-03-01

    The discrete ordinates form of the radiative transport equation (RTE) is spatially discretized and solved using an adaptive mesh refinement (AMR) algorithm. This technique permits the local grid refinement to minimize spatial discretization error of the RTE. An error estimator is applied to define regions for local grid refinement; overlapping refined grids are recursively placed in these regions; and the RTE is then solved over the entire domain. The procedure continues until the spatial discretization error has been reduced to a sufficient level. The following aspects of the algorithm are discussed: error estimation, grid generation, communication between refined levels, and solution sequencing. This initial formulation employs the step scheme, and is valid for absorbing and isotropically scattering media in two-dimensional enclosures. The utility of the algorithm is tested by comparing the convergence characteristics and accuracy to those of the standard single-grid algorithm for several benchmark cases. The AMR algorithm provides a reduction in memory requirements and maintains the convergence characteristics of the standard single-grid algorithm; however, the cases illustrate that efficiency gains of the AMR algorithm will not be fully realized until three-dimensional geometries are considered.

  1. Efficient iterative image reconstruction algorithm for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan

    2016-03-01

    Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications and that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method come from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both of the images result from few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters, both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at University of California, Davis.

  2. The Chopthin Algorithm for Resampling

    NASA Astrophysics Data System (ADS)

    Gandy, Axel; Lau, F. Din-Houn

    2016-08-01

    Resampling is a standard step in particle filters and more generally in sequential Monte Carlo methods. We present an algorithm, called chopthin, for resampling weighted particles. In contrast to standard resampling methods the algorithm does not produce a set of equally weighted particles; instead it merely enforces an upper bound on the ratio between the weights. Simulation studies show that the chopthin algorithm consistently outperforms standard resampling methods. The algorithm chops up particles with large weight and thins out particles with low weight, hence its name. It implicitly guarantees a lower bound on the effective sample size. The algorithm can be implemented efficiently, making it practically useful. We show that the expected computational effort is linear in the number of particles. Implementations for C++, R (on CRAN), Python and Matlab are available.

  3. CORDIC algorithms in four dimensions

    NASA Astrophysics Data System (ADS)

    Delosme, Jean-Marc; Hsiao, Shen-Fu

    1990-11-01

    CORDIC algorithms offer an attractive alternative to multiply-and-add based algorithms for the implementation of two-dimensional rotations preserving either norm: (x^2 + y^2)^(1/2) or (x^2 - y^2)^(1/2). Indeed, these norms, whose computation is a significant part of the evaluation of the two-dimensional rotations, are computed much more easily by the CORDIC algorithms. However, the part played by norm computations in the evaluation of rotations becomes quickly small as the dimension of the space increases. Thus, in spaces of dimension 5 or more there is no practical alternative to multiply-and-add based algorithms. In the intermediate region, dimensions 3 and 4, extensions of the CORDIC algorithms are an interesting option. The four-dimensional extensions are particularly elegant and are the main object of this paper.
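
    For reference, a plain two-dimensional circular-mode CORDIC rotation (the standard building block; the three- and four-dimensional extensions discussed in the paper are not reproduced here). The shift-and-add structure is emulated with floating-point powers of two.

```python
import math

def cordic_rotate(x, y, angle, iterations=32):
    """Rotate (x, y) by `angle` radians, valid for |angle| < ~1.74 rad."""
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = 1.0
    for i in range(iterations):
        gain *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))     # total scale correction
    for i, a in enumerate(atans):
        d = 1.0 if angle >= 0.0 else -1.0                  # rotate toward zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        angle -= d * a
    return x * gain, y * gain

print(cordic_rotate(1.0, 0.0, math.pi / 4))                # approximately (0.7071, 0.7071)
```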

  4. Cubit Adaptive Meshing Algorithm Library

    2004-09-01

    CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL's triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia's patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.

  5. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991 the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, which success ratio would have been achieved in 53% of random trials with the null hypothesis.

  6. An Artificial Immune Univariate Marginal Distribution Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Qingbin; Kang, Shuo; Gao, Junxiang; Wu, Song; Tian, Yanping

    Hybridization is an extremely effective way of improving the performance of the Univariate Marginal Distribution Algorithm (UMDA). Owing to its diversity and memory mechanisms, the artificial immune algorithm has been widely used to construct hybrid algorithms with other optimization algorithms. This paper proposes a hybrid algorithm which combines the UMDA with the principle of the general artificial immune algorithm. Experimental results on a deceptive function of order 3 show that the proposed hybrid algorithm can find more building blocks (BBs) than the UMDA.

  7. Quantum Image Encryption Algorithm Based on Quantum Image XOR Operations

    NASA Astrophysics Data System (ADS)

    Gong, Li-Hua; He, Xiang-Tao; Cheng, Shan; Hua, Tian-Xiang; Zhou, Nan-Run

    2016-03-01

    A novel encryption algorithm for quantum images based on quantum image XOR operations is designed. The quantum image XOR operations are designed by using the hyper-chaotic sequences generated with the Chen's hyper-chaotic system to control the control-NOT operation, which is used to encode gray-level information. The initial conditions of the Chen's hyper-chaotic system are the keys, which guarantee the security of the proposed quantum image encryption algorithm. Numerical simulations and theoretical analyses demonstrate that the proposed quantum image encryption algorithm has larger key space, higher key sensitivity, stronger resistance of statistical analysis and lower computational complexity than its classical counterparts.

  8. Data-adaptive algorithms for calling alleles in repeat polymorphisms.

    PubMed

    Stoughton, R; Bumgarner, R; Frederick, W J; McIndoe, R A

    1997-01-01

    Data-adaptive algorithms are presented for separating overlapping signatures of heterozygotic allele pairs in electrophoresis data. Application is demonstrated for human microsatellite CA-repeat polymorphisms in LiCor 4000 and ABI 373 data. The algorithms allow overlapping alleles to be called correctly in almost every case where a trained observer could do so, and provide a fast automated objective alternative to human reading of the gels. The algorithm also supplies an indication of confidence level which can be used to flag marginal cases for verification by eye, or as input to later stages of statistical analysis. PMID:9059812

  9. Rate control algorithm based on frame complexity estimation for MVC

    NASA Astrophysics Data System (ADS)

    Yan, Tao; An, Ping; Shen, Liquan; Zhang, Zhaoyang

    2010-07-01

    Rate control has not been well studied for multi-view video coding (MVC). In this paper, we propose an efficient rate control algorithm for MVC by improving the quadratic rate-distortion (R-D) model; the algorithm reasonably allocates bit-rate among views based on correlation analysis. The proposed algorithm consists of four levels to control the bit rate more accurately, of which the frame layer allocates bits according to frame complexity and temporal activity. Extensive experiments show that the proposed algorithm can efficiently implement bit allocation and rate control according to coding parameters.

  10. Quantum Image Encryption Algorithm Based on Quantum Image XOR Operations

    NASA Astrophysics Data System (ADS)

    Gong, Li-Hua; He, Xiang-Tao; Cheng, Shan; Hua, Tian-Xiang; Zhou, Nan-Run

    2016-07-01

    A novel encryption algorithm for quantum images based on quantum image XOR operations is designed. The quantum image XOR operations are designed by using the hyper-chaotic sequences generated with the Chen's hyper-chaotic system to control the control-NOT operation, which is used to encode gray-level information. The initial conditions of the Chen's hyper-chaotic system are the keys, which guarantee the security of the proposed quantum image encryption algorithm. Numerical simulations and theoretical analyses demonstrate that the proposed quantum image encryption algorithm has larger key space, higher key sensitivity, stronger resistance of statistical analysis and lower computational complexity than its classical counterparts.

  11. Comparative study of texture detection and classification algorithms

    NASA Astrophysics Data System (ADS)

    Koltsov, P. P.

    2011-08-01

    A description and results of application of the computer system PETRA (performance evaluation of texture recognition algorithms) are given. This system is designed for the comparative study of texture analysis algorithms; it includes a database of textured images and a collection of software implementations of texture analysis algorithms. The functional capabilities of the system are illustrated using texture classification examples. Test examples are taken from the Brodatz album, MeasTech database, and a set of aerospace images. Results of a comparative evaluation of five well-known texture analysis methods are described—Gabor filters, Laws masks, ring/wedge filters, gray-level cooccurrence matrices (GLCMs), and autoregression image model.

  12. Acquisition and track algorithms for the Astros star tracker

    NASA Technical Reports Server (NTRS)

    Shalom, E.; Alexander, J. W.; Stanton, R. H.

    1985-01-01

    The Astros star tracker has been designed for use with the Space Shuttle. Achieving the needed performance levels has required critical trade-offs between the hardware design and the control algorithms. This paper describes the development of the acquisition and track algorithms. Attention is given to an Astros system overview, a system firmware description, cluster evaluation, guide star selection, exposure time determination, video data input, update interval timing, exposure time sequencing, full-frame video A/D conversion, analog threshold for acquisition, minimum threshold determination, and the theoretical basis for the track algorithm.

  13. Decision Making Algorithm for Adult Spinal Deformity Surgery

    PubMed Central

    Kim, Yongjung J.; Cheh, Gene; Cho, Samuel K.; Rhim, Seung-Chul

    2016-01-01

    Adult spinal deformity (ASD) is one of the most challenging spinal disorders, associated with a broad range of clinical and radiological presentations. Correct selection of fusion levels in surgical planning for the management of adult spinal deformity is a complex task. Several classification systems and algorithms exist to assist surgeons in determining the appropriate levels to be instrumented. In this study, we describe our new simple decision making algorithm and selection of fusion level for ASD surgery in terms of adult idiopathic scoliosis vs. degenerative scoliosis. PMID:27446511

  14. Decision Making Algorithm for Adult Spinal Deformity Surgery.

    PubMed

    Kim, Yongjung J; Hyun, Seung-Jae; Cheh, Gene; Cho, Samuel K; Rhim, Seung-Chul

    2016-07-01

    Adult spinal deformity (ASD) is one of the most challenging spinal disorders, associated with a broad range of clinical and radiological presentations. Correct selection of fusion levels in surgical planning for the management of adult spinal deformity is a complex task. Several classification systems and algorithms exist to assist surgeons in determining the appropriate levels to be instrumented. In this study, we describe our new simple decision making algorithm and selection of fusion level for ASD surgery in terms of adult idiopathic scoliosis vs. degenerative scoliosis. PMID:27446511

  15. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  16. Transitionless driving on adiabatic search algorithm

    SciTech Connect

    Oh, Sangchul; Kais, Sabre

    2014-12-14

    We study quantum dynamics of the adiabatic search algorithm with the equivalent two-level system. Its adiabatic and non-adiabatic evolution is studied and visualized as trajectories of Bloch vectors on a Bloch sphere. We find the change in the non-adiabatic transition probability from exponential decay for the short running time to inverse-square decay in asymptotic running time. The scaling of the critical running time is expressed in terms of the Lambert W function. We derive the transitionless driving Hamiltonian for the adiabatic search algorithm, which makes a quantum state follow the adiabatic path. We demonstrate that a uniform transitionless driving Hamiltonian, approximate to the exact time-dependent driving Hamiltonian, can alter the non-adiabatic transition probability from the inverse square decay to the inverse fourth power decay with the running time. This may open up a new but simple way of speeding up adiabatic quantum dynamics.

  17. Transitionless driving on adiabatic search algorithm

    NASA Astrophysics Data System (ADS)

    Oh, Sangchul; Kais, Sabre

    2014-12-01

    We study quantum dynamics of the adiabatic search algorithm with the equivalent two-level system. Its adiabatic and non-adiabatic evolution is studied and visualized as trajectories of Bloch vectors on a Bloch sphere. We find the change in the non-adiabatic transition probability from exponential decay for the short running time to inverse-square decay in asymptotic running time. The scaling of the critical running time is expressed in terms of the Lambert W function. We derive the transitionless driving Hamiltonian for the adiabatic search algorithm, which makes a quantum state follow the adiabatic path. We demonstrate that a uniform transitionless driving Hamiltonian, approximate to the exact time-dependent driving Hamiltonian, can alter the non-adiabatic transition probability from the inverse square decay to the inverse fourth power decay with the running time. This may open up a new but simple way of speeding up adiabatic quantum dynamics.

  18. Transitionless driving on adiabatic search algorithm.

    PubMed

    Oh, Sangchul; Kais, Sabre

    2014-12-14

    We study quantum dynamics of the adiabatic search algorithm with the equivalent two-level system. Its adiabatic and non-adiabatic evolution is studied and visualized as trajectories of Bloch vectors on a Bloch sphere. We find the change in the non-adiabatic transition probability from exponential decay for the short running time to inverse-square decay in asymptotic running time. The scaling of the critical running time is expressed in terms of the Lambert W function. We derive the transitionless driving Hamiltonian for the adiabatic search algorithm, which makes a quantum state follow the adiabatic path. We demonstrate that a uniform transitionless driving Hamiltonian, approximate to the exact time-dependent driving Hamiltonian, can alter the non-adiabatic transition probability from the inverse square decay to the inverse fourth power decay with the running time. This may open up a new but simple way of speeding up adiabatic quantum dynamics. PMID:25494733

  19. Pure field theories and MACSYMA algorithms

    NASA Technical Reports Server (NTRS)

    Ament, W. S.

    1977-01-01

    A pure field theory attempts to describe physical phenomena through singularity-free solutions of field equations resulting from an action principle. The physics goes into forming the action principle and interpreting specific results. Algorithms for the intervening mathematical steps are sketched. Vacuum general relativity is a pure field theory, serving as model and providing checks for generalizations. The fields of general relativity are the 10 components of a symmetric Riemannian metric tensor; those of the Einstein-Straus generalization are the 16 components of a nonsymmetric. Algebraic properties are exploited in top level MACSYMA commands toward performing some of the algorithms of that generalization. The light cone for the theory as left by Einstein and Straus is found and simplifications of that theory are discussed.

  20. Genetic algorithm for disassembly process planning

    NASA Astrophysics Data System (ADS)

    Kongar, Elif; Gupta, Surendra M.

    2002-02-01

    When a product reaches its end of life, there are several options available for processing it including reuse, remanufacturing, recycling, and disposing (the least desirable option). In almost all cases, a certain level of disassembly may be necessary. Thus, finding an optimal (or near optimal) disassembly sequence is crucial to increasing the efficiency of the process. Disassembly operations are labor intensive, can be costly, have unique characteristics and cannot be considered as reverse of assembly operations. Since the complexity of determining the best disassembly sequence increases with the increase in the number of parts of the product, it is extremely crucial that an efficient methodology for disassembly process planning be developed. In this paper, we present a genetic algorithm for disassembly process planning. A case example is considered to demonstrate the functionality of the algorithm.
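
    A compact permutation-based GA sketch for sequencing under precedence constraints; the parts list, precedence data, fitness penalty, and GA settings below are illustrative assumptions rather than the paper's case example.

```python
import random

PARTS = list(range(6))
PRECEDENCE = [(0, 2), (1, 2), (2, 4), (3, 5)]        # (a, b): part a must be removed before part b

def fitness(seq):
    pos = {p: i for i, p in enumerate(seq)}
    return -sum(1 for a, b in PRECEDENCE if pos[a] > pos[b])   # fewer violations is better

def order_crossover(p1, p2):
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child]          # remaining genes in parent-2 order
    for i, gene in enumerate(child):
        if gene is None:
            child[i] = fill.pop(0)
    return child

def mutate(seq, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]
    return seq

population = [random.sample(PARTS, len(PARTS)) for _ in range(30)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    offspring = [mutate(order_crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(20)]
    population = parents + offspring
best = max(population, key=fitness)
print(best, fitness(best))
```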

  1. Algorithms for bilevel optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.

  2. A distributed Canny edge detector: algorithm and FPGA implementation.

    PubMed

    Xu, Qian; Varadarajan, Srenivas; Chakrabarti, Chaitali; Karam, Lina J

    2014-07-01

    The Canny edge detector is one of the most widely used edge detection algorithms due to its superior performance. Unfortunately, not only is it computationally more intensive as compared with other edge detection algorithms, but it also has a higher latency because it is based on frame-level statistics. In this paper, we propose a mechanism to implement the Canny algorithm at the block level without any loss in edge detection performance compared with the original frame-level Canny algorithm. Directly applying the original Canny algorithm at the block-level leads to excessive edges in smooth regions and to loss of significant edges in high-detailed regions since the original Canny computes the high and low thresholds based on the frame-level statistics. To solve this problem, we present a distributed Canny edge detection algorithm that adaptively computes the edge detection thresholds based on the block type and the local distribution of the gradients in the image block. In addition, the new algorithm uses a nonuniform gradient magnitude histogram to compute block-based hysteresis thresholds. The resulting block-based algorithm has a significantly reduced latency and can be easily integrated with other block-based image codecs. It is capable of supporting fast edge detection of images and videos with high resolutions, including full-HD since the latency is now a function of the block size instead of the frame size. In addition, quantitative conformance evaluations and subjective tests show that the edge detection performance of the proposed algorithm is better than the original frame-based algorithm, especially when noise is present in the images. Finally, this algorithm is implemented using a 32 computing engine architecture and is synthesized on the Xilinx Virtex-5 FPGA. The synthesized architecture takes only 0.721 ms (including the SRAM READ/WRITE time and the computation time) to detect edges of 512 × 512 images in the USC SIPI database when clocked at 100
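
    A sketch of the block-adaptive thresholding idea: gradient magnitudes are computed once, and each block derives its own hysteresis pair from its local gradient statistics, with near-uniform blocks suppressed to avoid spurious edges in smooth regions. The percentile rule, the smooth-block test, and the low/high ratio below are illustrative assumptions; the paper's classification by block type and its nonuniform gradient-magnitude histogram are not reproduced.

      import numpy as np

      def block_thresholds(image, block=64, high_pct=80.0, low_ratio=0.4):
          """Derive per-block hysteresis thresholds from local gradient statistics."""
          gy, gx = np.gradient(image.astype(float))
          mag = np.hypot(gx, gy)
          h, w = image.shape
          thresholds = {}
          for r in range(0, h, block):
              for c in range(0, w, block):
                  m = mag[r:r + block, c:c + block]
                  if m.std() < 0.05 * (m.mean() + 1e-9):   # nearly uniform block: treat as smooth
                      hi = np.inf                           # suppress edges in flat regions
                  else:
                      hi = np.percentile(m, high_pct)       # block-local high threshold
                  thresholds[(r, c)] = (low_ratio * hi, hi)
          return mag, thresholds

      # usage: mag, thr = block_thresholds(gray_image); feed each block's (low, high)
      # pair into a standard non-maximum-suppression + hysteresis stage.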

  3. The Dropout Learning Algorithm

    PubMed Central

    Baldi, Pierre; Sadowski, Peter

    2014-01-01

    Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions and the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
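
    The basic mechanism is easy to state in code. The sketch below uses the common "inverted dropout" convention (rescaling by 1/(1-p) at training time so the layer is used unchanged at test time); that rescaling is an implementation choice assumed here, while the paper analyzes the plain Bernoulli-gating form.

      import numpy as np

      rng = np.random.default_rng(0)

      def dropout(activations, p=0.5, train=True):
          """Inverted dropout: keep each unit with probability 1 - p (Bernoulli
          gating) and rescale by 1/(1 - p) so the expected activation matches
          the unmodified test-time layer."""
          if not train or p == 0.0:
              return activations
          mask = rng.random(activations.shape) >= p
          return activations * mask / (1.0 - p)

      # usage: apply to a toy hidden layer
      h = np.tanh(rng.standard_normal((4, 8)))
      h_train = dropout(h, p=0.5, train=True)
      h_test = dropout(h, p=0.5, train=False)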

  4. Wavelet periodicity detection algorithms

    NASA Astrophysics Data System (ADS)

    Benedetto, John J.; Pfander, Goetz E.

    1998-10-01

    This paper deals with the analysis of time series with respect to certain known periodicities. In particular, we shall present a fast method aimed at detecting periodic behavior inherent in noisy data. The method is composed of three steps: (1) Non-noisy data are analyzed through spectral and wavelet methods to extract specific periodic patterns of interest. (2) Using these patterns, we construct an optimal piecewise constant wavelet designed to detect the underlying periodicities. (3) We introduce a fast discretized version of the continuous wavelet transform, as well as waveletgram averaging techniques, to detect occurrence and period of these periodicities. The algorithm is formulated to provide real time implementation. Our procedure is generally applicable to detect locally periodic components in signals s which can be modeled as s(t) = A(t)F(h(t)) + N(t) for t in I, where F is a periodic signal, A is a non-negative slowly varying function, h is strictly increasing with h' slowly varying, and N denotes background activity. For example, the method can be applied in the context of epileptic seizure detection. In this case, we try to detect seizure periodicities in EEG and ECoG data. In the case of ECoG data, N is essentially 1/f noise. In the case of EEG data and for t in I, N includes noise due to cranial geometry and densities. In both cases N also includes standard low frequency rhythms. Periodicity detection has other applications including ocean wave prediction, cockpit motion sickness prediction, and minefield detection.

  5. Scheduling with genetic algorithms

    NASA Technical Reports Server (NTRS)

    Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.

    1994-01-01

    In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application-specific measure of solution fitness, e.g., minimum flowtime or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution, while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs: for a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements and personnel, tooling, and fixture (or, more generally, resource) requirements.

  6. Totally parallel multilevel algorithms for sparse elliptic systems

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1989-01-01

    The fastest known algorithms for the solution of a large elliptic boundary value problem on a massively parallel hypercube all require O(log(n)) floating point operations and O(log(n)) distance-1 communications, if massively parallel is defined to mean a number of processors proportional to the size n of the problem. The Totally Parallel Multilevel Algorithm (TPMA) that has, as special cases, four of these fast algorithms is described. These four algorithms are Parallel Superconvergent Multigrid (PSMG), Robust Multigrid, the Fast Fourier Transform (FFT) based Spectral Algorithm, and Parallel Cyclic Reduction. The algorithm TPMA, when described recursively, has four steps: (1) project to a collection of interlaced, coarser problems at the next lower level; (2) apply TPMA, recursively, to each of these lower level problems, solving directly at the lowest level; (3) interpolate these approximate solutions to the finer grid, and average them to form an approximate solution on this grid; and (4) refine this approximate solution with a defect-correction step, using a local approximate inverse. Choice of the projection operator (P), the interpolation operator (Q), and the smoother (S) determines the class of problems on which TPMA is most effective. There are special cases in which the first three steps produce an exact solution, and the smoother is not needed (e.g., constant coefficient operators).

  7. Dual-mode type algorithms for blind equalization

    NASA Astrophysics Data System (ADS)

    Weerackody, Vijitha; Kassam, Saleem A.

    1994-01-01

    Adaptive channel equalization accomplished without resorting to a training sequence is known as blind equalization. The Godard algorithm and the generalized Sato algorithm are two widely referenced algorithms for blind equalization of a QAM system. These algorithms exhibit very slow convergence rates when compared to algorithms employed in conventional data-aided equalization schemes. In order to speed up the convergence process, these algorithms may be switched over to a decision-directed equalization scheme once the error level is reasonably low. We present a scheme which is capable of operating in two modes: blind equalization mode and a mode similar to the decision-directed equalization mode. In this proposed scheme, the dominant mode of operation changes from the blind equalization mode at higher error levels to the mode similar to the decision-directed equalization mode at lower error levels. Manual switch-over to the decision-directed mode from the blind equalization mode, or vice-versa, is not necessary since transitions between the two modes take place smoothly and automatically.
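
    A compact sketch of the two-mode idea for QPSK: a constant-modulus (Godard, p = 2) update drives the taps while the slicer error is large, and the update switches to decision-directed LMS once the error drops below a threshold, so transitions happen automatically in both directions. The per-sample switching rule, step sizes, and tap count below are illustrative assumptions rather than the paper's exact dual-mode scheme.

      import numpy as np

      def dual_mode_equalizer(x, taps=11, mu_cma=1e-3, mu_dd=1e-2, switch_err=0.3):
          """Blind equalizer sketch for QPSK symbols (±1 ± 1j)."""
          R2 = 2.0                                  # Godard dispersion constant for this constellation
          w = np.zeros(taps, dtype=complex)
          w[taps // 2] = 1.0                        # center-spike initialization
          y_out = np.zeros(len(x), dtype=complex)
          for n in range(taps, len(x)):
              u = x[n - taps:n][::-1]               # regressor, most recent sample first
              y = np.dot(w, u)
              y_out[n] = y
              a_hat = np.sign(y.real) + 1j * np.sign(y.imag)     # QPSK slicer
              if abs(y - a_hat) < switch_err:       # low error: decision-directed mode
                  w = w - mu_dd * (y - a_hat) * np.conj(u)
              else:                                 # high error: blind (CMA) mode
                  w = w - mu_cma * y * (abs(y) ** 2 - R2) * np.conj(u)
          return y_out, w

      # usage: y_eq, w_final = dual_mode_equalizer(received_samples)  # received_samples: 1-D complex array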

  8. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  9. The acoustic and perceptual effects of two noise-suppression algorithms.

    PubMed

    Zakis, Justin A; Wise, Christi

    2007-01-01

    Internal noise generated by hearing-aid circuits can be audible and objectionable to aid users, and may lead to the rejection of hearing aids. Two expansion algorithms were developed to suppress internal noise below a threshold level. The multiple-channel algorithm's expansion thresholds followed the 55-dB SPL long-term average speech spectrum, while the single-channel algorithm suppressed sounds below 45 dBA. With the recommended settings in static conditions, the single-channel algorithm provided lower noise levels, which were perceived as quieter by most normal-hearing participants. However, in dynamic conditions "pumping" noises were more noticeable with the single-channel algorithm. For impaired-hearing listeners fitted with the ADRO amplification strategy, both algorithms maintained speech understanding for words in sentences presented at 55 dB SPL in quiet (99.3% correct). Mean sentence reception thresholds in quiet were 39.4, 40.7, and 41.8 dB SPL without noise suppression, and with the single- and multiple-channel algorithms, respectively. The increase in the sentence reception threshold was statistically significant for the multiple-channel algorithm, but not the single-channel algorithm. Thus, both algorithms suppressed noise without affecting the intelligibility of speech presented at 55 dB SPL, with the single-channel algorithm providing marginally greater noise suppression in static conditions, and the multiple-channel algorithm avoiding pumping noises. PMID:17297798

  10. Conservative Patch Algorithm and Mesh Sequencing for PAB3D

    NASA Technical Reports Server (NTRS)

    Pao, S. P.; Abdol-Hamid, K. S.

    2005-01-01

    A mesh-sequencing algorithm and a conservative patched-grid-interface algorithm (hereafter "patch algorithm") have been incorporated into the PAB3D code, which is a computer program that solves the Navier-Stokes equations for the simulation of subsonic, transonic, or supersonic flows surrounding an aircraft or other complex aerodynamic shapes. These algorithms are efficient, flexible, and have added tremendously to the capabilities of PAB3D. The mesh-sequencing algorithm makes it possible to perform preliminary computations using only a fraction of the grid cells (provided the original cell count is divisible by an integer) along any grid coordinate axis, independently of the other axes. The patch algorithm addresses another critical need in multi-block grid situations where the cell faces of adjacent grid blocks may not coincide, leading to errors in calculating fluxes of conserved physical quantities across interfaces between the blocks. The patch algorithm, based on the Stokes integral formulation of the applicable conservation laws, effectively matches each of the interfacial cells on one side of the block interface to the corresponding fractional cell-area pieces on the other side. This approach is comprehensive and unified such that all interface topology is automatically processed without user intervention. This algorithm is implemented in a preprocessing code that creates a cell-by-cell database that will maintain flux conservation at any level of full or reduced grid density, as the user may choose by way of the mesh-sequencing algorithm. These two algorithms have enhanced the numerical accuracy of the code, reduced the time and effort required for grid preprocessing, and given users the flexibility to perform computations at any desired full or reduced grid resolution to suit their specific computational requirements.

  11. A fast algorithm for sparse matrix computations related to inversion

    NASA Astrophysics Data System (ADS)

    Li, S.; Wu, W.; Darve, E.

    2013-06-01

    We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green's functions Gr and G< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices — up to twelve-fold in simulation. The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely-used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate, and our produced round-off errors

  12. Efficient implementation of the adaptive scale pixel decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Bhatnagar, S.; Rau, U.; Zhang, M.

    2016-08-01

    Context. Most popular algorithms in use to remove the effects of a telescope's point spread function (PSF) in radio astronomy are variants of the CLEAN algorithm. Most of these algorithms model the sky brightness using the delta-function basis, which results in undesired artefacts when used to image extended emission. The adaptive scale pixel decomposition (Asp-Clean) algorithm models the sky brightness on a scale-sensitive basis and thus gives a significantly better imaging performance when imaging fields that contain both resolved and unresolved emission. Aims: However, the runtime cost of Asp-Clean is higher than that of scale-insensitive algorithms. In this paper, we identify the most expensive step in the original Asp-Clean algorithm and present an efficient implementation of it, which significantly reduces the computational cost while keeping the imaging performance comparable to the original algorithm. The PSF sidelobe levels of modern wide-band telescopes are significantly reduced, allowing us to make approximations to reduce the computational cost, which in turn allows for the deconvolution of larger images on reasonable timescales. Methods: As in the original algorithm, scales in the image are estimated through function fitting. Here we introduce an analytical method to model extended emission, and a modified method for estimating the initial values used for the fitting procedure, which ultimately leads to a lower computational cost. Results: The new implementation was tested with simulated EVLA data and the imaging performance compared well with the original Asp-Clean algorithm. Tests show that the current algorithm can recover features at different scales with lower computational cost.

  13. Cluster algorithms and computational complexity

    NASA Astrophysics Data System (ADS)

    Li, Xuenan

    Cluster algorithms for the 2D Ising model with a staggered field have been studied and a new cluster algorithm for path sampling has been worked out. The complexity properties of the Bak-Sneppen model and the Growing Network model have been studied by using computational complexity theory. The dynamic critical behavior of the two-replica cluster algorithm is studied. Several versions of the algorithm are applied to the two-dimensional, square lattice Ising model with a staggered field. The dynamic exponent for the full algorithm is found to be less than 0.5. It is found that odd translations of one replica with respect to the other together with global flips are essential for obtaining a small value of the dynamic exponent. The path sampling problem for the 1D Ising model is studied using both a local algorithm and a novel cluster algorithm. The local algorithm is extremely inefficient at low temperature, where the integrated autocorrelation time is found to be proportional to the fourth power of correlation length. The dynamic exponent of the cluster algorithm is found to be zero and therefore proved to be much more efficient than the local algorithm. The parallel computational complexity of the Bak-Sneppen evolution model is studied. It is shown that Bak-Sneppen histories can be generated by a massively parallel computer in a time that is polylog in the length of the history, which means that the logical depth of producing a Bak-Sneppen history is exponentially less than the length of the history. The parallel dynamics for generating Bak-Sneppen histories is contrasted to standard Bak-Sneppen dynamics. The parallel computational complexity of the Growing Network model is studied. The growth of the network with linear kernels is shown to be not complex and an algorithm with polylog parallel running time is found. The growth of the network with gamma ≥ 2 super-linear kernels can be realized by a randomized parallel algorithm with polylog expected running time.

  14. Linearization algorithms for line transfer

    SciTech Connect

    Scott, H.A.

    1990-11-06

    Complete linearization is a very powerful technique for solving multi-line transfer problems that can be used efficiently with a variety of transfer formalisms. The linearization algorithm we describe is computationally very similar to ETLA, but allows an effective treatment of strongly-interacting lines. This algorithm has been implemented (in several codes) with two different transfer formalisms in all three one-dimensional geometries. We also describe a variation of the algorithm that handles saturable laser transport. Finally, we present a combination of linearization with a local approximate operator formalism, which has been implemented in two dimensions and is being developed in three dimensions. 11 refs.

  15. Efficient bit-level, word-level, and block-level systolic arrays for matrix-matrix multiplication

    SciTech Connect

    De Groot, A.J.; Parker, S.R.; Johansson, E.M.

    1988-02-01

    This paper investigates the mapping of matrix-matrix multiplication onto bit-level, word-level, and block-level systolic arrays. Highly efficient and regular bit-level, word-level, and block-level systolic arrays are described. Efficiencies of many block-level and word-level systolic arrays reported in this paper approach 100%, three times the efficiencies of systolic arrays reported previously. Bit-level systolic arrays reported in this paper require less computation time than do bit-level systolic arrays reported previously and, for special matrices, require fewer cells. Execution times of block-level systolic algorithms on a sixty-four-element multiprocessor agree with theory.

  16. Fibonacci Numbers and Computer Algorithms.

    ERIC Educational Resources Information Center

    Atkins, John; Geist, Robert

    1987-01-01

    The Fibonacci Sequence describes a vast array of phenomena from nature. Computer scientists have discovered and used many algorithms which can be classified as applications of Fibonacci's sequence. In this article, several of these applications are considered. (PK)
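
    As one concrete example of the kind of algorithm the article surveys, the fast-doubling identities F(2k) = F(k)(2F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2 give an O(log n) Fibonacci computation. This particular example is a standard one and is not necessarily among those discussed in the article.

      def fib_pair(n):
          """Fast doubling: return (F(n), F(n+1)) in O(log n) arithmetic steps."""
          if n == 0:
              return (0, 1)
          a, b = fib_pair(n // 2)          # a = F(k), b = F(k+1)
          c = a * (2 * b - a)              # F(2k)
          d = a * a + b * b                # F(2k+1)
          return (d, c + d) if n % 2 else (c, d)

      print([fib_pair(i)[0] for i in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]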

  17. Scheduling Jobs with Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ferrolho, António; Crisóstomo, Manuel

    Most scheduling problems are NP-hard: the time required to solve them optimally increases exponentially with problem size. Scheduling problems have important applications, and a number of heuristic algorithms have been proposed to determine relatively good solutions in polynomial time. Recently, genetic algorithms (GA) have been successfully used to solve scheduling problems, as shown by a growing number of papers. GAs are known as some of the most efficient algorithms for solving scheduling problems, but when a GA is applied to a scheduling problem, various crossover and mutation operators are applicable. This paper presents and examines a new concept of genetic operators for scheduling problems. A software tool called the hybrid and flexible genetic algorithm (HybFlexGA) was developed to examine the performance of various crossover and mutation operators by computing simulations of job scheduling problems.

  18. Recursive Algorithm For Linear Regression

    NASA Technical Reports Server (NTRS)

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations and facilitates search for minimum order of linear-regression model that fits set of data satisfactorily.
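
    The brief's recursive coefficient update is not reproduced here, so the sketch below only illustrates the order-selection idea: refit models of increasing order and stop at the smallest order whose residual no longer improves meaningfully. The plain least-squares refit at each order and the tolerance rule are assumptions standing in for the recursive equations.

      import numpy as np

      def min_order_fit(x, y, max_order=10, tol=1e-3):
          """Increase the polynomial order until the RMS residual stops improving by
          more than `tol`; returns (order, coefficients, rms) for the chosen order."""
          best, prev_rms = None, np.inf
          for order in range(max_order + 1):
              V = np.vander(x, order + 1, increasing=True)     # columns 1, x, x^2, ...
              coef, *_ = np.linalg.lstsq(V, y, rcond=None)
              rms = float(np.sqrt(np.mean((V @ coef - y) ** 2)))
              if best is not None and prev_rms - rms < tol:
                  return best                                   # previous order already fits well
              best, prev_rms = (order, coef, rms), rms
          return best

      # toy usage: data from a quadratic should stop at order 2
      xs = np.linspace(-1.0, 1.0, 50)
      ys = 1.0 + 2.0 * xs + 3.0 * xs**2 + 0.01 * np.random.default_rng(1).standard_normal(50)
      order, coef, rms = min_order_fit(xs, ys)
      print(order, rms)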

  19. Algorithmic complexity of a protein

    NASA Astrophysics Data System (ADS)

    Dewey, T. Gregory

    1996-07-01

    The information contained in a protein's amino acid sequence dictates its three-dimensional structure. To quantitate the transfer of information that occurs in the protein folding process, the Kolmogorov information entropy or algorithmic complexity of the protein structure is investigated. The algorithmic complexity of an object provides a means of quantitating its information content. Recent results have indicated that the algorithmic complexity of microstates of certain statistical mechanical systems can be estimated from the thermodynamic entropy. In the present work, it is shown that the algorithmic complexity of a protein is given by its configurational entropy. Using this result, a quantitative estimate of the information content of a protein's structure is made and is compared to the information content of the sequence. Additionally, the mutual information between sequence and structure is determined. It is seen that virtually all the information contained in the protein structure is shared with the sequence.

  20. Cascade Error Projection: A New Learning Algorithm

    NASA Technical Reports Server (NTRS)

    Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.

    1995-01-01

    A new neural network architecture and a hardware-implementable learning algorithm are proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.

  1. Belief network algorithms: A study of performance

    SciTech Connect

    Jitnah, N.

    1996-12-31

    This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.

  2. Genetic algorithms as discovery programs

    SciTech Connect

    Hilliard, M.R.; Liepins, G.

    1986-01-01

    Genetic algorithms are mathematical counterparts to natural selection and gene recombination. As such, they have provided one of the few significant breakthroughs in machine learning. Used with appropriate reward functions and apportionment of credit, they have been successfully applied to gas pipeline operation, x-ray registration and mathematical optimization problems. This paper discusses the basics of genetic algorithms, describes a few successes, and reports on current progress at Oak Ridge National Laboratory in applications to set covering and simulated robots.

  3. A retrodictive stochastic simulation algorithm

    SciTech Connect

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-05-20

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.

  4. Fully relativistic lattice Boltzmann algorithm

    SciTech Connect

    Romatschke, P.; Mendoza, M.; Succi, S.

    2011-09-15

    Starting from the Maxwell-Juettner equilibrium distribution, we develop a relativistic lattice Boltzmann (LB) algorithm capable of handling ultrarelativistic systems with flat, but expanding, spacetimes. The algorithm is validated through simulations of a quark-gluon plasma, yielding excellent agreement with hydrodynamic simulations. The present scheme opens the possibility of transferring the recognized computational advantages of lattice kinetic theory to the context of both weakly and ultrarelativistic systems.

  5. High-speed CORDIC algorithm

    NASA Astrophysics Data System (ADS)

    El-Guibaly, Fayez; Sabaa, A.

    1996-10-01

    In this paper, we introduce modifications on the classic CORDIC algorithm to reduce the number of iterations, and hence the rounding noise. The modified algorithm needs, at most, half the number of iterations to achieve the same accuracy as the classical one. The modifications are applicable to linear, circular and hyperbolic CORDIC in both vectoring and rotation modes. Simulations illustrate the effect of the new modifications.
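
    For context, the classic circular CORDIC iteration that the paper's modifications build on can be sketched as follows. The floating-point arithmetic and fixed iteration count here are for illustration only (a hardware version would use integers, shifts, and a precomputed angle table), and the halved-iteration modification itself is not reproduced.

      import math

      def cordic_rotate(x, y, angle, iterations=32):
          """Classic circular CORDIC in rotation mode: rotate (x, y) by `angle`
          (radians, |angle| <= ~1.74) using shift-and-add style updates plus one
          final scaling by the constant gain K."""
          K = 1.0
          for i in range(iterations):
              K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))      # accumulated gain correction
          z = angle
          for i in range(iterations):
              d = 1.0 if z >= 0 else -1.0                      # rotation direction
              x, y = x - d * y * 2.0 ** (-i), y + d * x * 2.0 ** (-i)
              z -= d * math.atan(2.0 ** (-i))                  # residual angle
          return x * K, y * K

      # usage: cosine and sine of 30 degrees
      print(cordic_rotate(1.0, 0.0, math.radians(30)))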

  6. Forecasting the solar cycle with genetic algorithms

    NASA Astrophysics Data System (ADS)

    Orfila, A.; Ballester, J. L.; Oliver, R.; Alvarez, A.; Tintoré, J.

    2002-04-01

    In the past, it has been postulated that the irregular dynamics of the solar cycle may embed a low order chaotic process (Weiss 1988, 1994; Spiegel 1994) which, if true, implies that the future behaviour of solar activity should be predictable. Here, starting from the historical record of Zürich sunspot numbers, we build a dynamical model of the solar cycle which allows us to make a long-term forecast of its behaviour. Firstly, the deterministic part of the time series has been reconstructed using the Singular Spectrum Analysis and then an evolutionary algorithm (Alvarez et al. 2001), based on Darwinian theories of natural selection and survival and ideally suited for non-linear time series, has been applied. Then, the predictive capability of the algorithm has been tested by comparing the behaviour of solar cycles 19-22 with forecasts made with the algorithm, obtaining results which show reasonable agreement with the known behaviour of those cycles. Next, the forecast of the future behaviour of solar cycle 23 has been performed and the results point out that the level of activity during this cycle will be somewhat smaller than in the two previous ones.

  7. Description of the AILS Alerting Algorithm

    NASA Technical Reports Server (NTRS)

    Samanant, Paul; Jackson, Mike

    2000-01-01

    This document provides a complete description of the Airborne Information for Lateral Spacing (AILS) alerting algorithms. The purpose of AILS is to provide separation assurance between aircraft during simultaneous approaches to closely spaced parallel runways. AILS will allow independent approaches to be flown in situations where dependent approaches were previously required (typically under Instrument Meteorological Conditions (IMC)). This is achieved by providing multiple levels of alerting for pairs of aircraft that are in parallel approach situations. This document's scope is comprehensive and covers everything from general overviews, definitions, and concepts down to algorithmic elements and equations. The entire algorithm is presented in complete and detailed pseudo-code format. This can be used by software programmers to program AILS into a software language. Additional supporting information is provided in the form of coordinate frame definitions, data requirements, and calling requirements, as well as all necessary pre-processing and post-processing requirements. This is important and required information for the implementation of AILS into an analysis, a simulation, or a real-time system.

  8. A VLSI design concept for parallel iterative algorithms

    NASA Astrophysics Data System (ADS)

    Sun, C. C.; Götze, J.

    2009-05-01

    Modern VLSI manufacturing technology has kept shrinking down to the nanoscale level at a very fast pace. Integration with advanced nano-technology now makes it possible to realize advanced parallel iterative algorithms directly, which was almost impossible 10 years ago. In this paper, we discuss the influence of evolving VLSI technologies on iterative algorithms and present design strategies from an algorithmic and architectural point of view. When implementing an iterative algorithm on a multiprocessor array, there is a trade-off between the performance/complexity of the processors and the load/throughput of the interconnects. This is due to the behavior of iterative algorithms. For example, we could simplify the parallel implementation of the iterative algorithm (i.e., the processor elements of the multiprocessor array) in any way as long as convergence is guaranteed. However, the modification of the algorithm (processors) usually increases the number of required iterations, which also means that the switching activity of the interconnects increases. As an example we show that a 25×25 full Jacobi EVD array could be realized in a single FPGA device with the simplified μ-rotation CORDIC architecture.

  9. Algorithms for verbal autopsies: a validation study in Kenyan children.

    PubMed Central

    Quigley, M. A.; Armstrong Schellenberg, J. R.; Snow, R. W.

    1996-01-01

    The verbal autopsy (VA) questionnaire is a widely used method for collecting information on cause-specific mortality where the medical certification of deaths in childhood is incomplete. This paper discusses review by physicians and expert algorithms as approaches to ascribing cause of death from the VA questionnaire and proposes an alternative, data-derived approach. In this validation study, the relatives of 295 children who had died in hospital were interviewed using a VA questionnaire. The children were assigned causes of death using data-derived algorithms obtained under logistic regression and using expert algorithms. For most causes of death, the data-derived algorithms and expert algorithms yielded similar levels of diagnostic accuracy. However, a data-derived algorithm for malaria gave a sensitivity of 71% (95% CI: 58-84%), which was significantly higher than the sensitivity of 47% obtained under an expert algorithm. The need for exploring this and other ways in which the VA technique can be improved is discussed. The implications of less-than-perfect sensitivity and specificity are explored using numerical examples. Misclassification bias should be taken into consideration when planning and evaluating epidemiological studies. PMID:8706229

  10. CORDIC Algorithms: Theory And Extensions

    NASA Astrophysics Data System (ADS)

    Delosme, Jean-Marc

    1989-11-01

    Optimum algorithms for signal processing are notoriously costly to implement since they usually require intensive linear algebra operations to be performed at very high rates. In these cases a cost-effective solution is to design a pipelined or parallel architecture with special-purpose VLSI processors. One may often lower the hardware cost of such a dedicated architecture by using processors that implement CORDIC-like arithmetic algorithms. Indeed, with CORDIC algorithms, the evaluation and the application of an operation, such as determining a rotation that brings a vector onto another one and rotating other vectors by that amount, require the same time on identical processors and can be fully overlapped in most cases, thus leading to highly efficient implementations. We have shown earlier that a necessary condition for a CORDIC-type algorithm to exist is that the function to be implemented can be represented in terms of a matrix exponential. This paper refines this condition to the ability to represent the desired function in terms of a rational representation of a matrix exponential. This insight gives us a powerful tool for the design of new CORDIC algorithms. This is demonstrated by rederiving classical CORDIC algorithms and introducing several new ones, for Jacobi rotations, three and higher dimensional rotations, etc.

  11. Multithreaded Algorithms for Graph Coloring

    SciTech Connect

    Catalyurek, Umit V.; Feo, John T.; Gebremedhin, Assefaw H.; Halappanavar, Mahantesh; Pothen, Alex

    2012-10-21

    Graph algorithms are challenging to parallelize when high performance and scalability are primary goals. Low concurrency, poor data locality, irregular access patterns, and a high ratio of data access to computation are among the chief reasons for the challenge. The performance implication of these features is exacerbated on distributed memory machines. More success is being achieved on shared-memory, multi-core architectures supporting multithreading. We consider a prototypical graph problem, coloring, and show how a greedy algorithm for solving it can be effectively parallelized on multithreaded architectures. We present in particular two different parallel algorithms. The first relies on speculation and iteration, and is suitable for any shared-memory, multithreaded system. The second uses dataflow principles and is targeted at the massively multithreaded Cray XMT system. We benchmark the algorithms on three different platforms and demonstrate scalable runtime performance. In terms of quality of solution, both algorithms use nearly the same number of colors as the serial algorithm.
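
    For reference, the serial greedy pattern that both parallel variants build on can be sketched directly; the vertex ordering and the dictionary-based adjacency structure below are illustrative choices, not taken from the paper.

      def greedy_coloring(adj):
          """Serial greedy (first-fit) coloring: each vertex takes the smallest color
          not used by an already-colored neighbor.  The paper parallelizes this
          pattern via speculation plus iterative conflict resolution; this sketch is
          only the serial baseline."""
          color = {}
          for v in adj:                              # vertex order affects the color count
              used = {color[u] for u in adj[v] if u in color}
              c = 0
              while c in used:
                  c += 1
              color[v] = c
          return color

      # toy usage: a 4-cycle is 2-colorable
      print(greedy_coloring({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}))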

  12. The value of care algorithms.

    PubMed

    Myers, Timothy

    2006-09-01

    The use of protocols or care algorithms in medical facilities has increased in the managed care environment. The definition and application of care algorithms, with a particular focus on the treatment of acute bronchospasm, are explored in this review. The benefits and goals of using protocols, especially in the treatment of asthma, to standardize patient care based on clinical guidelines and evidence-based medicine are explained. Ideally, evidence-based protocols should translate research findings into best medical practices that would serve to better educate patients and their medical providers who are administering these protocols. Protocols should include evaluation components that can monitor, through some mechanism of quality assurance, the success and failure of the instrument so that modifications can be made as necessary. The development and design of an asthma care algorithm can be accomplished by using a four-phase approach: phase 1, identifying demographics, outcomes, and measurement tools; phase 2, reviewing, negotiating, and standardizing best practice; phase 3, testing and implementing the instrument and collecting data; and phase 4, analyzing the data and identifying areas of improvement and future research. The experiences of one medical institution that implemented an asthma care algorithm in the treatment of pediatric asthma are described. Their care algorithms served as tools for decision makers to provide optimal asthma treatment in children. In addition, the studies that used the asthma care algorithm to determine the efficacy and safety of ipratropium bromide and levalbuterol in children with asthma are described. PMID:16945065

  13. Triglyceride level

    MedlinePlus

    ... may also cause swelling of your pancreas (called pancreatitis). The triglyceride level is usually included in a ... lower triglyceride levels may be used to prevent pancreatitis for levels above 500 mg/dL Low triglyceride ...

  14. Retrieval Algorithms for the Halogen Occultation Experiment

    NASA Technical Reports Server (NTRS)

    Thompson, Robert E.; Gordley, Larry L.

    2009-01-01

    The Halogen Occultation Experiment (HALOE) on the Upper Atmosphere Research Satellite (UARS) provided high quality measurements of key middle atmosphere constituents, aerosol characteristics, and temperature for 14 years (1991-2005). This report is an outline of the Level 2 retrieval algorithms, and it also describes the great care that was taken in characterizing the instrument prior to launch and throughout its mission life. It represents an historical record of the techniques used to analyze the data and of the steps that must be considered for the development of a similar experiment for future satellite missions.

  15. A Monte Carlo algorithm for degenerate plasmas

    SciTech Connect

    Turrell, A.E.; Sherlock, M.; Rose, S.J.

    2013-09-15

    A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.

  16. Concepts and algorithms in digital photogrammetry

    NASA Technical Reports Server (NTRS)

    Schenk, T.

    1994-01-01

    Despite much progress in digital photogrammetry, there is still a considerable lack of understanding of theories and methods which would allow a substantial increase in the automation of photogrammetric processes. The purpose of this paper is to raise awareness that the automation problem is one that cannot be solved in a bottom-up fashion by a trial-and-error approach. We present a short overview of concepts and algorithms used in digital photogrammetry. This is followed by a more detailed presentation of perceptual organization, a typical middle-level task.

  17. Inclusive jet production using the kt algorithm

    SciTech Connect

    Norniella, Olga; /Barcelona, IFAE

    2006-05-01

    Results on inclusive jet production using the kT algorithm in proton-antiproton collisions at √s = 1.96 TeV are presented, based on 1 fb⁻¹ of CDF Run II data. The measurements are carried out for jets with pT(jet) > 54 GeV/c in five different jet rapidity regions up to |y(jet)| = 2.1. The measured cross sections are corrected to the hadron level and compared to next-to-leading order perturbative QCD predictions (NLO pQCD).

  18. Monitoring and Commissioning Verification Algorithms for CHP Systems

    SciTech Connect

    Brambley, Michael R.; Katipamula, Srinivas; Jiang, Wei

    2008-03-31

    This document provides the algorithms for CHP system performance monitoring and commissioning verification (CxV). It starts by presenting system-level and component-level performance metrics, followed by descriptions of algorithms for performance monitoring and commissioning verification, using the metrics presented earlier. Verification of commissioning is accomplished essentially by comparing actual measured performance to benchmarks for performance provided by the system integrator and/or component manufacturers. The results of these comparisons are then automatically interpreted to provide conclusions regarding whether the CHP system and its components have been properly commissioned and, where problems are found, guidance is provided for corrections. A discussion of uncertainty handling is then provided, which is followed by a description of how simulation models can be used to generate data for testing the algorithms. A model is described for simulating a CHP system consisting of a micro-turbine, an exhaust-gas heat recovery unit that produces hot water, an absorption chiller, and a cooling tower. The process for using this model to generate data for testing the algorithms for a selected set of faults is described. The next section applies the algorithms developed to CHP laboratory and field data to illustrate their use. The report then concludes with a discussion of the need for laboratory testing of the algorithms on physical CHP systems and identification of the recommended next steps.

  19. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
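
    The conventional SA loop summarized above is easy to sketch. The geometric cooling schedule, Gaussian move, and toy objective below are illustrative assumptions, and the recursive-branching parallelization that distinguishes RBSA is not reproduced.

      import math
      import random

      def simulated_annealing(objective, neighbor, x0, t0=1.0, cooling=0.995, steps=10000):
          """Conventional SA: always accept improving moves, accept worsening moves
          with probability exp(-delta / T), and cool T geometrically."""
          x, fx = x0, objective(x0)
          best, fbest = x, fx
          T = t0
          for _ in range(steps):
              y = neighbor(x)
              fy = objective(y)
              if fy <= fx or random.random() < math.exp(-(fy - fx) / T):
                  x, fx = y, fy
                  if fx < fbest:
                      best, fbest = x, fx
              T *= cooling
          return best, fbest

      # toy usage: minimize a 1-D quadratic with Gaussian moves
      print(simulated_annealing(lambda v: (v - 3.0) ** 2,
                                lambda v: v + random.gauss(0.0, 0.5), x0=0.0))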

  20. Algorithms for propagating uncertainty across heterogeneous domains

    SciTech Connect

    Cho, Heyrim; Yang, Xiu; Venturi, D.; Karniadakis, George E.

    2015-12-30

    We address an important research area in stochastic multi-scale modeling, namely the propagation of uncertainty across heterogeneous domains characterized by partially correlated processes with vastly different correlation lengths. This class of problems arises very often when computing stochastic PDEs and particle models with stochastic/stochastic domain interaction but also with stochastic/deterministic coupling. The domains may be fully embedded, adjacent, or partially overlapping. The fundamental open question we address is the construction of proper transmission boundary conditions that preserve global statistical properties of the solution across different subdomains. Often, the codes that model different parts of the domains are black-box and hence a domain decomposition technique is required. No rigorous theory or even effective empirical algorithms have yet been developed for this purpose, although interfaces defined in terms of functionals of random fields (e.g., multi-point cumulants) can overcome the computationally prohibitive problem of preserving sample-path continuity across domains. The key idea of the different methods we propose relies on combining local reduced-order representations of random fields with multi-level domain decomposition. Specifically, we propose two new algorithms: the first enforces the continuity of the conditional mean and variance of the solution across adjacent subdomains by using Schwarz iterations; the second is based on PDE-constrained multi-objective optimization and allows us to set more general interface conditions. The effectiveness of these new algorithms is demonstrated in numerical examples involving elliptic problems with random diffusion coefficients, stochastically advected scalar fields, and nonlinear advection-reaction problems with random reaction rates.

  1. Fast algorithm of low power image reformation for OLED display

    NASA Astrophysics Data System (ADS)

    Lee, Myungwoo; Kim, Taewhan

    2014-04-01

    We propose a fast algorithm for low-power image reformation for organic light-emitting diode (OLED) displays. The proposed algorithm scales the image histogram in a way that reduces power consumption on an OLED display by remapping the gray levels of the pixels, based on a fast analysis of the input image's histogram, while maintaining the contrast of the image. The key idea is that a large number of gray levels are never used in typical images, and these unused levels can be effectively exploited to reduce power consumption. To maintain image contrast, the gray-level remapping takes into account the size of the objects in the image to which each gray level is applied, that is, altering only slightly the gray levels that appear in large objects. Through experiments with 24 Kodak images, it is shown that our proposed algorithm is able to reduce power consumption by 10% even with 9% contrast enhancement. Our algorithm runs in linear time, so it can be applied to high-resolution moving pictures.
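
    The abstract does not spell out the remapping rule, so the sketch below is only a crude illustration of the core idea: gray levels that never occur are dropped from the mapping, and rarely used levels are darkened more than heavily used ones, since OLED power roughly grows with pixel intensity. The specific weighting, the save_ratio parameter, and the monotonicity fix are assumptions, not the paper's method.

      import numpy as np

      def remap_gray_levels(img, save_ratio=0.9):
          """Build a lookup table over an 8-bit grayscale image: unused levels are
          ignored, and each used level is compressed toward black in inverse
          proportion to how often it occurs (frequent levels move least)."""
          hist = np.bincount(img.ravel(), minlength=256).astype(float)
          used = np.nonzero(hist)[0]                          # gray levels that actually occur
          weights = hist[used] / hist[used].sum()
          new_levels = np.round(used * (save_ratio + (1 - save_ratio) * weights / weights.max()))
          new_levels = np.maximum.accumulate(new_levels)      # keep the mapping monotone
          lut = np.zeros(256, dtype=np.uint8)
          lut[used] = new_levels.astype(np.uint8)
          return lut[img]

      # usage: img = np.asarray(some 8-bit grayscale image); out = remap_gray_levels(img)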

  2. New packet scheduling algorithm in wireless CDMA data networks

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Gao, Zhuo; Li, Shaoqian; Li, Lemin

    2002-08-01

    The future 3G/4G wireless communication systems will provide Internet access for mobile users. Packet scheduling algorithms are essential for the QoS of diversified data traffic and the efficient utilization of radio spectrum. This paper first presents a new packet scheduling algorithm, DSTTF, under the assumption of continuous transmission rates and scheduling intervals for CDMA data networks. Then, considering the constraints of discrete transmission rates and fixed scheduling intervals imposed by practical systems, P-DSTTF, a modified version of DSTTF, is brought forward. Both scheduling algorithms take into consideration channel conditions, packet size, and traffic delay bounds. Extensive simulation results demonstrate that the proposed scheduling algorithms are superior to some typical ones in current research. In addition, both static and dynamic wireless channel models of multi-level link capacity are established. These channel models characterize the wireless channel better than the two-state Markov model widely adopted in the current literature.

  3. Genetic-algorithm-based tri-state neural networks

    NASA Astrophysics Data System (ADS)

    Uang, Chii-Maw; Chen, Wen-Gong; Horng, Ji-Bin

    2002-09-01

    A new method, using genetic algorithms, for constructing a tri-state neural network is presented. The global searching features of genetic algorithms are adopted to help us easily find the interconnection weight matrix of a bipolar neural network. The construction method is based on biological nervous systems, which evolve the parameters encoded in genes. Taking advantage of conventional (binary) genetic algorithms, a two-level chromosome structure is proposed for training the tri-state neural network. A Matlab program is developed for simulating the network performance. The results show that the proposed genetic-algorithm method not only constructs the interconnection weight matrix accurately but also yields better network performance.

  4. Parallel algorithms for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Amin-Javaheri, Masoud; Orin, David E.

    1989-01-01

    The development of an O(log2N) parallel algorithm for the manipulator inertia matrix is presented. It is based on the most efficient serial algorithm, which uses the composite rigid body method. Recursive doubling is used to reformulate the linear recurrence equations which are required to compute the diagonal elements of the matrix. It results in O(log2N) levels of computation. Computation of the off-diagonal elements involves N linear recurrences of varying size, and a new method, which avoids redundant computation of position and orientation transforms for the manipulator, is developed. The O(log2N) algorithm is presented in both equation and graphic forms which clearly show the parallelism inherent in the algorithm.

  5. Linear antenna array optimization using flower pollination algorithm.

    PubMed

    Saxena, Prerna; Kothari, Ashwin

    2016-01-01

    Flower pollination algorithm (FPA) is a new nature-inspired evolutionary algorithm used to solve multi-objective optimization problems. The aim of this paper is to introduce FPA to the electromagnetics and antenna community for the optimization of linear antenna arrays. FPA is applied for the first time to linear arrays so as to obtain optimized antenna positions in order to achieve an array pattern with minimum side lobe level along with placement of deep nulls in desired directions. Various design examples are presented that illustrate the use of FPA for linear antenna array optimization, and subsequently the results are validated by benchmarking along with results obtained using other state-of-the-art, nature-inspired evolutionary algorithms such as particle swarm optimization, ant colony optimization and cat swarm optimization. The results suggest that in most cases, FPA outperforms the other evolutionary algorithms and at times it yields a similar performance. PMID:27066339
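
    The objective such position optimizers evaluate can be sketched directly: compute the array factor of isotropic elements at the candidate positions and read off the peak side-lobe level relative to the main beam. The angular sampling density, the main-lobe exclusion heuristic, and the half-wavelength example below are assumptions for illustration, not the paper's exact cost function.

      import numpy as np

      def side_lobe_level_db(positions_wl, n_angles=2000):
          """Peak side-lobe level (dB, relative to the main beam) of an isotropic
          linear array whose element positions are given in wavelengths."""
          theta = np.linspace(0.0, np.pi, n_angles)
          k = 2.0 * np.pi                                   # wavenumber in 1/wavelength units
          af = np.abs(np.exp(1j * k * np.outer(np.cos(theta), positions_wl)).sum(axis=1))
          af_db = 20.0 * np.log10(af / af.max() + 1e-12)
          main = int(np.argmax(af_db))
          # crude main-lobe exclusion: walk out from the peak to the first nulls
          left, right = main, main
          while left > 0 and af_db[left - 1] < af_db[left]:
              left -= 1
          while right < n_angles - 1 and af_db[right + 1] < af_db[right]:
              right += 1
          side = np.concatenate([af_db[:left], af_db[right + 1:]])
          return side.max() if side.size else -np.inf

      # usage: a 10-element uniform array with half-wavelength spacing gives roughly -13 dB
      print(side_lobe_level_db(np.arange(10) * 0.5))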

  6. Systolic array architecture for convolutional decoding algorithms: Viterbi algorithm and stack algorithm

    SciTech Connect

    Chang, C.Y.

    1986-01-01

    New results on efficient forms of decoding convolutional codes based on Viterbi and stack algorithms using systolic array architecture are presented. Some theoretical aspects of systolic arrays are also investigated. First, systolic array implementation of Viterbi algorithm is considered, and various properties of convolutional codes are derived. A technique called strongly connected trellis decoding is introduced to increase the efficient utilization of all the systolic array processors. The issues dealing with the composite branch metric generation, survivor updating, overall system architecture, throughput rate, and computations overhead ratio are also investigated. Second, the existing stack algorithm is modified and restated in a more concise version so that it can be efficiently implemented by a special type of systolic array called systolic priority queue. Three general schemes of systolic priority queue based on random access memory, shift register, and ripple register are proposed. Finally, a systematic approach is presented to design systolic arrays for certain general classes of recursively formulated algorithms.

  7. Optimization of multilayer cylindrical cloaks using genetic algorithms and NEWUOA

    NASA Astrophysics Data System (ADS)

    Sakr, Ahmed A.; Abdelmageed, Alaa K.

    2016-06-01

    The problem of minimizing the scattering from a multilayer cylindrical cloak is studied. Both TM and TE polarizations are considered. A two-stage optimization procedure using genetic algorithms and NEWUOA (new unconstrained optimization algorithm) is adopted for realizing the cloak using homogeneous isotropic layers. The layers are arranged such that they follow a repeated pattern of alternating DPS and DNG materials. The results show that a good level of invisibility can be realized using a reasonable number of layers. Maintaining the cloak performance over a finite range of frequencies without sacrificing the level of invisibility is achieved.

  8. A comparative analysis of biclustering algorithms for gene expression data.

    PubMed

    Eren, Kemal; Deveci, Mehmet; Küçüktunç, Onur; Çatalyürek, Ümit V

    2013-05-01

    The need to analyze high-dimension biological data is driving the development of new data mining methods. Biclustering algorithms have been successfully applied to gene expression data to discover local patterns, in which a subset of genes exhibit similar expression levels over a subset of conditions. However, it is not clear which algorithms are best suited for this task. Many algorithms have been published in the past decade, most of which have been compared only to a small number of algorithms. Surveys and comparisons exist in the literature, but because of the large number and variety of biclustering algorithms, they are quickly outdated. In this article we partially address this problem of evaluating the strengths and weaknesses of existing biclustering methods. We used the BiBench package to compare 12 algorithms, many of which were recently published or have not been extensively studied. The algorithms were tested on a suite of synthetic data sets to measure their performance on data with varying conditions, such as different bicluster models, varying noise, varying numbers of biclusters and overlapping biclusters. The algorithms were also tested on eight large gene expression data sets obtained from the Gene Expression Omnibus. Gene Ontology enrichment analysis was performed on the resulting biclusters, and the best enrichment terms are reported. Our analyses show that the biclustering method and its parameters should be selected based on the desired model, whether that model allows overlapping biclusters, and its robustness to noise. In addition, we observe that the biclustering algorithms capable of finding more than one model are more successful at capturing biologically relevant clusters. PMID:22772837

  9. Semi-flocking algorithm for motion control of mobile sensors in large-scale surveillance systems.

    PubMed

    Semnani, Samaneh Hosseini; Basir, Otman A

    2015-01-01

    The ability of sensors to self-organize is an important asset in surveillance sensor networks. Self-organization implies self-control at the sensor level and coordination at the network level. Biologically inspired approaches have recently gained significant attention as a tool to address the issue of sensor control and coordination in sensor networks. These approaches are exemplified by two well-known algorithms, namely, the Flocking algorithm and the Anti-Flocking algorithm. Generally speaking, although these two biologically inspired algorithms have demonstrated promising performance, they expose deficiencies when it comes to their ability to maintain simultaneous robust dynamic area coverage and target coverage. These two coverage performance objectives are inherently conflicting. This paper presents Semi-Flocking, a biologically inspired algorithm that benefits from key characteristics of both the Flocking and Anti-Flocking algorithms. The Semi-Flocking algorithm approaches the problem by assigning a small flock of sensors to each target, while at the same time leaving some sensors free to explore the environment. This allows the algorithm to strike a balance between robust area coverage and target coverage. Such balance is facilitated via flock-sensor coordination. The performance of the proposed Semi-Flocking algorithm is examined and compared with two other flocking-based algorithms, once using randomly moving targets and once using a standard walking pedestrian dataset. The results of both experiments show that the Semi-Flocking algorithm outperforms both the Flocking algorithm and the Anti-Flocking algorithm with respect to the area coverage and target coverage objectives. Furthermore, the results show that the proposed algorithm demonstrates shorter target detection time and fewer undetected targets than the other two flocking-based algorithms. PMID:25014985

  10. Mathematical algorithms for approximate reasoning

    NASA Technical Reports Server (NTRS)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state-of-the-art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To distinguish these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
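
    As a minimal sketch of the combination rules named above (illustrative only, not the paper's formulation), the certainty assigned to a disjunction of two assertions changes with the assumed dependency between them:

```python
def combine_or(p_a, p_b, assumption="independent"):
    """Certainty of (A or B) under different dependency assumptions.

    Illustrative only; the paper derives its rules from the axioms of set
    theory and probability theory.
    """
    if assumption == "independent":            # statistically independent assertions
        return p_a + p_b - p_a * p_b
    if assumption == "mutually_exclusive":     # no overlap in the state space
        return min(1.0, p_a + p_b)
    if assumption == "max_overlap":            # fuzzy-logic style combination
        return max(p_a, p_b)
    if assumption == "unknown":                # no dependency knowledge: interval bound
        return (max(p_a, p_b), min(1.0, p_a + p_b))
    raise ValueError(f"unknown assumption: {assumption}")

print(combine_or(0.6, 0.5, "independent"))         # 0.8
print(combine_or(0.6, 0.5, "mutually_exclusive"))  # 1.0
print(combine_or(0.6, 0.5, "max_overlap"))         # 0.6
print(combine_or(0.6, 0.5, "unknown"))             # (0.6, 1.0)
```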

  11. Improved autonomous star identification algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong

    2015-06-01

    The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms that use the LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, the algorithm is designed to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi'an Science and Technology Plan, China (Grant No. CXY1350(4)).

  12. GPU Accelerated Event Detection Algorithm

    2011-05-25

    Smart grid applications require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) event detection algorithms are needed that can scale with the size of the data; (ii) algorithms are needed that not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; (iii) algorithms are needed that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. Recent advances in tensor decomposition techniques, which reduce computational complexity, are used to monitor the change between successive windows and detect anomalies in the same manner as described above. Parallel solutions are therefore developed on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.
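
    Step (a) of the approach can be illustrated with a small sketch (hypothetical, not the GAEDA code): each window's dominant SVD subspace is compared with that of the previous window, and the resulting angle forms the univariate change series on which standard anomaly detectors are then run.

```python
import numpy as np

def window_change_series(data, window=50, rank=1):
    """Univariate change series from a multivariate stream (illustrative sketch).

    The dominant `rank`-dimensional right-singular subspace of each window is
    compared with that of the previous window; the largest principal angle
    between them becomes one sample of the univariate series on which standard
    anomaly detectors can then be run.
    """
    scores, prev_v = [], None
    for start in range(0, data.shape[0] - window + 1, window):
        block = data[start:start + window]
        _, _, vt = np.linalg.svd(block - block.mean(axis=0), full_matrices=False)
        v = vt[:rank].T                                   # dominant sensor-space directions
        if prev_v is not None:
            s = np.linalg.svd(prev_v.T @ v, compute_uv=False)
            scores.append(float(np.arccos(np.clip(s.min(), -1.0, 1.0))))
        prev_v = v
    return np.array(scores)

# A change in the correlation structure halfway through the stream shows up as
# one large angle in the change series.
rng = np.random.default_rng(0)
a = rng.normal(size=(200, 5)); a[:, 1] = a[:, 0]          # sensors 0 and 1 move together
b = rng.normal(size=(200, 5)); b[:, 3] = b[:, 2]          # the coupling moves to 2 and 3
print(window_change_series(np.vstack([a, b])))
```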

  13. Analysis and applications of a general boresight algorithm for the DSS-13 beam waveguide antenna

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1992-01-01

    A general antenna beam boresight algorithm is presented. Equations for axial pointing error, peak received signal level, and antenna half-power beamwidth are given. A pointing error variance equation is derived that illustrates the dependence of the measurement estimation performance on the various algorithm inputs, including RF signal level uncertainty. Plots showing pointing error uncertainty as a function of algorithm inputs are presented. Insight gained from the performance analysis is discussed in terms of its application to the areas of antenna controller and receiver interfacing, pointing error compensation, and antenna calibrations. Current and planned applications of the boresight algorithm, including its role in the upcoming Ka-band downlink experiment (KABLE), are highlighted.

  14. SDR input power estimation algorithms

    NASA Astrophysics Data System (ADS)

    Briones, J. C.; Nappier, J. M.

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed: a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range; a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and an algorithm based on neural networks, designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
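
    A minimal sketch of the straight-line estimator idea, under the assumption of a simple linear model power ~ c0 + c1*AGC + c2*T fitted to characterization data (all names and numbers here are invented; the flight characterization is piecewise and nonlinear):

```python
import numpy as np

def fit_power_estimator(agc, temp, power):
    """Least-squares fit of input power vs. digital AGC and temperature.

    Hypothetical characterization step; returns coefficients (c0, c1, c2) of
    power ~= c0 + c1*agc + c2*temp over the fitted sub-range.
    """
    X = np.column_stack([np.ones_like(agc), agc, temp])
    coeffs, *_ = np.linalg.lstsq(X, power, rcond=None)
    return coeffs

def estimate_power(coeffs, agc, temp):
    """Apply the straight-line estimator to a new AGC/temperature reading."""
    return coeffs[0] + coeffs[1] * agc + coeffs[2] * temp

# Synthetic characterization data (input power in dBm, AGC in counts, temp in C).
rng = np.random.default_rng(1)
agc = rng.uniform(100.0, 200.0, 50)
temp = rng.uniform(10.0, 40.0, 50)
power = -120.0 + 0.3 * agc - 0.05 * temp + rng.normal(0.0, 0.1, 50)
c = fit_power_estimator(agc, temp, power)
print(estimate_power(c, agc=150.0, temp=25.0))   # close to -76.25 dBm
```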

  15. Ensemble algorithms in reinforcement learning.

    PubMed

    Wiering, Marco A; van Hasselt, Hado

    2008-08-01

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms. PMID:18632380
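
    Two of the combination rules named above, majority voting and Boltzmann multiplication, can be sketched as follows (an illustrative reconstruction under assumed interfaces, not the authors' code):

```python
import numpy as np

def ensemble_action(q_tables, state, method="majority", tau=1.0, rng=None):
    """Pick one action by combining the preferences of several RL algorithms.

    `q_tables` is a list of (n_states, n_actions) arrays, one per algorithm
    (e.g., Q-learning, Sarsa, actor-critic, ...).
    """
    rng = rng or np.random.default_rng()
    prefs = np.array([q[state] for q in q_tables])          # (n_algorithms, n_actions)
    if method == "majority":                                 # majority voting (MV)
        votes = np.zeros(prefs.shape[1])
        for row in prefs:
            votes[np.argmax(row)] += 1.0
        probs = votes / votes.sum()
    elif method == "boltzmann_mult":                         # Boltzmann multiplication (BM)
        boltz = np.exp(prefs / tau)
        boltz /= boltz.sum(axis=1, keepdims=True)            # per-algorithm softmax policies
        probs = boltz.prod(axis=0)                           # multiply the policies
        probs /= probs.sum()
    else:
        raise ValueError(method)
    return int(rng.choice(len(probs), p=probs))

q_learning = np.array([[1.0, 0.2, 0.1]])
sarsa      = np.array([[0.4, 0.9, 0.1]])
print(ensemble_action([q_learning, sarsa], state=0, method="boltzmann_mult"))
```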

  16. Conflict-Aware Scheduling Algorithm

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Borden, Chester

    2006-01-01

    A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and reduce the DSN and spaceflight project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, where all requests are scheduled based on a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. These contrast with traditional "conflict-free" scheduling algorithms that assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground system capabilities/constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower priority items that are in conflict.

  17. Fourier Lucas-Kanade algorithm.

    PubMed

    Lucey, Simon; Navarathna, Rajitha; Ashraf, Ahmed Bilal; Sridharan, Sridha

    2013-06-01

    In this paper, we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one preprocesses the source image and template/model with a bank of filters (e.g., oriented edges, Gabor, etc.) as 1) it can handle substantial illumination variations, 2) the inefficient preprocessing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, 3) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and 4) this approach can be extended to the Inverse Compositional (IC) form of the LK algorithm where nearly all steps (including Fourier transform and filter bank preprocessing) can be precomputed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to nonrigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs). PMID:23599053

  18. SDR Input Power Estimation Algorithms

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed: a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range; a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and an algorithm based on neural networks, designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  19. Benchmarking image fusion algorithm performance

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2012-06-01

    Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with regard to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be baselined against whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information-theory-based image metrics are presented and compared to perception test results. The results show that an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the established performance benchmark.

  20. Algorithms for automated DNA assembly

    PubMed Central

    Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher

    2010-01-01

    Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets show that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162

  1. Algorithms, complexity, and the sciences.

    PubMed

    Papadimitriou, Christos

    2014-11-11

    Algorithms, perhaps together with Moore's law, compose the engine of the information technology revolution, whereas complexity--the antithesis of algorithms--is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal--and therefore less compelling--than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene's cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution. PMID:25349382
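
    A minimal sketch of the multiplicative weights update rule discussed above (illustrative, with invented gains; not tied to the paper's population-genetics derivation):

```python
import numpy as np

def multiplicative_weights(gain_matrix, epsilon=0.1):
    """Multiplicative weights update over n options (experts/alleles).

    Each round t multiplies the weight of option i by (1 + epsilon * gain[t, i]),
    with gains roughly in [-1, 1]; the normalized weights trade off cumulative
    gain against keeping the distribution spread out (entropy).
    """
    weights = np.ones(gain_matrix.shape[1])
    for gains in gain_matrix:
        weights *= 1.0 + epsilon * gains
    return weights / weights.sum()

rng = np.random.default_rng(0)
gains = rng.uniform(-1.0, 1.0, size=(200, 4))
gains[:, 2] += 0.2                        # option 2 has a slight edge on average
print(multiplicative_weights(gains))      # its weight dominates after 200 rounds
```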

  2. The CMS high level trigger

    NASA Astrophysics Data System (ADS)

    Gori, Valentina

    2014-05-01

    The CMS experiment has been designed with a 2-level trigger system: the Level 1 Trigger, implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a tradeoff between the complexity of the algorithms running on the available computing power, the sustainable output rate, and the selection efficiency. Here we will present the performance of the main triggers used during the 2012 data taking, ranging from simpler single-object selections to more complex algorithms combining different objects, and applying analysis-level reconstruction and selection. We will discuss the optimisation of the triggers and the specific techniques to cope with the increasing LHC pile-up, reducing its impact on the physics performance.

  3. The CMS High Level Trigger

    NASA Astrophysics Data System (ADS)

    Trocino, Daniele

    2014-06-01

    The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger, implemented in custom-designed electronics, and the High-Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a tradeoff between the complexity of the algorithms running with the available computing power, the sustainable output rate, and the selection efficiency. We present the performance of the main triggers used during the 2012 data taking, ranging from simple single-object selections to more complex algorithms combining different objects, and applying analysis-level reconstruction and selection. We discuss the optimisation of the trigger and the specific techniques to cope with the increasing LHC pile-up, reducing its impact on the physics performance.

  4. Algorithm for Rapid Searching Among Star-Catalog Entries

    NASA Technical Reports Server (NTRS)

    Liebe, Carl Christian

    2006-01-01

    An algorithm searches a star catalog to identify guide stars within the field of view of a telescope or camera. The algorithm is fast: the number of computations needed to perform the search is approximately proportional to the logarithm of the number of stars in the catalog. The algorithm requires the prior organization of the star catalog into a hierarchy utilizing independent spherical coverings (see figure), such that each successively higher level contains fewer elements. In the lowest and most numerous level of the hierarchy, the elements are individual stars in the star catalog. The next higher level contains a spherical covering (a constellation of n points on a sphere that minimizes the maximum distance of any point on the sphere from the closest one of the n points), the next higher level contains a smaller spherical covering, and so forth, ending at the highest level, which contains one element representing the point of entry into the search structure. With necessary exceptions at the lowest and highest levels, each element at each level is labeled in terms of the element to which it is linked in the next higher level and the first element to which it is linked in the next lower level. Each element is also labeled in terms of (1) its coordinates on the celestial sphere and (2) the largest angular distance to any element in any lower level in the hierarchy. The elements at all levels of the hierarchy are numbered on a single list, such that the elements of each constellation at each level are numbered consecutively. The algorithm is recursive. The input required to start the algorithm comprises the coordinates of a point on the celestial sphere. Attention is then focused on individual elements of the hierarchy, starting from the topmost one, as follows: The angle between the input point and the element under consideration is calculated. If the calculated angle is larger than the sum of (1) the predetermined angle to the most distant element plus (2) the

  5. Operator induced multigrid algorithms using semirefinement

    NASA Technical Reports Server (NTRS)

    Decker, Naomi; Vanrosendale, John

    1989-01-01

    A variant of multigrid, based on zebra relaxation, and a new family of restriction/prolongation operators is described. Using zebra relaxation in combination with an operator-induced prolongation leads to fast convergence, since the coarse grid can correct all error components. The resulting algorithms are not only fast, but are also robust, in the sense that the convergence rate is insensitive to the mesh aspect ratio. This is true even though line relaxation is performed in only one direction. Multigrid becomes a direct method if an operator-induced prolongation is used, together with the induced coarse grid operators. Unfortunately, this approach leads to stencils which double in size on each coarser grid. The use of an implicit three point restriction can be used to factor these large stencils, in order to retain the usual five or nine point stencils, while still achieving fast convergence. This algorithm achieves a V-cycle convergence rate of 0.03 on Poisson's equation, using 1.5 zebra sweeps per level, while the convergence rate improves to 0.003 if optimal nine point stencils are used. Numerical results for two and three dimensional model problems are presented, together with a two level analysis explaining these results.

  6. Operator induced multigrid algorithms using semirefinement

    NASA Technical Reports Server (NTRS)

    Decker, Naomi Henderson; Van Rosendale, John

    1989-01-01

    A variant of multigrid, based on zebra relaxation, and a new family of restriction/prolongation operators is described. Using zebra relaxation in combination with an operator-induced prolongation leads to fast convergence, since the coarse grid can correct all error components. The resulting algorithms are not only fast, but are also robust, in the sense that the convergence rate is insensitive to the mesh aspect ratio. This is true even though line relaxation is performed in only one direction. Multigrid becomes a direct method if an operator-induced prolongation is used, together with the induced coarse grid operators. Unfortunately, this approach leads to stencils which double in size on each coarser grid. The use of an implicit three point restriction can be used to factor these large stencils, in order to retain the usual five or nine point stencils, while still achieving fast convergence. This algorithm achieves a V-cycle convergence rate of 0.03 on Poisson's equation, using 1.5 zebra sweeps per level, while the convergence rate improves to 0.003 if optimal nine point stencils are used. Numerical results for two- and three-dimensional model problems are presented, together with a two level analysis explaining these results.

  7. Projection Classification Based Iterative Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Ruiqiu; Li, Chen; Gao, Wenhua

    2015-05-01

    Iterative algorithms perform well in 3D image reconstruction because they do not require complete projection data. They can be applied to the inspection of BGA-based solder joints, but their convergence speed is low, and the x-ray laminography usually used in this setting produces a poorer reconstructed image than complete-projection methods. This paper explores a projection-classification-based method that separates the object into three parts, i.e. solute, solution and air, and assumes that the reconstruction speed decreases linearly from the solution to the two other parts on both sides. The SART and CAV algorithms are then improved under the proposed idea. Simulation experiments with incomplete projection images indicate the fast convergence speed of the improved iterative algorithms and the effectiveness of the proposed method. The fewer the projection images, the greater the superiority that is found.

  8. Firefly Algorithm for Structural Search.

    PubMed

    Avendaño-Franco, Guillermo; Romero, Aldo H

    2016-07-12

    The problem of computational structure prediction of materials is approached using the firefly (FF) algorithm. Starting from the chemical composition and optionally using prior knowledge of similar structures, the FF method is able to predict not only known stable structures but also a variety of novel competitive metastable structures. This article focuses on the strengths and limitations of the algorithm as a multimodal global searcher. The algorithm has been implemented in software package PyChemia ( https://github.com/MaterialsDiscovery/PyChemia ), an open source python library for materials analysis. We present applications of the method to van der Waals clusters and crystal structures. The FF method is shown to be competitive when compared to other population-based global searchers. PMID:27232694
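
    A minimal firefly-algorithm sketch on a toy objective (not the PyChemia implementation; parameter values are illustrative) shows the core update: dimmer fireflies move toward brighter ones with an attractiveness that decays with distance, plus a shrinking random walk.

```python
import numpy as np

def firefly_minimize(f, bounds, n=25, iters=200, beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Minimal firefly-algorithm sketch for minimizing f over the box `bounds`."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n, len(lo)))
    fitness = np.apply_along_axis(f, 1, x)
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if fitness[j] < fitness[i]:                  # j is brighter than i
                    beta = beta0 * np.exp(-gamma * np.sum((x[i] - x[j]) ** 2))
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(len(lo)) - 0.5)
                    x[i] = np.clip(x[i], lo, hi)
                    fitness[i] = f(x[i])
        alpha *= 0.98                                         # cool the random walk
    best = int(np.argmin(fitness))
    return x[best], fitness[best]

sphere = lambda v: float(np.sum(v ** 2))
print(firefly_minimize(sphere, bounds=[(-5.0, 5.0)] * 3))
```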

  9. Seamless Merging of Hypertext and Algorithm Animation

    ERIC Educational Resources Information Center

    Karavirta, Ville

    2009-01-01

    Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…

  10. Computational Algorithms for Device-Circuit Coupling

    SciTech Connect

    KEITER, ERIC R.; HUTCHINSON, SCOTT A.; HOEKSTRA, ROBERT J.; RANKIN, ERIC LAMONT; RUSSO, THOMAS V.; WATERS, LON J.

    2003-01-01

    Circuit simulation tools (e.g., SPICE) have become invaluable in the development and design of electronic circuits. Similarly, device-scale simulation tools (e.g., DaVinci) are commonly used in the design of individual semiconductor components. Some problems, such as single-event upset (SEU), require the fidelity of a mesh-based device simulator but are only meaningful when dynamically coupled with an external circuit. For such problems a mixed-level simulator is desirable, but the two types of simulation generally have different (sometimes conflicting) numerical requirements. To address these considerations, we have investigated variations of the two-level Newton algorithm, which preserves tight coupling between the circuit and the partial differential equations (PDE) device, while optimizing the numerics for both.

  11. Automated DNA Base Pair Calling Algorithm

    1999-07-07

    The procedure solves the problem of calling the DNA base pair sequence from two channel electropherogram separations in an automated fashion. The core of the program involves a peak picking algorithm based upon first, second, and third derivative spectra for each electropherogram channel, signal levels as a function of time, peak spacing, base pair signal to noise sequence patterns, frequency vs ratio of the two channel histograms, and confidence levels generated during the run. The ratios of the two channels at peak centers can be used to accurately and reproducibly determine the base pair sequence. A further enhancement is a novel Gaussian deconvolution used to determine the peak heights used in generating the ratio.

  12. Algorithmic requirements for swarm intelligence in differently coupled collective systems.

    PubMed

    Stradner, Jürgen; Thenius, Ronald; Zahadat, Payam; Hamann, Heiko; Crailsheim, Karl; Schmickl, Thomas

    2013-05-01

    Swarm systems are based on intermediate connectivity between individuals and dynamic neighborhoods. In natural swarms self-organizing principles bring their agents to that favorable level of connectivity. They serve as interesting sources of inspiration for control algorithms in swarm robotics on the one hand, and in modular robotics on the other hand. In this paper we demonstrate and compare a set of bio-inspired algorithms that are used to control the collective behavior of swarms and modular systems: BEECLUST, AHHS (hormone controllers), FGRN (fractal genetic regulatory networks), and VE (virtual embryogenesis). We demonstrate how such bio-inspired control paradigms bring their host systems to a level of intermediate connectivity, which delivers sufficient robustness to these systems for collective decentralized control. In parallel, these algorithms allow sufficient volatility of shared information within these systems to help prevent local optima and deadlock situations, thereby keeping those systems flexible and adaptive in dynamic non-deterministic environments. PMID:23805030

  13. Shape determination and placement algorithms for hierarchical integrated circuit layout

    NASA Astrophysics Data System (ADS)

    Slutz, E. A.

    Algorithms for the automatic layout of integrated circuits are presented. The algorithms use a hierarchical decomposition of the circuit structure. Since this reduces the complexity of the design, it is an aid to the designer as well as the means of making the automated approach to layout possible. The layout method consists of two phases: a top-down phase during which the shapes of the components at each level are determined, followed by a bottom-up phase where a final placement and routing for each level is computed. The data structure used to model the chip surface is central to the algorithms. This data structure is presented along with the alternative structures. Four basic operations of adding components, deleting components, sizing, and building the structure for a given placement are described. A file format for capturing integrated circuit design information is also described.

  14. Algorithmic requirements for swarm intelligence in differently coupled collective systems

    PubMed Central

    Stradner, Jürgen; Thenius, Ronald; Zahadat, Payam; Hamann, Heiko; Crailsheim, Karl; Schmickl, Thomas

    2013-01-01

    Swarm systems are based on intermediate connectivity between individuals and dynamic neighborhoods. In natural swarms self-organizing principles bring their agents to that favorable level of connectivity. They serve as interesting sources of inspiration for control algorithms in swarm robotics on the one hand, and in modular robotics on the other hand. In this paper we demonstrate and compare a set of bio-inspired algorithms that are used to control the collective behavior of swarms and modular systems: BEECLUST, AHHS (hormone controllers), FGRN (fractal genetic regulatory networks), and VE (virtual embryogenesis). We demonstrate how such bio-inspired control paradigms bring their host systems to a level of intermediate connectivity, which delivers sufficient robustness to these systems for collective decentralized control. In parallel, these algorithms allow sufficient volatility of shared information within these systems to help prevent local optima and deadlock situations, thereby keeping those systems flexible and adaptive in dynamic non-deterministic environments. PMID:23805030

  15. Analysis of multigrid algorithms for nonsymmetric and indefinite elliptic problems

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, J.

    1988-10-01

    We prove some new estimates for the convergence of multigrid algorithms applied to nonsymmetric and indefinite elliptic boundary value problems. We provide results for the so-called 'symmetric' multigrid schemes. We show that for the variable V-cycle and the W-cycle schemes, multigrid algorithms with any amount of smoothing on the finest grid converge at a rate that is independent of the number of levels or unknowns, provided that the initial grid is sufficiently fine. We show that the V-cycle algorithm also converges (under appropriate assumptions on the coarsest grid) but at a rate which may deteriorate as the number of levels increases. This deterioration for the V-cycle may occur even in the case of full elliptic regularity. Finally, the results of numerical experiments are given which illustrate the convergence behavior suggested by the theory.

  16. Wind farm optimization using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Ituarte-Villarreal, Carlos M.

    In recent years, the wind power industry has focused its efforts on solving the Wind Farm Layout Optimization (WFLO) problem. Wind resource assessment is a pivotal step in optimizing the wind-farm design and siting and, in determining whether a project is economically feasible or not. In the present work, three (3) different optimization methods are proposed for the solution of the WFLO: (i) A modified Viral System Algorithm applied to the optimization of the proper location of the components in a wind-farm to maximize the energy output given a stated wind environment of the site. The optimization problem is formulated as the minimization of energy cost per unit produced and applies a penalization for the lack of system reliability. The viral system algorithm utilized in this research solves three (3) well-known problems in the wind-energy literature; (ii) a new multiple objective evolutionary algorithm to obtain optimal placement of wind turbines while considering the power output, cost, and reliability of the system. The algorithm presented is based on evolutionary computation and the objective functions considered are the maximization of power output, the minimization of wind farm cost and the maximization of system reliability. The final solution to this multiple objective problem is presented as a set of Pareto solutions and, (iii) A hybrid viral-based optimization algorithm adapted to find the proper component configuration for a wind farm with the introduction of the universal generating function (UGF) analytical approach to discretize the different operating or mechanical levels of the wind turbines in addition to the various wind speed states. The proposed methodology considers the specific probability functions of the wind resource to describe their proper behaviors to account for the stochastic comportment of the renewable energy components, aiming to increase their power output and the reliability of these systems. The developed heuristic considers a

  17. HEATR project: ATR algorithm parallelization

    NASA Astrophysics Data System (ADS)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPC's for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model- based and training-based (template-based) arena in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) Overall structure of the HEATR project, (3) Preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) Project management issues and lessons learned.

  18. Decryption of pure-position permutation algorithms.

    PubMed

    Zhao, Xiao-Yu; Chen, Gang; Zhang, Dan; Wang, Xiao-Hong; Dong, Guang-Chang

    2004-07-01

    Pure position permutation image encryption algorithms, which are commonly used for image encryption and are investigated in this work, are unfortunately frail under known-text attack. In view of the weakness of pure position permutation algorithms, we put forward an effective decryption algorithm for all pure-position permutation algorithms. First, a summary of the pure position permutation image encryption algorithms is given by introducing the concept of ergodic matrices. Then, by using probability theory and algebraic principles, the decryption probability of pure-position permutation algorithms is verified theoretically; next, by defining the operation system of fuzzy ergodic matrices, we improve a specific decryption algorithm. Finally, some simulation results are shown. PMID:15495308

  19. Old And New Algorithms For Toeplitz Systems

    NASA Astrophysics Data System (ADS)

    Brent, Richard P.

    1988-02-01

    Toeplitz linear systems and Toeplitz least squares problems commonly arise in digital signal processing. In this paper we survey some old, "well known" algorithms and some recent algorithms for solving these problems. We concentrate our attention on algorithms which can be implemented efficiently on a variety of parallel machines (including pipelined vector processors and systolic arrays). We distinguish between algorithms which require inner products, and algorithms which avoid inner products, and thus are better suited to parallel implementation on some parallel architectures. Finally, we mention some "asymptotically fast" O(n(log n)^2) algorithms and compare them with O(n^2) algorithms.
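
    As a concrete point of reference for the classical O(n^2) methods surveyed, SciPy's Levinson-recursion Toeplitz solver can be cross-checked against a dense solve (an illustrative usage, not code from the paper):

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# First column of a symmetric positive-definite Toeplitz matrix, and a right-hand side.
c = np.array([4.0, 2.0, 1.0, 0.5, 0.25])
b = np.array([1.0, -1.0, 2.0, 0.0, 3.0])

x_fast = solve_toeplitz(c, b)                 # Levinson-type O(n^2) solver
x_dense = np.linalg.solve(toeplitz(c), b)     # generic O(n^3) dense solve
print(np.allclose(x_fast, x_dense))           # True
```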

  20. A generalized memory test algorithm

    NASA Technical Reports Server (NTRS)

    Milner, E. J.

    1982-01-01

    A general algorithm for testing digital computer memory is presented. The test checks that (1) every bit can be cleared and set in each memory word, and (2) bits are not erroneously cleared and/or set elsewhere in memory at the same time. The algorithm can be applied to any size memory block and any size memory word. It is concise and efficient, requiring very few cycles through memory. For example, a test of 16-bit-word-size memory requires only 384 cycles through memory. Approximately 15 seconds were required to test a 32K block of such memory, using a microcomputer having a cycle time of 133 nanoseconds.
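
    The two checks can be illustrated with a simple simulated-memory sketch (this is not the paper's cycle-optimal algorithm, which needs far fewer passes; it only demonstrates what is being verified):

```python
def memory_test(mem, word_bits=16):
    """Illustrative memory test on a simulated block of words.

    Passes 1-2: complementary checkerboard patterns verify that every bit of
    every word can be both set and cleared.  Pass 3: an address-unique pattern
    checks that writing one word does not disturb any other word.
    """
    mask = (1 << word_bits) - 1
    checker = 0x5555555555555555 & mask
    for pattern in (checker, ~checker & mask):
        for addr in range(len(mem)):
            mem[addr] = pattern
        for addr in range(len(mem)):
            assert mem[addr] == pattern, f"stuck bit at {addr:#x}"
    for addr in range(len(mem)):
        mem[addr] = addr & mask
    for addr in range(len(mem)):
        assert mem[addr] == (addr & mask), f"disturbed word at {addr:#x}"
    return True

print(memory_test([0] * 1024))   # simulated 1K x 16-bit memory block
```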

  1. Squint mode SAR processing algorithms

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Jin, M.; Curlander, J. C.

    1989-01-01

    The unique characteristics of a spaceborne SAR (synthetic aperture radar) operating in a squint mode include large range walk and large variation in the Doppler centroid as a function of range. A pointing control technique to reduce the Doppler drift and a new processing algorithm to accommodate large range walk are presented. Simulations of the new algorithm for squint angles up to 20 deg and look angles up to 44 deg for the Earth Observing System (Eos) L-band SAR configuration demonstrate that it is capable of maintaining the resolution broadening within 20 percent and the ISLR within a fraction of a decibel of the theoretical value.

  2. Fast algorithms for transport models

    SciTech Connect

    Manteuffel, T.A.

    1992-12-01

    The objective of this project is the development of numerical solution techniques for deterministic models of the transport of neutral and charged particles and the demonstration of their effectiveness in both a production environment and on advanced architecture computers. The primary focus is on various versions of the linear Boltzmann equation. These equations are fundamental in many important applications. This project is an attempt to integrate the development of numerical algorithms with the process of developing production software. A major thrust of this project will be the implementation of these algorithms on advanced architecture machines that reside at the Advanced Computing Laboratory (ACL) at Los Alamos National Laboratory (LANL).

  3. ALGORITHM DEVELOPMENT FOR SPATIAL OPERATORS.

    USGS Publications Warehouse

    Claire, Robert W.

    1984-01-01

    An approach is given that develops spatial operators about the basic geometric elements common to spatial data structures. In this fashion, a single set of spatial operators may be accessed by any system that reduces its operands to such basic generic representations. Algorithms based on this premise have been formulated to perform operations such as separation, overlap, and intersection. Moreover, this generic approach is well suited for algorithms that exploit concurrent properties of spatial operators. The results may provide a framework for a geometry engine to support fundamental manipulations within a geographic information system.

  4. Born approximation, scattering, and algorithm

    NASA Astrophysics Data System (ADS)

    Martinez, Alex; Hu, Mengqi; Gu, Haicheng; Qiao, Zhijun

    2015-05-01

    In the past few decades, many imaging algorithms were designed assuming the absence of multiple scattering. Recently, we discussed an algorithm for removing high order scattering components from collected data. This paper is a continuation of our previous work. First, we investigate the current state of multiple scattering in SAR. Then, we revise our method and test it. Given an estimate of our target reflectivity, we compute the multiple scattering effects in the target region for various frequencies. Furthermore, we propagate this energy through free space towards our antenna, and remove it from the collected data.

  5. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
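
    Activity selection is the standard example: among the activities compatible with the current partial schedule, the one finishing earliest dominates the rest, since any other choice can be exchanged for it without reducing the number of scheduled activities. A small illustrative sketch (not the paper's synthesis framework):

```python
def select_activities(intervals):
    """Greedy activity selection, justified by a dominance (exchange) argument."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:          # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]))
# -> [(1, 4), (5, 7), (8, 11)]
```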

  6. Cortisol level

    MedlinePlus

    The cortisol blood test measures the level of cortisol in the blood. Cortisol is a ... in the morning. This is important, because cortisol level varies throughout the day. You may be asked ...

  7. Triglyceride level

    MedlinePlus

    The triglyceride level is a blood test to measure the amount ...

  8. Two Algorithms for Processing Electronic Nose Data

    NASA Technical Reports Server (NTRS)

    Young, Rebecca; Linnell, Bruce

    2007-01-01

    Two algorithms for processing the digitized readings of electronic noses, and computer programs to implement the algorithms, have been devised in a continuing effort to increase the utility of electronic noses as means of identifying airborne compounds and measuring their concentrations. One algorithm identifies the two vapors in a two-vapor mixture and estimates the concentration of each vapor (in principle, this algorithm could be extended to more than two vapors). The other algorithm identifies a single vapor and estimates its concentration.

  9. Blind Alley Aware ACO Routing Algorithm

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Masaya; Otani, Kazuo

    2010-10-01

    The routing problem arises in various engineering fields and is studied by many researchers. In this paper, we propose a new routing algorithm based on Ant Colony Optimization. The proposed algorithm introduces the tabu search mechanism to escape blind alleys. Thus, the proposed algorithm can find the shortest route even if the map data contain blind alleys. Experiments using map data demonstrate its effectiveness in comparison with Dijkstra's algorithm, the most popular conventional routing algorithm.

  10. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 hypercube with 64 nodes. Interestingly, the sequential implementation on one node shows that, if the problem is split properly, the algorithm converges much faster than without splitting.

  11. PPP Sliding Window Algorithm and Its Application in Deformation Monitoring.

    PubMed

    Song, Weiwei; Zhang, Rui; Yao, Yibin; Liu, Yanyan; Hu, Yuming

    2016-01-01

    Compared with the double-difference relative positioning method, the precise point positioning (PPP) algorithm can avoid the selection of a static reference station, directly measure the three-dimensional position changes at the observation site, and exhibit superiority in a variety of deformation monitoring applications. However, because of the influence of various observing errors, the accuracy of PPP is generally at the cm-dm level, which cannot meet the requirements of high precision deformation monitoring. For most monitoring applications, the observation stations remain stationary, which can be provided as a priori constraint information. In this paper, a new PPP algorithm based on a sliding window was proposed to improve the positioning accuracy. Firstly, data from an IGS tracking station were processed using both the traditional and the new PPP algorithm; the results showed that the new algorithm can effectively improve positioning accuracy, especially in the elevation direction. Then, an earthquake simulation platform was used to simulate an earthquake event; the results illustrated that the new algorithm can effectively detect the vibration changes of a reference station during an earthquake. Finally, experimental results from the observed Wenchuan earthquake showed that the new algorithm is feasible for monitoring real earthquakes and providing early-warning alerts. PMID:27241172
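
    The stationarity constraint behind the sliding window can be illustrated with a hedged sketch (not the paper's PPP filter; window length and alarm threshold are invented): per-epoch coordinates are averaged within each window, and a jump between successive window means flags a possible displacement.

```python
import numpy as np

def sliding_window_positions(epochs, window=30, alarm=0.05):
    """Window means of per-epoch PPP coordinates plus simple displacement flags.

    `epochs` is an (N, 3) array of per-epoch coordinates in metres.  Within each
    window the station is assumed stationary, so the window mean is the
    constrained estimate; a jump between successive means larger than `alarm`
    metres raises a flag.
    """
    means, flags = [], []
    for start in range(0, len(epochs) - window + 1, window):
        means.append(epochs[start:start + window].mean(axis=0))
        if len(means) > 1:
            flags.append(bool(np.linalg.norm(means[-1] - means[-2]) > alarm))
    return np.array(means), flags

# Simulated series with an 8 cm co-seismic offset in the height component.
rng = np.random.default_rng(0)
coords = rng.normal(0.0, 0.01, size=(300, 3))
coords[150:, 2] += 0.08
means, flags = sliding_window_positions(coords)
print(flags)   # the single True entry marks the window pair straddling the offset
```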

  12. Auto-focus algorithm based on statistical blur estimation

    NASA Astrophysics Data System (ADS)

    Kulkarni, Prajit

    2013-03-01

    Conventional auto-focus techniques in movable-lens camera systems use a measure of image sharpness to determine the lens position that brings the scene into focus. This paper presents a novel wavelet-domain approach to determine the position of best focus. In contrast to current techniques, the proposed algorithm estimates the level of blur in the captured image at each lens position. Image blur is quantified by fitting a Generalized Gaussian Density (GGD) curve to a high-pass version of the image using second-order statistics. The system then moves the lens to the position that yields the least measure of image blur. The algorithm overcomes shortcomings of sharpness-based approaches, namely, the application of large band-pass filters, sensitivity to image noise and need for calibration under different imaging conditions. Since noise has no effect on the proposed blur metric, the algorithm works with a short filter and is devoid of parameter tuning. Furthermore, the algorithm could be simplified to use a single high-pass filter to reduce complexity. These advantages, along with the optimization presented in the paper, make the proposed algorithm very attractive for hardware implementation on cell phones. Experiments prove that the algorithm performs well in the presence of noise as well as resolution and data scaling.
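
    A hedged sketch of the blur-metric idea (not the paper's implementation, which works on its own wavelet-domain high-pass and mapping): the GGD shape parameter of a high-pass version of the image is estimated by second-order moment matching, and the lens position with the least estimated blur would then be selected.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter
from scipy.optimize import brentq
from scipy.special import gamma

def ggd_shape(coeffs):
    """Moment-matching estimate of the GGD shape parameter of `coeffs`.

    Uses rho = E[x^2] / E[|x|]^2, which for a zero-mean GGD with shape b equals
    Gamma(1/b) * Gamma(3/b) / Gamma(2/b)^2.  The root bracket assumes rho lies
    in the range typical of high-pass image data (roughly 1.4 to 200).
    """
    x = coeffs.ravel() - coeffs.mean()
    rho = np.mean(x ** 2) / (np.mean(np.abs(x)) ** 2 + 1e-12)
    f = lambda b: gamma(1.0 / b) * gamma(3.0 / b) / gamma(2.0 / b) ** 2 - rho
    return brentq(f, 0.1, 10.0)

def blur_score(image):
    """Blur score = fitted GGD shape of a simple high-pass version of the image."""
    laplacian = np.array([[0.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 0.0]])
    return ggd_shape(convolve(image.astype(float), laplacian))

# A sharp, edge-sparse test image and a defocused copy: the sharper image yields
# heavier-tailed high-pass coefficients, i.e. a smaller fitted shape parameter.
rng = np.random.default_rng(0)
sharp = np.kron(rng.uniform(size=(16, 16)), np.ones((16, 16)))
print(blur_score(sharp), blur_score(gaussian_filter(sharp, sigma=3.0)))
```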

  13. PPP Sliding Window Algorithm and Its Application in Deformation Monitoring

    NASA Astrophysics Data System (ADS)

    Song, Weiwei; Zhang, Rui; Yao, Yibin; Liu, Yanyan; Hu, Yuming

    2016-05-01

    Compared with the double-difference relative positioning method, the precise point positioning (PPP) algorithm can avoid the selection of a static reference station, directly measure the three-dimensional position changes at the observation site, and exhibit superiority in a variety of deformation monitoring applications. However, because of the influence of various observing errors, the accuracy of PPP is generally at the cm-dm level, which cannot meet the requirements of high precision deformation monitoring. For most monitoring applications, the observation stations remain stationary, which can be provided as a priori constraint information. In this paper, a new PPP algorithm based on a sliding window was proposed to improve the positioning accuracy. Firstly, data from an IGS tracking station were processed using both the traditional and the new PPP algorithm; the results showed that the new algorithm can effectively improve positioning accuracy, especially in the elevation direction. Then, an earthquake simulation platform was used to simulate an earthquake event; the results illustrated that the new algorithm can effectively detect the vibration changes of a reference station during an earthquake. Finally, experimental results from the observed Wenchuan earthquake showed that the new algorithm is feasible for monitoring real earthquakes and providing early-warning alerts.

  14. Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms

    PubMed Central

    Pacheco, Maria P.; Pfau, Thomas; Sauter, Thomas

    2016-01-01

    Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context specific reconstruction based on generic genome scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context specific reconstruction algorithms were published in the last 10 years, only a fraction of them are suitable for model building based on human high-throughput data. Besides other reasons, this might be due to problems arising from the limitation to only one metabolic target function or from arbitrary thresholding. This review describes and analyses common validation methods used for testing model building algorithms. Two major methods can be distinguished: consistency testing and comparison based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility of distinguishing between the signal and the background of non-specific binding of probes in a microarray experiment, and with whether distinct sets of input expressed genes corresponding to, e.g., different tissues yield distinct models. The latter covers methods comparing sets of functionalities, comparison with existing networks, or comparison with additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms. PMID:26834640

  15. PPP Sliding Window Algorithm and Its Application in Deformation Monitoring

    PubMed Central

    Song, Weiwei; Zhang, Rui; Yao, Yibin; Liu, Yanyan; Hu, Yuming

    2016-01-01

    Compared with the double-difference relative positioning method, the precise point positioning (PPP) algorithm avoids the selection of a static reference station, directly measures the three-dimensional position changes at the observation site, and shows advantages in a variety of deformation monitoring applications. However, because of the influence of various observation errors, the accuracy of PPP is generally at the cm-dm level, which cannot meet the requirements of high-precision deformation monitoring. In most monitoring applications, the observation stations remain stationary, which can be used as a priori constraint information. In this paper, a new PPP algorithm based on a sliding window is proposed to improve the positioning accuracy. Firstly, data from an IGS tracking station were processed using both the traditional and the new PPP algorithm; the results showed that the new algorithm effectively improves positioning accuracy, especially in the elevation direction. Then, an earthquake simulation platform was used to simulate an earthquake event; the results illustrated that the new algorithm can effectively detect the vibration of a reference station during an earthquake. Finally, experimental results from the Wenchuan earthquake showed that the new algorithm is feasible for monitoring real earthquakes and providing early-warning alerts. PMID:27241172

  16. PSLQ: An Algorithm to Discover Integer Relations

    SciTech Connect

    Bailey, David H.; Borwein, J. M.

    2009-04-03

    Let x = (x{sub 1}, x{sub 2}, {hor_ellipsis}, x{sub n}) be a vector of real or complex numbers. x is said to possess an integer relation if there exist integers a{sub i}, not all zero, such that a{sub 1}x{sub 1} + a{sub 2}x{sub 2} + {hor_ellipsis} + a{sub n}x{sub n} = 0. By an integer relation algorithm, we mean a practical computational scheme that can recover the vector of integers a{sub i}, if it exists, or can produce bounds within which no integer relation exists. As we will see in the examples below, an integer relation algorithm can be used to recognize a computed constant in terms of a formula involving known constants, or to discover an underlying relation between quantities that can be computed to high precision. At the present time, the most effective algorithm for integer relation detection is the 'PSLQ' algorithm of mathematician-sculptor Helaman Ferguson [10, 4]. Some efficient 'multi-level' implementations of PSLQ, as well as a variant of PSLQ that is well-suited for highly parallel computer systems, are given in [4]. PSLQ constructs a sequence of integer-valued matrices B{sub n} that reduces the vector y = xB{sub n}, until either the relation is found (as one of the columns of B{sub n}), or else precision is exhausted. At the same time, PSLQ generates a steadily growing bound on the size of any possible relation. When a relation is found, the size of the smallest entry of the vector y abruptly drops to roughly 'epsilon' (i.e. 10{sup -p}, where p is the number of digits of precision). The size of this drop can be viewed as a 'confidence level' that the relation is real and not merely a numerical artifact - a drop of 20 or more orders of magnitude almost always indicates a real relation. Very high precision arithmetic must be used in PSLQ. If one wishes to recover a relation of length n, with coefficients of maximum size d digits, then the input vector x must be specified to at least nd digits, and one must employ nd-digit floating-point arithmetic. Maple and
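
    For readers who want to experiment with integer relation detection, the mpmath library ships a PSLQ implementation (mpmath.pslq); the snippet below uses it to recognize log(6) as log(2) + log(3), with the working precision set generously in the spirit of the nd-digit rule of thumb above.

```python
from mpmath import mp, log, pslq

mp.dps = 60                          # work with plenty of digits of precision
x = [log(2), log(3), log(6)]         # log(6) = log(2) + log(3)
relation = pslq(x)                   # e.g. [1, 1, -1]: 1*log2 + 1*log3 - 1*log6 = 0
print(relation)
```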

  17. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    Methods for development of logic design together with algorithms for failure testing, a method for design of logic for ultra-large-scale integration, extension of quantum calculus to describe the functional behavior of a mechanism component-by-component and to computer tests for failures in the mechanism using the diagnosis algorithm, and the development of an algorithm for the multi-output 2-level minimization problem are discussed.

  18. Decoding the brain's algorithm for categorization from its neural implementation.

    PubMed

    Mack, Michael L; Preston, Alison R; Love, Bradley C

    2013-10-21

    Acts of cognition can be described at different levels of analysis: what behavior should characterize the act, what algorithms and representations underlie the behavior, and how the algorithms are physically realized in neural activity [1]. Theories that bridge levels of analysis offer more complete explanations by leveraging the constraints present at each level [2-4]. Despite the great potential for theoretical advances, few studies of cognition bridge levels of analysis. For example, formal cognitive models of category decisions accurately predict human decision making [5, 6], but whether model algorithms and representations supporting category decisions are consistent with underlying neural implementation remains unknown. This uncertainty is largely due to the hurdle of forging links between theory and brain [7-9]. Here, we tackle this critical problem by using brain response to characterize the nature of mental computations that support category decisions to evaluate two dominant, and opposing, models of categorization. We found that brain states during category decisions were significantly more consistent with latent model representations from exemplar [5] rather than prototype theory [10, 11]. Representations of individual experiences, not the abstraction of experiences, are critical for category decision making. Holding models accountable for behavior and neural implementation provides a means for advancing more complete descriptions of the algorithms of cognition. PMID:24094852
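
    As a purely illustrative contrast between the two model classes compared in this record (with made-up stimuli and parameters), an exemplar model sums similarity over every stored item, while a prototype model compares the probe only to the category mean:

```python
import numpy as np

def similarity(x, y, c=2.0):
    return np.exp(-c * np.linalg.norm(x - y, ord=1))      # exponential-decay similarity

def exemplar_evidence(probe, exemplars):
    return sum(similarity(probe, e) for e in exemplars)   # sum over stored experiences

def prototype_evidence(probe, exemplars):
    return similarity(probe, exemplars.mean(axis=0))      # compare to the abstraction

cat_a = np.array([[0.1, 0.2], [0.2, 0.1], [0.0, 0.3]])
cat_b = np.array([[0.9, 0.8], [0.8, 1.0], [1.0, 0.9]])
probe = np.array([0.3, 0.2])
p_a = exemplar_evidence(probe, cat_a) / (exemplar_evidence(probe, cat_a)
                                         + exemplar_evidence(probe, cat_b))
```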

  19. 3D-design exploration of CNN algorithms

    NASA Astrophysics Data System (ADS)

    Spaanenburg, Lambert; Malki, Suleyman

    2011-05-01

    Multi-dimensional algorithms are hard to implement on classical platforms. Pipelining may exploit instruction-level parallelism, but not in the presence of simultaneous data; threads optimize only within the given restrictions. Tiled architectures do add a dimension to the solution space. With a large local register store, data parallelism is handled, but only in one dimension. 3-D technologies are meant to add a dimension in the realization. Applied at the device level, they make each computational node smaller. The interconnections become shorter and hence the network will be condensed. Such advantages will be easily lost at higher implementation levels unless 3-D technologies such as multi-cores or chip stacking are also introduced. 3-D technologies scale in space, whereas (partial) reconfiguration scales in time. The optimal selection over the various implementation levels is algorithm dependent. The paper discusses such principles as applied to the scaling of cellular neural networks (CNN). It illustrates how stacking of reconfigurable chips supports many algorithmic requirements in a defect-insensitive manner. Further, the paper explores the potential of chip stacking for multi-modal implementations in a reconfigurable approach to heterogeneous architectures for algorithm domains.

  20. Rapid algorithm prototyping and implementation for power quality measurement

    NASA Astrophysics Data System (ADS)

    Kołek, Krzysztof; Piątek, Krzysztof

    2015-12-01

    This article presents a Model-Based Design (MBD) approach to rapidly implement power quality (PQ) metering algorithms. Power supply quality is a very important aspect of modern power systems and will become even more important in future smart grids. In this case, maintaining the PQ parameters at the desired level will require efficient methods for implementing the metering algorithms. Currently, the development of new, advanced PQ metering algorithms requires new hardware with adequate computational capability and time-intensive, cost-ineffective manual implementations. An alternative, considered here, is an MBD approach. The MBD approach focuses on the modelling and validation of the model by simulation, which is well supported by Computer-Aided Engineering (CAE) packages. This paper presents two algorithms utilized in modern PQ meters: a phase-locked loop based on an Enhanced Phase Locked Loop (EPLL), and the flicker measurement according to the IEC 61000-4-15 standard. The algorithms were chosen because of their complexity and non-trivial development. They were first modelled in the MATLAB/Simulink package, then tested and validated in a simulation environment. The models, in the form of Simulink diagrams, were next used to automatically generate C code. The code was compiled and executed in real-time on the Zynq Xilinx platform that combines a reconfigurable Field Programmable Gate Array (FPGA) with a dual-core processor. The MBD development of PQ algorithms, automatic code generation, and compilation form a rapid algorithm prototyping and implementation path for PQ measurements. The main advantage of this approach is the ability to focus on the design, validation, and testing stages while skipping over implementation issues. The code generation process renders production-ready code that can be easily used on the target hardware. This is especially important when standards for PQ measurement are in constant development, and the PQ issues in emerging smart

  1. The prototype SMOS soil moisture Algorithm

    NASA Astrophysics Data System (ADS)

    Kerr, Y.; Waldteufel, P.; Richaume, P.; Cabot, F.; Wigneron, J. P.; Ferrazzoli, P.; Mahmoodi, A.; Delwart, S.

    2009-04-01

    The Soil Moisture and Ocean Salinity (SMOS) mission is ESA's (European Space Agency) second Earth Explorer Opportunity mission, to be launched in September 2007. It is a joint programme between ESA, CNES (Centre National d'Etudes Spatiales), and CDTI (Centro para el Desarrollo Tecnologico Industrial). SMOS carries a single payload, an L-band 2D interferometric radiometer in the 1400-1427 MHz protected band. This wavelength penetrates well through the atmosphere and hence the instrument probes the Earth surface emissivity. Surface emissivity can then be related to the moisture content in the first few centimeters of soil, and, after some surface roughness and temperature corrections, to the sea surface salinity over ocean. In order to prepare the data use and dissemination, the ground segment will produce level 1 and 2 data. Level 1 will consist mainly of angular brightness temperatures while level 2 will consist of geophysical products. In this context, a group of institutes prepared the soil moisture and ocean salinity Algorithm Theoretical Basis documents (ATBD) to be used to produce the operational algorithm. The consortium of institutes preparing the soil moisture algorithm is led by CESBIO (Centre d'Etudes Spatiales de la BIOsphère) and Service d'Aéronomie and consists of the institutes represented by the authors. The principle of the soil moisture retrieval algorithm is based on an iterative approach which aims at minimizing a cost function given by the sum of the squared weighted differences between measured and modelled brightness temperature (TB) data, for a variety of incidence angles. This is achieved by finding the best-suited set of parameters that drive the direct TB model, e.g. soil moisture (SM) and vegetation characteristics. Despite the simplicity of this principle, the main reason for the complexity of the algorithm is that SMOS "pixels" can correspond to rather large, inhomogeneous surface areas whose contribution to the radiometric
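
    The retrieval principle described above (minimizing a weighted squared misfit between measured and modelled multi-angle brightness temperatures) can be sketched as follows; the forward model, weights, and single-parameter state are placeholders, not the SMOS ATBD formulation.

```python
import numpy as np
from scipy.optimize import least_squares

angles = np.radians([10, 20, 30, 40, 50])          # incidence angles sampled by SMOS-like views

def tb_model(sm, theta):
    """Hypothetical stand-in for the radiative transfer model TB(SM, angle)."""
    return 280.0 - 90.0 * sm * (1.0 + 0.3 * np.sin(theta))

def residuals(params, tb_meas, sigma=1.5):
    sm = params[0]
    return (tb_meas - tb_model(sm, angles)) / sigma  # weighted TB misfit per angle

tb_meas = tb_model(0.25, angles) + np.random.randn(angles.size)   # synthetic observations
fit = least_squares(residuals, x0=[0.1], bounds=([0.0], [0.6]), args=(tb_meas,))
sm_retrieved = fit.x[0]
```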

  2. Algorithm Visualization in Teaching Practice

    ERIC Educational Resources Information Center

    Törley, Gábor

    2014-01-01

    This paper presents the history of algorithm visualization (AV), highlighting teaching-methodology aspects. A combined, two-group pedagogical experiment will be presented as well, which measured the efficiency and the impact on the abstract thinking of AV. According to the results, students, who learned with AV, performed better in the experiment.

  3. Algorithms, complexity, and the sciences

    PubMed Central

    Papadimitriou, Christos

    2014-01-01

    Algorithms, perhaps together with Moore’s law, compose the engine of the information technology revolution, whereas complexity—the antithesis of algorithms—is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal—and therefore less compelling—than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene’s cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution. PMID:25349382

  4. Threshold extended ID3 algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, A. B. Rajesh; Ramesh, C. Phani; Madhusudhan, E.; Padmavathamma, M.

    2012-04-01

    Information exchange over insecure networks needs to provide authentication and confidentiality to the database, which is a significant problem in data mining. In this paper we propose a novel authenticated multiparty ID3 algorithm used to construct a multiparty secret-sharing decision tree for implementation in medical transactions.

  5. Aerocapture Guidance Algorithm Comparison Campaign

    NASA Technical Reports Server (NTRS)

    Rousseau, Stephane; Perot, Etienne; Graves, Claude; Masciarelli, James P.; Queen, Eric

    2002-01-01

    Aerocapture is a promising technique for future human interplanetary missions. The Mars Sample Return mission was initially based on an insertion by aerocapture, and a CNES orbiter, Mars Premier, was developed to demonstrate this concept. Mainly due to budget constraints, aerocapture was cancelled for the French orbiter. Many studies were carried out during the last three years to develop and test different guidance algorithms (APC, EC, TPC, NPC). This work was shared between CNES and NASA, with a fruitful joint working group. To conclude this study, an evaluation campaign was performed to test the different algorithms. The objective was to assess the robustness, accuracy, capability to limit the loads, and complexity of each algorithm. A simulation campaign was specified and performed by CNES, with a similar activity on the NASA side to confirm the CNES results. This evaluation demonstrated that the numerical guidance principle is not competitive compared to the analytical concepts. All the other algorithms are well adapted to guarantee the success of the aerocapture. The TPC appears to be the most robust, the APC the most accurate, and the EC a good compromise.

  6. Adaptive color image watermarking algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Gui; Lin, Qiwei

    2008-03-01

    As a major method for intellectual property protection, digital watermarking techniques have been widely studied and used. However, due to the problems of data volume and color shift, watermarking techniques for color images have been studied less widely, although color images are the principal content in multimedia usage. Considering the characteristics of the Human Visual System (HVS), an adaptive color image watermarking algorithm is proposed in this paper. In this algorithm, the HSI color model is adopted for both the host and the watermark image, the DCT coefficients of the intensity component (I) of the host color image are used for watermark data embedding, and while embedding the watermark the number of embedded bits is adaptively changed with the complexity of the host image. As for the watermark image, preprocessing is applied first, in which the watermark image is decomposed by a two-layer wavelet transformation. At the same time, to enhance the anti-attack ability and security of the watermarking algorithm, the watermark image is scrambled. According to their significance, some watermark bits are selected and others deleted to form the actual embedding data. The experimental results show that the proposed watermarking algorithm is robust to several common attacks and has good perceptual quality at the same time.

  7. Simultaneous stabilization using genetic algorithms

    SciTech Connect

    Benson, R.W.; Schmitendorf, W.E. . Dept. of Mechanical Engineering)

    1991-01-01

    This paper considers the problem of simultaneously stabilizing a set of plants using full state feedback. The problem is converted to a simple optimization problem which is solved by a genetic algorithm. Several examples demonstrate the utility of this method. 14 refs., 8 figs.

  8. Detection Algorithms: FFT vs. KLT

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    Given the vast distances between the stars, we can anticipate that any received SETI signal will be exceedingly weak. How can we hope to extract (or even recognize) such signals buried well beneath the natural background noise with which they must compete? This chapter analyzes, compares, and contrasts the two dominant signal detection algorithms used by SETI scientists to recognize extremely weak candidate signals.

  9. Adaptive protection algorithm and system

    DOEpatents

    Hedrick, Paul [Pittsburgh, PA; Toms, Helen L [Irwin, PA; Miller, Roger M [Mars, PA

    2009-04-28

    An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.

  10. Coagulation algorithms with size binning

    NASA Technical Reports Server (NTRS)

    Statton, David M.; Gans, Jason; Williams, Eric

    1994-01-01

    The Smoluchowski equation describes the time evolution of an aerosol particle size distribution due to aggregation or coagulation. Any algorithm for computerized solution of this equation requires a scheme for describing the continuum of aerosol particle sizes as a discrete set. One standard form of the Smoluchowski equation accomplishes this by restricting the particle sizes to integer multiples of a basic unit particle size (the monomer size). This can be inefficient when particle concentrations over a large range of particle sizes must be calculated. Two algorithms employing a geometric size binning convention are examined: the first assumes that the aerosol particle concentration as a function of size can be considered constant within each size bin; the second approximates the concentration as a linear function of particle size within each size bin. The output of each algorithm is compared to an analytical solution in a special case of the Smoluchowski equation for which an exact solution is known. The range of parameters more appropriate for each algorithm is examined.
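
    For reference, a minimal version of the standard discrete form mentioned above, with particle sizes restricted to integer multiples of the monomer, a constant kernel, and a plain explicit Euler step (the paper's binned schemes replace this uniform size grid with geometric bins):

```python
import numpy as np

def smoluchowski_step(n, dt, K=1.0):
    """n[k] = concentration of (k+1)-mers; constant coagulation kernel K."""
    kmax = len(n)
    dn = np.zeros(kmax)
    for i in range(kmax):
        for j in range(kmax):
            loss = K * n[i] * n[j]
            dn[i] -= loss                    # the i-mer is consumed
            if i + j + 1 < kmax:             # an (i+1)+(j+1)-mer is produced
                dn[i + j + 1] += 0.5 * loss  # factor 1/2 avoids double counting pairs
    return n + dt * dn

n = np.zeros(100)
n[0] = 1.0                                   # start from monomers only
for _ in range(50):
    n = smoluchowski_step(n, dt=0.01)
```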

  11. Nuclear models and exact algorithms

    NASA Astrophysics Data System (ADS)

    Bes, D. R.; Dobaczewski, J.; Draayer, J. P.; Szymański, Z.

    1992-07-01

    Discussion Group E on Nuclear Models and Exact Algorithms received contributions from the following individuals: L. Egido, S. Frauendorf, F. Iachello, P. Ring, H. Sagawa, W. Satula, N. C. Schmeing, M. Vincent, A. J. Zucker. The report that follows is an attempt by the leaders of the discussion to summarize the presentations and to give an impression of the subject matter.

  12. SMAP's Radar OBP Algorithm Development

    NASA Technical Reports Server (NTRS)

    Le, Charles; Spencer, Michael W.; Veilleux, Louise; Chan, Samuel; He, Yutao; Zheng, Jason; Nguyen, Kayla

    2009-01-01

    An approach for algorithm specifications and development is described for SMAP's radar onboard processor with multi-stage demodulation and decimation bandpass digital filter. Point target simulation is used to verify and validate the filter design with the usual radar performance parameters. Preliminary FPGA implementation is also discussed.

  13. Multilevel algorithms for nonlinear optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.

  14. Quartic Rotation Criteria and Algorithms.

    ERIC Educational Resources Information Center

    Clarkson, Douglas B.; Jennrich, Robert I.

    1988-01-01

    Most of the current analytic rotation criteria for simple structure in factor analysis are summarized and identified as members of a general symmetric family of quartic criteria. A unified development of algorithms for orthogonal and direct oblique rotation using arbitrary criteria from this family is presented. (Author/TJH)

  15. Key Concepts in Informatics: Algorithm

    ERIC Educational Resources Information Center

    Szlávi, Péter; Zsakó, László

    2014-01-01

    "The system of key concepts contains the most important key concepts related to the development tasks of knowledge areas and their vertical hierarchy as well as the links of basic key concepts of different knowledge areas." (Vass 2011) One of the most important of these concepts is the algorithm. In everyday life, when learning or…

  16. Biclustering Protein Complex Interactions with a Biclique Finding Algorithm

    SciTech Connect

    Ding, Chris; Zhang, Anne Ya; Holbrook, Stephen

    2006-12-01

    Biclustering has many applications in text mining, web clickstream mining, and bioinformatics. When data entries are binary, the tightest biclusters become bicliques. We propose a flexible and highly efficient algorithm to compute bicliques. We first generalize the Motzkin-Straus formalism for computing the maximal clique from the L{sub 1} constraint to the L{sub p} constraint, which enables us to provide a generalized Motzkin-Straus formalism for computing maximal-edge bicliques. By adjusting parameters, the algorithm can favor biclusters with more rows and fewer columns, or vice versa, thus increasing the flexibility of the targeted biclusters. We then propose an algorithm to solve the generalized Motzkin-Straus optimization problem. The algorithm is provably convergent and has a computational complexity of O(|E|) where |E| is the number of edges. It relies on a matrix vector multiplication and runs efficiently on most current computer architectures. Using this algorithm, we bicluster the yeast protein complex interaction network. We find that biclustering protein complexes at the protein level does not clearly reflect the functional linkage among protein complexes in many cases, while biclustering at the protein domain level can reveal many underlying linkages. We show several new biologically significant results.

  17. Coupled cluster algorithms for networks of shared memory parallel processors

    NASA Astrophysics Data System (ADS)

    Bentz, Jonathan L.; Olson, Ryan M.; Gordon, Mark S.; Schmidt, Michael W.; Kendall, Ricky A.

    2007-05-01

    As the popularity of using SMP systems as the building blocks for high performance supercomputers increases, so too does the need grow for applications that can utilize the multiple levels of parallelism available in clusters of SMPs. This paper presents a dual-layer distributed algorithm, using both shared-memory and distributed-memory techniques to parallelize a very important algorithm (often called the "gold standard") used in computational chemistry, the single and double excitation coupled cluster method with perturbative triples, i.e. CCSD(T). The algorithm is presented within the framework of the GAMESS [M.W. Schmidt, K.K. Baldridge, J.A. Boatz, S.T. Elbert, M.S. Gordon, J.J. Jensen, S. Koseki, N. Matsunaga, K.A. Nguyen, S. Su, T.L. Windus, M. Dupuis, J.A. Montgomery, General atomic and molecular electronic structure system, J. Comput. Chem. 14 (1993) 1347-1363]. (General Atomic and Molecular Electronic Structure System) program suite and the Distributed Data Interface [M.W. Schmidt, G.D. Fletcher, B.M. Bode, M.S. Gordon, The distributed data interface in GAMESS, Comput. Phys. Comm. 128 (2000) 190]. (DDI), however, the essential features of the algorithm (data distribution, load-balancing and communication overhead) can be applied to more general computational problems. Timing and performance data for our dual-level algorithm are presented on several large-scale clusters of SMPs.

  18. Efficient Wear Leveling in NAND Flash Memory

    NASA Astrophysics Data System (ADS)

    Chang, Yuan-Hao; Chang, Li-Pin

    In recent years, flash storage devices such as solid-state drives (SSDs) and flash cards have become a popular choice as replacements for hard disk drives, especially in mobile computing devices and consumer electronics. However, the physical constraints of flash memory pose a lifetime limitation on these storage devices. New technologies for ultra-high-density flash memory such as multilevel-cell (MLC) flash further degrade flash endurance and worsen this lifetime concern. As a result, flash storage devices may experience an unexpectedly short lifespan, especially when they are accessed at high frequencies. In order to enhance the endurance of flash storage devices, various wear leveling algorithms have been proposed to evenly erase blocks of the flash memory so as to prevent wearing out any block excessively. In this chapter, various existing wear leveling algorithms are investigated to point out their design issues and potential problems. Based on this investigation, two efficient wear leveling algorithms (i.e., the evenness-aware algorithm and the dual-pool algorithm) are presented to solve the problems of the existing algorithms while considering the limited computing power and memory space in flash storage devices. The evenness-aware algorithm maintains a bit array to keep track of the distribution of block erases, so as to prevent any cold data from staying in any block for a long period of time. The dual-pool algorithm maintains one hot pool and one cold pool for the blocks that store hot data and cold data, respectively, and the excessively erased blocks in the hot pool are exchanged with the rarely erased blocks in the cold pool to prevent any block from being erased excessively. A series of explanations and analyses shows that these two wear leveling algorithms can evenly distribute block erases over the whole flash memory and thereby enhance its endurance.
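
    A toy rendering of the dual-pool idea (assumed pool sizes, threshold, and driver loop; a real flash translation layer would also migrate the logical data between the swapped blocks):

```python
def dual_pool_erase(counts, hot_pool, cold_pool, block, threshold=16):
    """Record an erase on `block` and swap pool roles when wear diverges too far."""
    counts[block] += 1
    hottest = max(hot_pool, key=counts.get)
    coldest = min(cold_pool, key=counts.get)
    if counts[hottest] - counts[coldest] > threshold:
        hot_pool.remove(hottest); cold_pool.remove(coldest)
        hot_pool.add(coldest); cold_pool.add(hottest)   # worn block now shelters cold data

counts = {b: 0 for b in range(8)}
hot, cold = set(range(4)), set(range(4, 8))
for _ in range(2000):
    victim = min(hot, key=counts.get)        # stand-in for the FTL picking a hot block
    dual_pool_erase(counts, hot, cold, victim)
spread = max(counts.values()) - min(counts.values())   # stays roughly bounded by the threshold
```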

  19. Knowledge-based tracking algorithm

    NASA Astrophysics Data System (ADS)

    Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.

    1990-10-01

    This paper describes the Knowledge-Based Tracking (KBT) algorithm for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower than normal setting to detect low RCS targets. This lower threshold produces a larger than normal false alarm rate. Therefore, additional signal processing including spectral filtering, CFAR and knowledge-based acceptance testing are performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-association out of N-scan rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by an ST-100 Array Processor and VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beamsplitting and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single scan performance with a nominal real time delay of less than one second between illumination and display.

  20. Let's Start Leveling about Leveling

    ERIC Educational Resources Information Center

    Glasswell, Kath; Ford, Michael

    2011-01-01

    In this article, the authors propose a revised way of thinking about reading levels, one that promotes a wider and more flexible view of teacher decision making about the use of leveled texts in classrooms. They share five key principles to consider when looking at the use of instruction that involves matching leveled materials with readers.…

  1. Two Level Parallel Grammatical Evolution

    NASA Astrophysics Data System (ADS)

    Ošmera, Pavel

    This paper describes a Two Level Parallel Grammatical Evolution (TLPGE) that can evolve complete programs using a variable-length linear genome to govern the mapping of a Backus-Naur Form grammar definition. To increase the efficiency of Grammatical Evolution (GE), the influence of backward processing was tested and a second level with differential evolution was added. The significance of backward coding (BC) and a comparison with the standard coding of GE are presented. The new method is based on parallel grammatical evolution (PGE) with a backward processing algorithm, which is further extended with a differential evolution algorithm. Thus a two-level optimization method was formed in an attempt to take advantage of the benefits of both original methods and avoid their difficulties. Both methods used are discussed and the architecture of their combination is described. The application is also discussed, and results on a real-world application are described.

  2. Evaluating and comparing algorithms for respiratory motion prediction.

    PubMed

    Ernst, F; Dürichen, R; Schlaefer, A; Schweikard, A

    2013-06-01

    In robotic radiosurgery, it is necessary to compensate for systematic latencies arising from target tracking and mechanical constraints. This compensation is usually achieved by means of an algorithm which computes the future target position. In most scientific works on respiratory motion prediction, only one or two algorithms are evaluated on a limited amount of very short motion traces. The purpose of this work is to gain more insight into the real world capabilities of respiratory motion prediction methods by evaluating many algorithms on an unprecedented amount of data. We have evaluated six algorithms, the normalized least mean squares (nLMS), recursive least squares (RLS), multi-step linear methods (MULIN), wavelet-based multiscale autoregression (wLMS), extended Kalman filtering, and ε-support vector regression (SVRpred) methods, on an extensive database of 304 respiratory motion traces. The traces were collected during treatment with the CyberKnife (Accuray, Inc., Sunnyvale, CA, USA) and feature an average length of 71 min. Evaluation was done using a graphical prediction toolkit, which is available to the general public, as is the data we used. The experiments show that the nLMS algorithm, which is one of the algorithms currently used in the CyberKnife, is outperformed by all other methods. This is especially true in the case of the wLMS, the SVRpred, and the MULIN algorithms, which perform much better. The nLMS algorithm produces a relative root mean square (RMS) error of 75% or less (i.e., a reduction in error of 25% or more when compared to not doing prediction) in only 38% of the test cases, whereas the MULIN and SVRpred methods reach this level in more than 77%, the wLMS algorithm in more than 84% of the test cases. Our work shows that the wLMS algorithm is the most accurate algorithm and does not require parameter tuning, making it an ideal candidate for clinical implementation. Additionally, we have seen that the structure of a patient's respiratory
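
    A bare-bones normalized LMS predictor in the spirit of the nLMS baseline discussed above (one-sample prediction horizon, arbitrary history length and step size; not the evaluation toolkit's code):

```python
import numpy as np

def nlms_predict(signal, M=10, mu=0.5, eps=1e-6):
    """One-step-ahead normalized LMS prediction of a respiratory trace."""
    w = np.zeros(M)
    pred = np.zeros_like(signal)
    for t in range(M, len(signal)):
        x = signal[t - M:t][::-1]            # most recent sample first
        pred[t] = w @ x
        e = signal[t] - pred[t]              # prediction error at this epoch
        w += mu * e * x / (x @ x + eps)      # normalized LMS weight update
    return pred

t = np.linspace(0, 60, 1500)                 # 25 Hz, 60 s of breathing-like motion
breath = np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.randn(t.size)
pred = nlms_predict(breath)
# Relative RMS: prediction error vs. the error of simply re-using the last sample.
rel_rms = np.sqrt(np.mean((breath[20:] - pred[20:]) ** 2)
                  / np.mean(np.diff(breath[19:]) ** 2))
```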

  3. Sensitivity Analysis for Hierarchical Models Employing "t" Level-1 Assumptions.

    ERIC Educational Resources Information Center

    Seltzer, Michael; Novak, John; Choi, Kilchan; Lim, Nelson

    2002-01-01

    Examines the ways in which level-1 outliers can impact the estimation of fixed effects and random effects in hierarchical models (HMs). Also outlines and illustrates the use of Markov Chain Monte Carlo algorithms for conducting sensitivity analyses under "t" level-1 assumptions, including algorithms for settings in which the degrees of freedom at…

  4. Linear Bregman algorithm implemented in parallel GPU

    NASA Astrophysics Data System (ADS)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) algorithms have poor convergence speed and are thus difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm involves only vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with the traditional CPU implementation of the Bregman algorithm. In addition, we also compare the parallel Bregman algorithm with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. Compared with these two algorithms, the results of this paper show that the parallel Bregman algorithm needs less time and is thus more convenient for real-time object reconstruction, which is important given the fast-growing demand for information technology.
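
    A small CPU-side sketch of the linearized Bregman iteration for sparse recovery (min ||x||_1 subject to Ax = b); the GPU version described above parallelizes the same matrix-vector products and element-wise shrinkage. Parameter values are illustrative.

```python
import numpy as np

def shrink(v, mu):
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)   # soft thresholding

def linearized_bregman(A, b, mu=5.0, delta=None, iters=2000):
    if delta is None:
        delta = 1.0 / np.linalg.norm(A, 2) ** 2            # conservative step size
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v += A.T @ (b - A @ x)                             # accumulate residual gradient
        x = delta * shrink(v, mu)                          # thresholding step
    return x

# Usage: recover a 10-sparse signal from 80 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 256)) / np.sqrt(80)
x_true = np.zeros(256)
x_true[rng.choice(256, 10, replace=False)] = rng.standard_normal(10)
x_hat = linearized_bregman(A, A @ x_true)
```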

  5. Birkhoffian symplectic algorithms derived from Hamiltonian symplectic algorithms

    NASA Astrophysics Data System (ADS)

    Xin-Lei, Kong; Hui-Bin, Wu; Feng-Xiang, Mei

    2016-01-01

    In this paper, we focus on the construction of structure-preserving algorithms for Birkhoffian systems, based on existing symplectic schemes for the Hamiltonian equations. The key to the method is to seek an invertible transformation that reduces the Birkhoffian equations to the Hamiltonian equations. When such a transformation exists, applying the corresponding inverse map to a symplectic discretization of the Hamiltonian equations yields difference schemes that are verified to be Birkhoffian symplectic for the original Birkhoffian equations. To illustrate the operation of the method, we construct several desirable algorithms for the linear damped oscillator and the single pendulum with linear dissipation, respectively. All of them exhibit excellent numerical behavior, especially in preserving conserved quantities. Project supported by the National Natural Science Foundation of China (Grant No. 11272050), the Excellent Young Teachers Program of North China University of Technology (Grant No. XN132), and the Construction Plan for Innovative Research Team of North China University of Technology (Grant No. XN129).

  6. Development of microwave rainfall retrieval algorithm for climate applications

    NASA Astrophysics Data System (ADS)

    KIM, J. H.; Shin, D. B.

    2014-12-01

    With satellite datasets accumulated over decades, satellite-based data can contribute to sustained climate applications. Level-3 products from microwave sensors for climate applications can be obtained from several algorithms. For example, the Microwave Emission brightness Temperature Histogram (METH) algorithm produces level-3 rainfalls directly, whereas the Goddard profiling (GPROF) algorithm first generates instantaneous rainfalls and then a temporal and spatial averaging process leads to level-3 products. The rainfall algorithm developed in this study follows a similar approach of averaging instantaneous rainfalls. However, the algorithm is designed to produce instantaneous rainfalls at an optimal resolution showing reduced non-linearity in brightness temperature (TB)-rain rate (R) relations. It is found that this resolution tends to effectively utilize emission channels whose footprints are relatively larger than those of scattering channels. The algorithm is mainly composed of a-priori databases (DBs) and a Bayesian inversion module. The DB contains massive pairs of simulated microwave TBs and rain rates, obtained from WRF (version 3.4) and RTTOV (version 11.1) simulations. To improve the accuracy and efficiency of the retrieval process, a data mining technique is additionally considered. The entire DB is classified into eight types based on the Köppen climate classification criteria using reanalysis data. Among these sub-DBs, the sub-DB that presents the most similar physical characteristics is selected by considering the thermodynamics of the input data. When the Bayesian inversion is applied to the selected DB, instantaneous rain rates at 6-hour intervals are retrieved. The retrieved monthly mean rainfalls are statistically compared with CMAP and GPCP, respectively.
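
    The Bayesian inversion step can be illustrated schematically as a database-weighted average; the database, channel count, and Gaussian noise model below are placeholders, not the WRF/RTTOV a-priori DB described above.

```python
import numpy as np

def bayesian_rain_rate(tb_obs, tb_db, rr_db, sigma=2.0):
    """tb_obs: (n_channels,); tb_db: (n_profiles, n_channels); rr_db: (n_profiles,)."""
    w = np.exp(-0.5 * np.sum(((tb_db - tb_obs) / sigma) ** 2, axis=1))   # profile weights
    return np.sum(w * rr_db) / (np.sum(w) + 1e-30)                        # weighted mean rain rate

# Synthetic a-priori database: simulated brightness temperatures and rain rates.
tb_db = np.random.uniform(180, 280, size=(5000, 9))        # simulated TBs (K)
rr_db = np.random.gamma(2.0, 1.5, size=5000)                # simulated rain rates (mm/h)
estimate = bayesian_rain_rate(tb_db[0] + np.random.randn(9), tb_db, rr_db)
```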

  7. A real-time implementation of an advanced sensor failure detection, isolation, and accommodation algorithm

    NASA Technical Reports Server (NTRS)

    Delaat, J. C.; Merrill, W. C.

    1983-01-01

    A sensor failure detection, isolation, and accommodation algorithm was developed which incorporates analytic sensor redundancy through software. This algorithm was implemented in a high level language on a microprocessor based controls computer. Parallel processing and state-of-the-art 16-bit microprocessors are used along with efficient programming practices to achieve real-time operation.

  8. Mental Computation or Standard Algorithm? Children's Strategy Choices on Multi-Digit Subtractions

    ERIC Educational Resources Information Center

    Torbeyns, Joke; Verschaffel, Lieven

    2016-01-01

    This study analyzed children's use of mental computation strategies and the standard algorithm on multi-digit subtractions. Fifty-eight Flemish 4th graders of varying mathematical achievement level were individually offered subtractions that either stimulated the use of mental computation strategies or the standard algorithm in one choice and two…

  9. AN ALGORITHM FOR PARALLEL SN SWEEPS ON UNSTRUCTURED MESHES

    SciTech Connect

    S. D. PAUTZ

    2000-12-01

    We develop a new algorithm for performing parallel S{sub n} sweeps on unstructured meshes. The algorithm uses a low-complexity list ordering heuristic to determine a sweep ordering on any partitioned mesh. For typical problems and with ''normal'' mesh partitionings we have observed nearly linear speedups on up to 126 processors. This is an important and desirable result, since although analyses of structured meshes indicate that parallel sweeps will not scale with normal partitioning approaches, we do not observe any severe asymptotic degradation in the parallel efficiency with modest ({le}100) levels of parallelism. This work is a fundamental step in the development of parallel S{sub n} methods.

  10. Shape Optimization of Rubber Bushing Using Differential Evolution Algorithm

    PubMed Central

    2014-01-01

    The objective of this study is to design a rubber bushing with the desired stiffness characteristics in order to achieve the desired ride quality of the vehicle. A differential evolution algorithm-based approach is developed to optimize the rubber bushing by integrating a finite element code, run in batch mode, to compute the objective function values for each generation. Two case studies are given to illustrate the application of the proposed approach. Optimum shape parameters of the 2D bushing model were determined by shape optimization using the differential evolution algorithm. PMID:25276848
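
    The overall loop structure can be sketched as below, with scipy's differential_evolution driving the shape parameters and a toy surrogate standing in for the batch-mode finite element run that would normally return the stiffness; the target value, bounds, and surrogate model are hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution

TARGET_STIFFNESS = 1200.0                      # illustrative target (N/mm)

def fe_stiffness(radius, height):
    """Placeholder for the finite element code run in batch mode."""
    return 900.0 * radius / height             # toy surrogate model

def objective(params):
    radius, height = params
    return (fe_stiffness(radius, height) - TARGET_STIFFNESS) ** 2

result = differential_evolution(objective, bounds=[(10.0, 40.0), (5.0, 30.0)], seed=1)
best_radius, best_height = result.x
```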

  11. A new collage steganographic algorithm using cartoon design

    NASA Astrophysics Data System (ADS)

    Yi, Shuang; Zhou, Yicong; Pun, Chi-Man; Chen, C. L. Philip

    2014-02-01

    Existing collage steganographic methods suffer from a low message-embedding payload. To improve the payload while providing a high level of security protection to messages, this paper introduces a new collage steganographic algorithm using cartoon design. It embeds messages into the least significant bits (LSBs) of color cartoon objects, applies different permutations to each object, and adds the objects to a cartoon cover image to obtain the stego image. Computer simulations and comparisons demonstrate that the proposed algorithm achieves a significantly higher message-embedding capacity than existing collage steganographic methods.
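
    Only the least-significant-bit step is sketched below (the proposed algorithm additionally permutes each cartoon object and composes the collage cover); the array sizes and message are arbitrary.

```python
import numpy as np

def embed_lsb(cover, bits):
    """cover: uint8 image array; bits: iterable of 0/1 message bits."""
    flat = cover.flatten()                                  # flatten() returns a copy
    bits = np.asarray(list(bits), dtype=np.uint8)
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits     # overwrite the LSBs
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    return stego.flatten()[:n_bits] & 1

cover = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, message)
assert np.array_equal(extract_lsb(stego, len(message)), message)
```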

  12. Using Strassen's algorithm to accelerate the solution of linear systems

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Lee, King; Simon, Horst D.

    1990-01-01

    Strassen's algorithm for fast matrix-matrix multiplication has been implemented for matrices of arbitrary shapes on the CRAY-2 and CRAY Y-MP supercomputers. Several techniques have been used to reduce the scratch space requirement for this algorithm while simultaneously preserving a high level of performance. When the resulting Strassen-based matrix multiply routine is combined with some routines from the new LAPACK library, LU decomposition can be performed with rates significantly higher than those achieved by conventional means. We succeeded in factoring a 2048 x 2048 matrix on the CRAY Y-MP at a rate equivalent to 325 MFLOPS.
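
    For reference, a textbook recursive form of Strassen's algorithm for square matrices whose size is a power of two, falling back to a conventional multiply below a crossover size (the CRAY implementation additionally handled arbitrary shapes and managed scratch space explicitly):

```python
import numpy as np

def strassen(A, B, leaf=64):
    n = A.shape[0]
    if n <= leaf:
        return A @ B                                   # conventional multiply at small sizes
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A, B = np.random.rand(256, 256), np.random.rand(256, 256)
assert np.allclose(strassen(A, B), A @ B)
```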

  13. Landsat ecosystem disturbance adaptive processing system (LEDAPS) algorithm description

    USGS Publications Warehouse

    Schmidt, Gail; Jenkerson, Calli; Masek, Jeffrey; Vermote, Eric; Gao, Feng

    2013-01-01

    The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) software was originally developed by the National Aeronautics and Space Administration–Goddard Space Flight Center and the University of Maryland to produce top-of-atmosphere reflectance from Landsat Thematic Mapper and Enhanced Thematic Mapper Plus Level 1 digital numbers and to apply atmospheric corrections to generate a surface-reflectance product. The U.S. Geological Survey (USGS) has adopted the LEDAPS algorithm for producing the Landsat Surface Reflectance Climate Data Record. This report discusses the LEDAPS algorithm, which was implemented by the USGS.

  14. Why is Boris Algorithm So Good?

    SciTech Connect

    Qin, Hong, et al.

    2013-03-03

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this letter, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.

  15. Why is Boris algorithm so good?

    SciTech Connect

    Qin, Hong; Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 ; Zhang, Shuangxi; Xiao, Jianyuan; Liu, Jian; Sun, Yajuan; Tang, William M.

    2013-08-15

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this paper, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.
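
    For completeness, the standard Boris push for a single particle (with the charge-to-mass ratio folded into E and B for brevity); the half-kick / rotation / half-kick structure is what underlies the volume-preserving behaviour discussed in these two records.

```python
import numpy as np

def boris_push(x, v, E, B, dt):
    """Advance position x and velocity v by one step of size dt."""
    v_minus = v + 0.5 * dt * E                # first half electric kick
    t = 0.5 * dt * B                          # rotation vector
    s = 2.0 * t / (1.0 + t @ t)
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)   # magnetic rotation
    v_new = v_plus + 0.5 * dt * E             # second half electric kick
    return x + dt * v_new, v_new

# Usage: gyration in a uniform magnetic field along z.
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    x, v = boris_push(x, v, E, B, dt=0.1)
```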

  16. A semisimultaneous inversion algorithm for SAGE III

    NASA Astrophysics Data System (ADS)

    Ward, Dale M.

    2002-12-01

    The Stratospheric Aerosol and Gas Experiment (SAGE) III instrument was successfully launched into orbit on 10 December 2001. The planned operational species separation inversion algorithm will utilize a stepwise retrieval strategy. This paper presents an alternative, semisimultaneous species separation inversion that simultaneously retrieves all species over user-specified vertical intervals or blocks. By overlapping these vertical blocks, retrieved species profiles over the entire vertical range of the measurements are obtained. The semisimultaneous retrieval approach provides a more straightforward method for evaluating the error coupling that occurs among the retrieved profiles due to various types of input uncertainty. Simulation results are presented to show how the semisimultaneous inversion can enhance understanding of the SAGE III retrieval process. In the future, the semisimultaneous inversion algorithm will be used to help evaluate the results and performance of the operational inversion. Compared to SAGE II, SAGE III will provide expanded and more precise spectral measurements. This alone is shown to significantly reduce the uncertainties in the retrieved ozone, nitrogen dioxide, and aerosol extinction profiles for SAGE III. Additionally, the well-documented concern that SAGE II retrievals are biased by the level of volcanic aerosol is greatly alleviated for SAGE III.

  17. An algorithm for leukaemia immunophenotype pattern recognition.

    PubMed

    Petrovecki, M; Marusić, M; Dezelić, G

    1993-01-01

    Since a leukaemia-specific leucocyte antigen has not been identified to date, the immunological diagnosis of leukaemia is achieved through the application of a wide set of monoclonal antibodies specific for surface markers on leukaemic cells. Thus, the interpretation of leukaemia immunophenotype seems to be a mathematically determined comparison of 'what we found' and 'what we know' about it. The objective of this study was to establish an algorithm for transformation of empirical rules into mathematical values to achieve proper decisions. Recognition of leukaemia phenotype was performed by comparison of phenotyping data with reference data, followed by scoring of such comparisons. Systematic scoring resulted in the formation of new numerical variables allocated to each state, whereas a most significant variable was described as a complex measure of compatibility. A system of recognized states was described by mathematical variables measuring the confidence of information systems, i.e. maximal, total and relative entropy. The entire algorithm was derived by matrix algebra and coded in a high-level programming language. The list of the states recognized appeared to be especially helpful in differential diagnosis, occasionally pointing to states that had not been in the scientist's mind at the start of the analysis. PMID:8366688

  18. Algorithm design of liquid lens inspection system

    NASA Astrophysics Data System (ADS)

    Hsieh, Lu-Lin; Wang, Chun-Chieh

    2008-08-01

    In the mobile lens domain, glass lenses are often applied where high resolution is required; however, a glass zoom lens must be combined with movable machinery and a voice-coil motor, which imposes space limits on miniaturized designs. With the development of high-level molding component technology, the liquid lens has become a focus of mobile phone and digital camera companies. Liquid lens sets combining a solid optical lens with a driving circuit have replaced the original components. As a result, the volume requirement is decreased to merely 50% of the original design. Besides, with its high focus-adjusting speed, low energy requirement, high durability, and low-cost manufacturing process, the liquid lens shows advantages in the competitive market. In the past, the authors only needed to inspect scrape defects caused by external force on glass lenses. For the liquid lens, the state of four different structural layers must be inspected because of the different design and structure. In this paper, the authors apply machine vision and digital image processing technology to perform inspections in a particular layer according to the needs of users. According to our experimental results, the proposed algorithm can automatically remove the out-of-focus background, extract the region of interest, and find and analyze defects efficiently in the particular layer. In the future, the authors will combine the algorithm with automatic-focus technology to implement inside inspection based on product inspection demands.

  19. Combat Air Identification Fusion Algorithm (CAIFA)

    NASA Astrophysics Data System (ADS)

    Butler, C. A.; Baker, Joni E.; Crowe, John A.; Kierstead, David P.; Mauro, Carl A.

    2003-04-01

    The Combat Air Identification Fusion Algorithm (CAIFA), developed by Daniel H. Wagner, Associates, is a prototype, inferential reasoning algorithm for air combat identification. Bayesian reasoning and updating techniques are used in CAIFA to fuse multi-source identification evidence to provide identity estimates-allegiance, nationality, platform type, and intent-of detected airborne objects in the air battle space, enabling positive and rapid Combat Identification (CID) decisions. CAIFA was developed for the Composite Combat Identification (CCID) project under the Office of Naval Research (ONR) Missile Defense (MD) Future Naval Capability (FNC) program. CAIFA processes identification (ID) attribute evidence generated by surveillance sensors and other information sources over time by updating the identity estimate for each target using Bayesian inference. CAIFA exploits the conditional interdependencies of attribute variables by constructing a context-dependent Bayesian Network (BN). This formulation offers a well-established, consistent approach for evidential reasoning, renders manageable the potentially large CID state space, and provides a flexible and extensible representation to accommodate requirements for model reconfiguration/restructuring. CAIFA enables reasoning across and at different levels of the Air Space Taxonomy.

  20. Speech Enhancement based on Compressive Sensing Algorithm

    NASA Astrophysics Data System (ADS)

    Sulong, Amart; Gunawan, Teddy S.; Khalifa, Othman O.; Chebil, Jalel

    2013-12-01

    Various methods for speech enhancement have been proposed over the years. An accurate speech enhancement design mainly focuses on quality and intelligibility, and the proposed method aims at a high performance level. This paper proposes a novel speech enhancement based on compressive sensing (CS), a new paradigm for acquiring signals that is fundamentally different from uniform-rate digitization followed by compression, often used for transmission or storage. Using CS can reduce the number of degrees of freedom of a sparse/compressible signal by permitting only certain configurations of the large and zero/small coefficients and structured sparsity models. Therefore, CS provides a way of reconstructing a compressed version of the speech in the original signal by taking only a small number of linear and non-adaptive measurements. The performance of the overall algorithm is evaluated in terms of speech quality using informal listening tests and the Perceptual Evaluation of Speech Quality (PESQ). Experimental results show that the CS algorithm performs very well over a wide range of speech tests and gives good performance for speech enhancement, with better noise suppression ability than conventional approaches and without obvious degradation of speech quality.

  1. Systolic algorithms and their implementation

    SciTech Connect

    Kung, H.T.

    1984-01-01

    Very high performance computer systems must rely heavily on parallelism since there are severe physical and technological limits on the ultimate speed of any single processor. The systolic array concept developed in the last several years allows effective use of a very large number of processors in parallel. This article illustrates the basic ideas by reviewing a systolic array design for matrix triangularization and describing its use in the on-the-fly updating of Cholesky decomposition of covariance matrices-a crucial computation in adaptive signal processing. Following this are discussions on issues related to the hardware implementation of systolic algorithms in general, and some guidelines for designing systolic algorithms that will be convenient for implementation. 33 references.

  2. A fast meteor detection algorithm

    NASA Astrophysics Data System (ADS)

    Gural, P.

    2016-01-01

    A low-latency meteor detection algorithm for use with fast steering mirrors was previously developed to track and telescopically follow meteors in real time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets the demanding throughput requirements of a Raspberry Pi while maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing and provides a rich product set of parameterized line-detection metrics. Discussion includes the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trade-offs made for maximum processing throughput, details of the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
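
    The MTP compression mentioned above collapses a block of video frames into a single frame by keeping, for each pixel, its maximum value over time (and optionally the frame index at which it occurred), so a transient meteor streak survives as a bright track that can be thresholded. The data below are synthetic and the threshold rule is a generic illustration, not the module's actual detection logic.

      import numpy as np

      rng = np.random.default_rng(2)
      frames = rng.normal(30.0, 5.0, size=(64, 128, 128))   # synthetic noisy video block

      # Inject a hypothetical moving bright streak across the block.
      for t in range(64):
          frames[t, 40 + t // 4, 20 + t] += 120.0

      # MTP compression: per-pixel maximum (and the frame index at which it occurred).
      maxpixel = frames.max(axis=0)
      maxframe = frames.argmax(axis=0)

      # Simple mean + k*sigma threshold on the compressed frame to flag candidate pixels.
      thresh = maxpixel.mean() + 6.0 * maxpixel.std()
      candidates = np.argwhere(maxpixel > thresh)
      print(f"{len(candidates)} candidate pixels above threshold")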

  3. An NOy* Algorithm for SOLVE

    NASA Technical Reports Server (NTRS)

    Loewenstein, M.; Greenblatt, B. J.; Jost, H.; Podolske, J. R.; Elkins, Jim; Hurst, Dale; Romanashkin, Pavel; Atlas, Elliott; Schauffler, Sue; Donnelly, Steve; Condon, Estelle (Technical Monitor)

    2000-01-01

    De-nitrification and excess re-nitrification were widely observed by ER-2 instruments in the Arctic vortex during SOLVE in winter/spring 2000. Analysis of these events requires knowledge of the initial, pre-vortex state of the sampled air masses. The canonical relationship of NOy to the long-lived tracer N2O observed in the unperturbed stratosphere is generally used for this purpose. In this paper we attempt to establish the current unperturbed NOy:N2O relationship (the NOy* algorithm) using the ensemble of extra-vortex data from in situ instruments flying on the ER-2 and DC-8, and from the Mark IV remote measurements on the OMS balloon. Initial analysis indicates a change in the SOLVE NOy* from the values predicted by the 1994 Northern Hemisphere NOy* algorithm, which was derived from observations in the ASHOE/MAESA campaign.
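
    An NOy* algorithm of this kind is, in essence, an empirical fit of NOy against N2O over unperturbed air masses; evaluating the fit at the N2O measured inside the vortex gives the expected NOy, and the observed deficit quantifies denitrification. The sketch below uses entirely synthetic numbers, and the fit order and coefficients are illustrative rather than the SOLVE values.

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic extra-vortex "unperturbed" observations (ppbv), illustrative only.
      n2o = rng.uniform(150.0, 310.0, 400)
      noy = 20.0 - 0.06 * n2o + 0.5 * rng.standard_normal(400)   # assumed tracer relationship

      # Fit the canonical NOy:N2O relationship (here a 2nd-order polynomial).
      coeffs = np.polyfit(n2o, noy, deg=2)
      noy_star = np.poly1d(coeffs)

      # Apply to in-vortex N2O to get expected NOy*; the deficit indicates denitrification.
      n2o_vortex, noy_vortex = 200.0, 5.5
      print("NOy* =", round(noy_star(n2o_vortex), 2), "observed NOy =", noy_vortex)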

  4. A spectral canonical electrostatic algorithm

    NASA Astrophysics Data System (ADS)

    Webb, Stephen D.

    2016-03-01

    Studying single-particle dynamics over many periods of oscillations is a well-understood problem solved using symplectic integration. Such integration schemes derive their update sequence from an approximate Hamiltonian, guaranteeing that the geometric structure of the underlying problem is preserved. Simulating a self-consistent system over many oscillations can introduce numerical artifacts such as grid heating. This unphysical heating stems from using non-symplectic methods on Hamiltonian systems. With this guidance, we derive an electrostatic algorithm using a discrete form of Hamilton’s principle. The resulting algorithm, a gridless spectral electrostatic macroparticle model, does not exhibit the unphysical heating typical of most particle-in-cell methods. We present results of this algorithm for a two-body problem as an example of its energy- and momentum-conserving properties.
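
    A minimal way to see why a structure-preserving update matters is the leapfrog (kick-drift-kick) integrator below, which keeps the energy of a softened two-charge system bounded over long runs where a naive explicit Euler step would steadily heat it. This is a generic symplectic-integration sketch, not the spectral macroparticle model derived in the paper.

      import numpy as np

      def coulomb_force(x1, x2, q1q2=1.0, eps=1e-3):
          """Softened electrostatic force on particle 1 from particle 2."""
          d = x1 - x2
          return q1q2 * d / (np.linalg.norm(d)**3 + eps)

      # Two unit-mass particles with opposite charges (q1*q2 = -1): a bound pair.
      x1, x2 = np.array([0.5, 0.0]), np.array([-0.5, 0.0])
      v1, v2 = np.array([0.0, 0.5]), np.array([0.0, -0.5])
      dt = 0.01

      for _ in range(10000):
          f = coulomb_force(x1, x2, q1q2=-1.0)
          v1 += 0.5 * dt * f; v2 -= 0.5 * dt * f          # half kick
          x1 += dt * v1;      x2 += dt * v2               # drift
          f = coulomb_force(x1, x2, q1q2=-1.0)
          v1 += 0.5 * dt * f; v2 -= 0.5 * dt * f          # half kick

      # Approximate total energy: should stay near its initial value of about -0.75.
      energy = 0.5 * (v1 @ v1 + v2 @ v2) - 1.0 / np.linalg.norm(x1 - x2)
      print("total energy after 10000 steps:", round(energy, 4))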

  5. Constrained Multiobjective Biogeography Optimization Algorithm

    PubMed Central

    Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems, and experimental results show that CMBOA performs better than or similarly to the classical NSGA-II and IS-MOEA. PMID:25006591

  6. Constrained multiobjective biogeography optimization algorithm.

    PubMed

    Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems, and experimental results show that CMBOA performs better than or similarly to the classical NSGA-II and IS-MOEA. PMID:25006591

  7. Innovations in Lattice QCD Algorithms

    SciTech Connect

    Konstantinos Orginos

    2006-06-25

    Lattice QCD calculations demand a substantial amount of computing power in order to achieve the high precision results needed to better understand the nature of strong interactions, assist experiment to discover new physics, and predict the behavior of a diverse set of physical systems ranging from the proton itself to astrophysical objects such as neutron stars. However, computer power alone is clearly not enough to tackle the calculations we need to be doing today. A steady stream of recent algorithmic developments has made an important impact on the kinds of calculations we can currently perform. In this talk I am reviewing these algorithms and their impact on the nature of lattice QCD calculations performed today.

  8. A possible hypercomputational quantum algorithm

    NASA Astrophysics Data System (ADS)

    Sicard, Andres; Velez, Mario; Ospina, Juan

    2005-05-01

    The term 'hypermachine' denotes any data processing device (theoretical or physically realizable) capable of carrying out tasks that cannot be performed by a Turing machine. We present a possible quantum algorithm for a classically non-computable decision problem, Hilbert's tenth problem; more specifically, we present a possible hypercomputation model based on quantum computation. Our algorithm is inspired by the one proposed by Tien D. Kieu, but we have selected the infinite square well instead of the (one-dimensional) simple harmonic oscillator as the underlying physical system. Our model exploits the quantum adiabatic process and the characteristics of the representation of the dynamical Lie algebra su(1,1) associated to the infinite square well.

  9. A Class Of Iterative Thresholding Algorithms For Real-Time Image Segmentation

    NASA Astrophysics Data System (ADS)

    Hassan, M. H.

    1989-03-01

    Thresholding algorithms are developed for segmenting gray-level images under nonuniform illumination. The algorithms are based on learning models generated from recursive digital filters, which yield continuously varying threshold-tracking functions. A real-time region-growing algorithm, which locates the objects in the image while thresholding, is developed and implemented. The algorithms work in a raster-scan format, making them attractive for real-time image segmentation in situations requiring fast data throughput, such as robot vision and character recognition.
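
    One simple way to realize a continuously varying threshold-tracking function in a raster scan is a first-order recursive (IIR) filter that follows the slowly varying background and flags pixels exceeding it by a margin. The sketch below is a generic illustration of that idea under assumed parameters, not the specific learning models of the paper.

      import numpy as np

      def raster_threshold(image, alpha=0.05, margin=40):
          """Segment a gray-level image in one raster pass.

          A recursive filter b <- (1 - alpha) * b + alpha * pixel tracks the slowly
          varying background; pixels exceeding the track by `margin` are foreground.
          """
          mask = np.zeros(image.shape, dtype=bool)
          background = float(image[0, 0])
          for r in range(image.shape[0]):
              for c in range(image.shape[1]):
                  pix = float(image[r, c])
                  mask[r, c] = pix > background + margin
                  if not mask[r, c]:                  # update the track on background only
                      background = (1 - alpha) * background + alpha * pix
          return mask

      img = np.clip(np.add.outer(np.linspace(0, 80, 64), np.linspace(0, 80, 64)), 0, 255)
      img[20:30, 20:30] += 100                        # bright object on a shaded background
      print(raster_threshold(img).sum(), "foreground pixels")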

  10. Implementation of FFT Algorithm using DSP TMS320F28335 for Shunt Active Power Filter

    NASA Astrophysics Data System (ADS)

    Patel, Pinkal Jashvantbhai; Patel, Rajesh M.; Patel, Vinod

    2016-07-01

    This work presents simulation, analysis and experimental verification of a Fast Fourier Transform (FFT) algorithm for a shunt active power filter based on a three-level inverter. Different types of filters can be used for elimination of harmonics in the power system. In this work, an FFT algorithm for reference current generation is discussed. The FFT control algorithm is verified using PSIM simulation results with a DLL block and C code. Simulation results are compared with experimental results for the FFT algorithm using the DSP TMS320F28335 for the shunt active power filter application.
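
    In such a scheme, the FFT of the measured load current is typically used to isolate the fundamental component; subtracting it from the total current leaves the harmonic content that the shunt inverter must inject. The sketch below uses synthetic currents and ignores the synchronization and windowing details handled on the DSP.

      import numpy as np

      fs, f0, n = 12800.0, 50.0, 256             # sampling rate, fundamental, samples per window
      t = np.arange(n) / fs                      # exactly one 50 Hz cycle (256 samples)

      # Synthetic distorted load current: fundamental plus 5th and 7th harmonics.
      i_load = 10*np.sin(2*np.pi*f0*t) + 2*np.sin(2*np.pi*5*f0*t) + 1.5*np.sin(2*np.pi*7*f0*t)

      spectrum = np.fft.rfft(i_load)
      fundamental_bin = int(round(f0 * n / fs))  # bin index of the 50 Hz component

      # Keep only the fundamental and invert the FFT to get i_1(t).
      fund_only = np.zeros_like(spectrum)
      fund_only[fundamental_bin] = spectrum[fundamental_bin]
      i_fundamental = np.fft.irfft(fund_only, n)

      # Harmonic reference current for the shunt active power filter.
      i_ref = i_load - i_fundamental
      print("residual fundamental in i_ref:", round(np.abs(np.fft.rfft(i_ref))[fundamental_bin], 6))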

  11. A robust jet reconstruction algorithm for high-energy lepton colliders

    NASA Astrophysics Data System (ADS)

    Boronat, M.; Fuster, J.; García, I.; Ros, E.; Vos, M.

    2015-11-01

    We propose a new sequential jet reconstruction algorithm for future lepton colliders at the energy frontier. The Valencia algorithm combines the natural distance criterion for lepton colliders with the greater robustness against backgrounds of algorithms adapted to hadron colliders. Results from a detailed Monte Carlo simulation of ttbar and ZZ production at future linear e+e- colliders (ILC and CLIC), with a realistic level of background overlaid, show that it achieves better performance in the presence of background than the classical algorithms used at previous e+e- colliders.
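
    Algorithms of this family all follow the same sequential-recombination loop: compute a pairwise distance for every pair of particles, merge the closest pair, and repeat until the remaining distances exceed a cut. The sketch below uses the classical Durham-style e+e- distance and an unnormalized cut for brevity; the Valencia distance adds beam-distance terms with exponents beta and gamma that are not reproduced here.

      import numpy as np

      def durham_distance(pi, pj):
          """Durham-style e+e- distance: 2 min(Ei^2, Ej^2) (1 - cos theta_ij)."""
          Ei, Ej = pi[0], pj[0]
          cos_t = pi[1:] @ pj[1:] / (np.linalg.norm(pi[1:]) * np.linalg.norm(pj[1:]))
          return 2.0 * min(Ei, Ej) ** 2 * (1.0 - cos_t)

      def cluster(particles, ycut):
          """Sequential recombination: merge the closest pair until all distances exceed ycut."""
          jets = [np.asarray(p, dtype=float) for p in particles]
          while len(jets) > 1:
              pairs = [(durham_distance(jets[i], jets[j]), i, j)
                       for i in range(len(jets)) for j in range(i + 1, len(jets))]
              d, i, j = min(pairs)
              if d > ycut:
                  break
              jets[i] = jets[i] + jets[j]          # E-scheme recombination (4-vector sum)
              del jets[j]
          return jets

      # Toy event: four particles (E, px, py, pz) forming two back-to-back "jets".
      event = [(10, 9, 1, 0), (12, 11, -1, 0), (8, -7, 0.5, 0), (9, -8, -0.5, 0)]
      print(len(cluster(event, ycut=20.0)), "jets reconstructed")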

  12. A scalable and practical one-pass clustering algorithm for recommender system

    NASA Astrophysics Data System (ADS)

    Khalid, Asra; Ghazanfar, Mustansar Ali; Azam, Awais; Alahmari, Saad Ali

    2015-12-01

    K-Means clustering-based recommendation algorithms have been proposed with the claim that they increase the scalability of recommender systems. One potential drawback of these algorithms is that they perform training offline and hence cannot accommodate incremental updates as new data arrive, making them unsuitable for dynamic environments. Following this line of research, a new clustering algorithm called One-Pass is proposed, which is simple, fast, and accurate. We show empirically that the proposed algorithm outperforms K-Means in terms of recommendation and training time while maintaining a good level of accuracy.
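
    The essence of a one-pass clustering scheme is that each arriving vector is either absorbed by the nearest existing centroid, if it is close enough, or seeds a new cluster, so the model can be updated incrementally without offline retraining. The threshold rule and rating vectors below are a generic illustration under assumed values, not the paper's exact algorithm.

      import numpy as np

      class OnePassClusterer:
          """Incremental clustering: one pass over the data, no retraining on updates."""
          def __init__(self, threshold):
              self.threshold = threshold
              self.centroids, self.counts = [], []

          def add(self, x):
              x = np.asarray(x, dtype=float)
              if self.centroids:
                  dists = [np.linalg.norm(x - c) for c in self.centroids]
                  k = int(np.argmin(dists))
                  if dists[k] <= self.threshold:      # absorb into the nearest cluster
                      self.counts[k] += 1
                      self.centroids[k] += (x - self.centroids[k]) / self.counts[k]
                      return k
              self.centroids.append(x.copy()); self.counts.append(1)   # otherwise seed a new one
              return len(self.centroids) - 1

      # Toy user-rating vectors arriving as a stream.
      clusterer = OnePassClusterer(threshold=2.0)
      for ratings in [[5, 4, 1], [4, 5, 1], [1, 1, 5], [1, 2, 5], [5, 5, 2]]:
          clusterer.add(ratings)
      print(len(clusterer.centroids), "clusters")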

  13. Systolic systems: algorithms and complexity

    SciTech Connect

    Chang, J.H.

    1986-01-01

    This thesis has two main contributions. The first is the design of efficient systolic algorithms for solving recurrence equations, dynamic programming problems, scheduling problems, as well as new systolic implementation of data structures such as stacks, queues, priority queues, and dictionary machines. The second major contribution is the investigation of the computational power of systolic arrays in comparison to sequential models and other models of parallel computation.

  14. Algorithms of NCG geometrical module

    NASA Astrophysics Data System (ADS)

    Gurevich, M. I.; Pryanichnikov, A. V.

    2012-12-01

    The methods and algorithms of the versatile NCG geometrical module used in the MCU code system are described. The NCG geometrical module is based on the Monte Carlo method and intended for solving equations of particle transport. The versatile combinatorial body method, the grid method, and methods of equalized cross sections and grain structures are used for description of the system geometry and calculation of trajectories.

  15. Algorithms of NCG geometrical module

    SciTech Connect

    Gurevich, M. I.; Pryanichnikov, A. V.

    2012-12-15

    The methods and algorithms of the versatile NCG geometrical module used in the MCU code system are described. The NCG geometrical module is based on the Monte Carlo method and intended for solving equations of particle transport. The versatile combinatorial body method, the grid method, and methods of equalized cross sections and grain structures are used for description of the system geometry and calculation of trajectories.

  16. Computed laminography and reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Que, Jie-Min; Cao, Da-Quan; Zhao, Wei; Tang, Xiao; Sun, Cui-Li; Wang, Yan-Fang; Wei, Cun-Feng; Shi, Rong-Jian; Wei, Long; Yu, Zhong-Qiang; Yan, Yong-Lian

    2012-08-01

    Computed laminography (CL) is an alternative to computed tomography when large, especially planar, objects are to be inspected with high resolution. In this paper, we set up a new scanning geometry for CL and study the algebraic reconstruction technique (ART) for CL imaging. We compare the results of ART with different weighting functions by computer simulation with a digital phantom. The results show that the ART algorithm is a good choice for the CL system.
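
    ART updates the image one ray at a time: the measured ray sum is compared with the ray sum of the current estimate, and the difference is back-projected along that ray with an optional relaxation factor. The Kaczmarz-style sketch below uses a toy 2x2 phantom with row and column sums rather than the CL scanning geometry.

      import numpy as np

      def art(A, b, n_iter=50, relax=1.0):
          """Algebraic Reconstruction Technique (Kaczmarz): sweep the rays repeatedly."""
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              for i in range(A.shape[0]):
                  a = A[i]
                  x += relax * (b[i] - a @ x) / (a @ a) * a   # project onto ray i's hyperplane
          return x

      # Toy 2x2 "image" with four ray sums: two rows and two columns.
      phantom = np.array([1.0, 2.0, 3.0, 4.0])           # pixels (flattened 2x2)
      A = np.array([[1, 1, 0, 0],                         # row sums
                    [0, 0, 1, 1],
                    [1, 0, 1, 0],                         # column sums
                    [0, 1, 0, 1]], dtype=float)
      b = A @ phantom
      print(art(A, b).round(3))                           # recovers [1, 2, 3, 4]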

  17. Efficient algorithms for proximity problems

    SciTech Connect

    Wee, Y.C.

    1989-01-01

    Computational geometry is currently a very active area of research in computer science because of its applications to VLSI design, database retrieval, robotics, pattern recognition, etc. The author studies a number of proximity problems which are fundamental in computational geometry. Optimal or improved sequential and parallel algorithms for these problems are presented. Along the way, some relations among the proximity problems are also established. Chapter 2 presents an O(N log^2 N) time divide-and-conquer algorithm for solving the all-pairs geographic nearest neighbors problem (GNN) for a set of N sites in the plane under any L_p metric. Chapter 3 presents an O(N log N) divide-and-conquer algorithm for computing the angle-restricted Voronoi diagram for a set of N sites in the plane. Chapter 4 introduces a new data structure for the dynamic version of GNN. Chapter 5 defines a new formalism called the quasi-valid range aggregation. This formalism leads to a new and simple method for reducing non-range query-like problems to range queries and often to orthogonal range queries, with immediate applications to the attracted neighbor and the planar all-pairs nearest neighbors problems. Chapter 6 introduces a new approach for the construction of the Voronoi diagram. Using this approach, we design an O(log N) time, O(N) processor algorithm for constructing the Voronoi diagram with L_1 and L_infinity metrics on a CREW PRAM machine. Even though the GNN and the Delaunay triangulation (DT) do not have an inclusion relation, we show, using some range-type queries, how to efficiently construct DT from the GNN relations over a constant number of angular ranges.

  18. Algorithm Helps Monitor Engine Operation

    NASA Technical Reports Server (NTRS)

    Eckerling, Sherry J.; Panossian, Hagop V.; Kemp, Victoria R.; Taniguchi, Mike H.; Nelson, Richard L.

    1995-01-01

    Real-Time Failure Control (RTFC) algorithm part of automated monitoring-and-shutdown system being developed to ensure safety and prevent major damage to equipment during ground tests of main engine of space shuttle. Includes redundant sensors, controller voting logic circuits, automatic safe-limit logic circuits, and conditional-decision logic circuits, all monitored by human technicians. Basic principles of system also applicable to stationary powerplants and other complex machinery systems.
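
    The redundant-sensor voting and safe-limit logic described here can be captured compactly: take the median of the redundant channels, which discards a single failed sensor, and command shutdown when the voted value stays outside its safe band for several consecutive samples. The limits, trip count, and telemetry values below are hypothetical, not actual RTFC parameters.

      import statistics

      SAFE_BAND = (800.0, 1200.0)     # hypothetical safe limits for one monitored channel
      TRIP_COUNT = 3                  # consecutive out-of-limit votes required to command shutdown

      def voted_value(channels):
          """Median voting across redundant sensors tolerates one failed channel."""
          return statistics.median(channels)

      def monitor(samples):
          strikes = 0
          for channels in samples:
              value = voted_value(channels)
              strikes = strikes + 1 if not SAFE_BAND[0] <= value <= SAFE_BAND[1] else 0
              if strikes >= TRIP_COUNT:
                  return "SHUTDOWN"
          return "NOMINAL"

      # One sensor fails high at the third sample, then the real parameter drifts out of limits.
      telemetry = [(1000, 1005, 998), (1002, 999, 1004), (1001, 5000, 1003),
                   (1350, 1340, 1360), (1400, 1390, 1410), (1420, 1430, 1415)]
      print(monitor(telemetry))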

  19. Algorithmic Strategies in Combinatorial Chemistry

    SciTech Connect

    GOLDMAN,DEBORAH; ISTRAIL,SORIN; LANCIA,GIUSEPPE; PICCOLBONI,ANTONIO; WALENZ,BRIAN

    2000-08-01

    Combinatorial Chemistry is a powerful new technology in drug design and molecular recognition. It is a wet-laboratory methodology aimed at ``massively parallel'' screening of chemical compounds for the discovery of compounds that have a certain biological activity. The power of the method comes from the interaction between experimental design and computational modeling. Principles of ``rational'' drug design are used in the construction of combinatorial libraries to speed up the discovery of lead compounds with the desired biological activity. This paper presents algorithms, software development and computational complexity analysis for problems arising in the design of combinatorial libraries for drug discovery. The authors provide exact polynomial time algorithms and intractability results for several Inverse Problems-formulated as (chemical) graph reconstruction problems-related to the design of combinatorial libraries. These are the first rigorous algorithmic results in the literature. The authors also present results provided by the combinatorial chemistry software package OCOTILLO for combinatorial peptide design using real data libraries. The package provides exact solutions for general inverse problems based on shortest-path topological indices. The results are superior both in accuracy and computing time to the best software reports published in the literature. For 5-peptoid design, the computation is rigorously reduced to an exhaustive search of about 2% of the search space; the exact solutions are found in a few minutes.

  20. Algorithm validation using multicolor phantoms.

    PubMed

    Samarov, Daniel V; Clarke, Matthew L; Lee, Ji Youn; Allen, David W; Litorja, Maritoni; Hwang, Jeeseong

    2012-06-01

    We present a framework for hyperspectral image (HSI) analysis validation, specifically abundance fraction estimation based on HSI measurements of water soluble dye mixtures printed on microarray chips. In our work we focus on the performance of two algorithms, the Least Absolute Shrinkage and Selection Operator (LASSO) and the Spatial LASSO (SPLASSO). The LASSO is a well known statistical method for simultaneously performing model estimation and variable selection. In the context of estimating abundance fractions in a HSI scene, the "sparse" representations provided by the LASSO are appropriate as not every pixel will be expected to contain every endmember. The SPLASSO is a novel approach we introduce here for HSI analysis which takes the framework of the LASSO algorithm a step further and incorporates the rich spatial information which is available in HSI to further improve the estimates of abundance. In our work here we introduce the dye mixture platform as a new benchmark data set for hyperspectral biomedical image processing and show our algorithm's improvement over the standard LASSO. PMID:22741077
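
    In the abundance-fraction setting, each pixel spectrum is modeled as a (nonnegative) linear mixture of endmember spectra, and the LASSO's L1 penalty drives most estimated abundances to zero. The sketch below builds a synthetic endmember matrix and uses scikit-learn's Lasso for convenience; the spatial coupling that distinguishes the SPLASSO is not shown.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(4)
      n_bands, n_endmembers = 60, 5

      # Synthetic endmember spectra (columns) and a pixel mixing only two of them.
      E = np.abs(rng.standard_normal((n_bands, n_endmembers)))
      true_abundance = np.array([0.7, 0.0, 0.3, 0.0, 0.0])
      pixel = E @ true_abundance + 0.01 * rng.standard_normal(n_bands)

      # LASSO with nonnegativity: a sparse abundance-fraction estimate for this pixel.
      model = Lasso(alpha=0.001, positive=True, fit_intercept=False, max_iter=10000)
      model.fit(E, pixel)
      print("estimated abundances:", model.coef_.round(3))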

  1. A novel stochastic optimization algorithm.

    PubMed

    Li, B; Jiang, W

    2000-01-01

    This paper presents a new stochastic approach SAGACIA based on proper integration of simulated annealing algorithm (SAA), genetic algorithm (GA), and chemotaxis algorithm (CA) for solving complex optimization problems. SAGACIA combines the advantages of SAA, GA, and CA together. It has the following features: (1) it is not the simple mix of SAA, GA, and CA; (2) it works from a population; (3) it can be easily used to solve optimization problems either with continuous variables or with discrete variables, and it does not need coding and decoding; and (4) it can easily escape from local minima and converge quickly. Good solutions can be obtained in a very short time. The search process of SAGACIA can be explained with Markov chains. In this paper, it is proved that SAGACIA has the property of global asymptotical convergence. SAGACIA has been applied to solve such problems as scheduling, the training of artificial neural networks, and the optimizing of complex functions. In all the test cases, the performance of SAGACIA is better than that of SAA, GA, and CA. PMID:18244742

  2. Control algorithm implementation for a redundant degree of freedom manipulator

    NASA Technical Reports Server (NTRS)

    Cohan, Steve

    1991-01-01

    This project's purpose is to develop and implement control algorithms for a kinematically redundant robotic manipulator. The manipulator is being developed concurrently by Odetics Inc., under internal research and development funding. This SBIR contract supports algorithm conception, development, and simulation, as well as software implementation and integration with the manipulator hardware. The Odetics Dexterous Manipulator is a lightweight, high strength, modular manipulator being developed for space and commercial applications. It has seven fully active degrees of freedom, is electrically powered, and is fully operational in 1 G. The manipulator consists of five self-contained modules. These modules join via simple quick-disconnect couplings and self-mating connectors which allow rapid assembly/disassembly for reconfiguration, transport, or servicing. Each joint incorporates a unique drive train design which provides zero backlash operation, is insensitive to wear, and is single fault tolerant to motor or servo amplifier failure. The sensing system is also designed to be single fault tolerant. Although the initial prototype is not space qualified, the design is well-suited to meeting space qualification requirements. The control algorithm design approach is to develop a hierarchical system with well defined access and interfaces at each level. The high level endpoint/configuration control algorithm transforms manipulator endpoint position/orientation commands to joint angle commands, providing task space motion. At the same time, the kinematic redundancy is resolved by controlling the configuration (pose) of the manipulator, using several different optimizing criteria. The center level of the hierarchy servos the joints to their commanded trajectories using both linear feedback and model-based nonlinear control techniques. The lowest control level uses sensed joint torque to close torque servo loops, with the goal of improving the manipulator dynamic behavior
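
    One common way to implement the endpoint-level redundancy resolution described above is a damped least-squares pseudo-inverse of the task Jacobian plus a secondary objective (for example, staying near a preferred pose) projected into the Jacobian's null space. The planar three-link example below is a generic sketch of that technique under assumed link lengths and gains, not the Odetics controller.

      import numpy as np

      def jacobian(q, lengths):
          """Planar revolute-arm Jacobian for the end-effector position (x, y)."""
          J = np.zeros((2, len(q)))
          for i in range(len(q)):
              s = np.cumsum(q)[i:]                           # absolute link angles from joint i on
              J[0, i] = -np.sum(lengths[i:] * np.sin(s))
              J[1, i] = np.sum(lengths[i:] * np.cos(s))
          return J

      def redundant_rates(q, xdot_cmd, lengths, q_pref, damping=0.01, k_null=0.5):
          """Damped least-squares task solution plus null-space motion toward q_pref."""
          J = jacobian(q, lengths)
          J_pinv = J.T @ np.linalg.inv(J @ J.T + damping * np.eye(2))   # damped pseudo-inverse
          null_proj = np.eye(len(q)) - J_pinv @ J                       # null-space projector
          return J_pinv @ xdot_cmd + null_proj @ (k_null * (q_pref - q))

      q = np.array([0.3, 0.4, -0.2])                         # 3 joints, 2-D task: 1 redundant DOF
      qdot = redundant_rates(q, np.array([0.1, 0.0]), np.array([1.0, 0.8, 0.6]),
                             q_pref=np.zeros(3))
      print(qdot.round(4))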

  3. Comparative study of heart sound localization algorithms

    NASA Astrophysics Data System (ADS)

    Moukadem, A.; Dieterlen, A.; Hueber, N.; Brandt, C.; Raymond, P.

    2011-05-01

    The purpose of this document is to present a comparative study of five algorithms for heart sound localization, one of which, a method based on radial basis function networks, is applied in a novel approach. The advantages and disadvantages of each method are evaluated on a database of 50 subjects comprising 25 healthy subjects selected from the University Hospital of Strasbourg (HUS) and from the MARS500 project (Moscow) and 25 subjects with cardiac pathologies selected from the HUS. This study was made under the supervision of an experienced cardiologist. The performance of each method is evaluated by calculating the area under a receiver operating characteristic curve (AUC), and robustness is shown against different levels of additive white Gaussian noise.

  4. Algorithms of whisker-mediated touch perception.

    PubMed

    Maravall, Miguel; Diamond, Mathew E

    2014-04-01

    Comparison of the functional organization of sensory modalities can reveal the specialized mechanisms unique to each modality as well as processing algorithms that are common across modalities. Here we examine the rodent whisker system. The whisker's mechanical properties shape the forces transmitted to specialized receptors. The sensory and motor systems are intimately interconnected, giving rise to two forms of sensation: generative and receptive. The sensory pathway is a test bed for fundamental concepts in computation and coding: hierarchical feature detection, sparseness, adaptive representations, and population coding. The central processing of signals can be considered a sequence of filters. At the level of cortex, neurons represent object features by a coordinated population code which encompasses cells with heterogeneous properties. PMID:24549178

  5. Stroke volume optimization: the new hemodynamic algorithm.

    PubMed

    Johnson, Alexander; Ahrens, Thomas

    2015-02-01

    Critical care practices have evolved to rely more on physical assessments for monitoring cardiac output and evaluating fluid volume status because these assessments are less invasive and more convenient to use than is a pulmonary artery catheter. Despite this trend, level of consciousness, central venous pressure, urine output, heart rate, and blood pressure remain assessments that are slow to be changed, potentially misleading, and often manifested as late indications of decreased cardiac output. The hemodynamic optimization strategy called stroke volume optimization might provide a proactive guide for clinicians to optimize a patient's status before late indications of a worsening condition occur. The evidence supporting use of the stroke volume optimization algorithm to treat hypovolemia is increasing. Many of the cardiac output monitor technologies today measure stroke volume, as well as the parameters that comprise stroke volume: preload, afterload, and contractility. PMID:25639574

  6. Evolution of music score watermarking algorithm

    NASA Astrophysics Data System (ADS)

    Busch, Christoph; Nesi, Paolo; Schmucker, Martin; Spinu, Marius B.

    2002-04-01

    Content protection for multimedia data is widely recognized, especially for data types that are frequently distributed, sold, or shared using the Internet. The music industry in particular, dealing with audio files, has realized the necessity of content protection. Distribution of music sheets will face the same problems. Digital watermarking techniques provide a certain level of protection for these music sheets. But classical raster-oriented watermarking algorithms for images suffer several drawbacks when directly applied to image representations of music sheets. Therefore new solutions have been developed which are designed with regard to the content of the music sheets. In comparison to other media types, watermarking of music scores is a rather young art. The paper reviews the evolution of the early approaches and describes the current state of the art in the field.

  7. Parallel grid generation algorithm for distributed memory computers

    NASA Technical Reports Server (NTRS)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.

  8. The strobe algorithms for multi-source warehouse consistency

    SciTech Connect

    Zhuge, Yue; Garcia-Molina, H.; Wiener, J.L.

    1996-12-31

    A warehouse is a data repository containing integrated information for efficient querying and analysis. Maintaining the consistency of warehouse data is challenging, especially if the data sources are autonomous and views of the data at the warehouse span multiple sources. Transactions containing multiple updates at one or more sources, e.g., batch updates, complicate the consistency problem. In this paper we identify and discuss three fundamental transaction processing scenarios for data warehousing. We define four levels of consistency for warehouse data and present a new family of algorithms, the Strobe family, that maintain consistency as the warehouse is updated, under the various warehousing scenarios. All of the algorithms are incremental and can handle a continuous and overlapping stream of updates from the sources. Our implementation shows that the algorithms are practical and realistic choices for a wide variety of update scenarios.

  9. Virus evolutionary genetic algorithm for task collaboration of logistics distribution

    NASA Astrophysics Data System (ADS)

    Ning, Fanghua; Chen, Zichen; Xiong, Li

    2005-12-01

    In order to achieve JIT (Just-In-Time) operation and maximum client satisfaction in logistics collaboration, a Virus Evolutionary Genetic Algorithm (VEGA) was put forward under the double constraints of logistics resources and operation sequence. Based on a mathematical description of a multiple-objective function, the algorithm was designed to schedule logistics tasks with different due dates and allocate them to network members. By introducing a penalty term, makespan and customer satisfaction were expressed in the fitness function. A dynamic adaptive probability of infection was used to improve the performance of local search. Compared to a standard Genetic Algorithm (GA), experimental results illustrate the performance superiority of VEGA. The VEGA can thus provide a powerful decision-making technique for optimizing resource configuration in logistics networks.

  10. Astronomical observation tasks short-term scheduling using PDDS algorithm

    NASA Astrophysics Data System (ADS)

    Kornilov, M. V.

    2016-07-01

    The concept of ground-based optical astronomical observation efficiency is considered in this paper. We believe that telescope efficiency can be increased by properly allocating observation tasks with respect to the current environmental state and the probability of obtaining data with the required properties under the current conditions. Online observation scheduling is assumed to be an essential part of raising the efficiency. The short-term online scheduling is treated as a discrete optimisation problem stated using several abstraction levels. The optimisation problems are solved using the parallel depth-bounded discrepancy search (PDDS) algorithm by Moisan et al. (2014). Some aspects of the algorithm's performance are discussed. The presented algorithm is the core of the open-source chelyabinsk C++ library, which is planned to be used at the 2.5 m telescope of the Sternberg Astronomical Institute of Lomonosov Moscow State University.

  11. SeaWiFS Science Algorithm Flow Chart

    NASA Technical Reports Server (NTRS)

    Darzi, Michael

    1998-01-01

    This flow chart describes the baseline science algorithms for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Data Processing System (SDPS). As such, it includes only processing steps used in the generation of the operational products that are archived by NASA's Goddard Space Flight Center (GSFC) Distributed Active Archive Center (DAAC). It is meant to provide the reader with a basic understanding of the scientific algorithm steps applied to SeaWiFS data. It does not include non-science steps, such as format conversions, and places the greatest emphasis on the geophysical calculations of the level-2 processing. Finally, the flow chart reflects the logic sequences and the conditional tests of the software so that it may be used to evaluate the fidelity of the implementation of the scientific algorithm. In many cases however, the chart may deviate from the details of the software implementation so as to simplify the presentation.

  12. Study of genetic direct search algorithms for function optimization

    NASA Technical Reports Server (NTRS)

    Zeigler, B. P.

    1974-01-01

    The results are presented of a study to determine the performance of genetic direct search algorithms in solving function optimization problems arising in the optimal and adaptive control areas. The findings indicate that: (1) genetic algorithms can outperform standard algorithms in multimodal and/or noisy optimization situations, but suffer from lack of gradient exploitation facilities when gradient information can be utilized to guide the search. (2) For large populations, or low dimensional function spaces, mutation is a sufficient operator. However for small populations or high dimensional functions, crossover applied in about equal frequency with mutation is an optimum combination. (3) Complexity, in terms of storage space and running time, is significantly increased when population size is increased or the inversion operator, or the second level adaptation routine is added to the basic structure.

  13. A fast algorithm for sparse matrix computations related to inversion

    SciTech Connect

    Li, S.; Wu, W.; Darve, E.

    2013-06-01

    We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green’s functions G^r and G^< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices (up to twelve-fold in simulation). The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely-used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that algorithms requiring back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate.

  14. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm that maximizes coherence, building on existing SAR imaging algorithms. The basic idea of conventional SAR image formation is that the output signal attains maximum signal-to-noise ratio (SNR) when the optimal imaging parameters are used. A traditional imaging algorithm therefore achieves the best focusing, but it introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper instead focuses the SAR echoes with consistent imaging parameters. Although the SNR of the output signal is reduced slightly, coherence is largely preserved, and an interferogram of high quality is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and applications. PMID:26871446

  15. An algorithm for generating abstract syntax trees

    NASA Technical Reports Server (NTRS)

    Noonan, R. E.

    1985-01-01

    The notion of an abstract syntax is discussed. An algorithm is presented for automatically deriving an abstract syntax directly from a BNF grammar. The implementation of this algorithm and its application to the grammar for Modula are discussed.

  16. Teaching Multiplication Algorithms from Other Cultures

    ERIC Educational Resources Information Center

    Lin, Cheng-Yao

    2007-01-01

    This article describes a number of multiplication algorithms from different cultures around the world: Hindu, Egyptian, Russian, Japanese, and Chinese. Students can learn these algorithms and better understand the operation and properties of multiplication.
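
    As a concrete instance, the Russian method multiplies two whole numbers by repeatedly halving one and doubling the other, adding the doubled value whenever the halved value is odd. A short sketch (the function name is ours):

      def russian_multiply(a, b):
          """Russian (doubling-and-halving) multiplication of nonnegative integers."""
          total = 0
          while a > 0:
              if a % 2 == 1:        # keep the row whose left column is odd
                  total += b
              a //= 2               # halve the left column, discarding the remainder
              b *= 2                # double the right column
          return total

      print(russian_multiply(27, 43))   # 1161, same as 27 * 43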

  17. SCORPIUS algorithm benchmarks on the image understanding architecture machine

    NASA Astrophysics Data System (ADS)

    Bogdanowicz, Julius F.; Nash, J. Gregory; Shu, David B.

    1992-04-01

    Many Hughes tactical and strategic programs need high performance image processing. For example, photo-interpretation applications can require up to four orders of magnitude speedup over conventional computer architectures. Therefore, parallel processing systems are needed to help close the processing gap. Vision applications can usually be decomposed into three levels of processing called high, intermediate, and low level vision. Each processing level typically requires different types of numeric/symbolic computation, processing task granularities, and communications bandwidths. No parallel processing system is commercially available that is optimized for the entire range of computations. To meet these processing challenges, the image understanding architecture (IUA) has been developed by Hughes in collaboration with the University of Massachusetts. The IUA is a heterogeneous, hierarchical, associative parallel processor that is organized in three levels corresponding to the vision problem. Its lowest level consists of a large content addressable array parallel processor. This array of 'per pixel' bit serial processors is used for fixed point, low level numeric, and symbolic computations. The middle level is an interface communications array processor (ICAP). ICAP is an array of digital signal processing chips from the TI TMS320Cx line, used for high speed number crunching. The highest level is the symbolic processing array. It is an array of general purpose microprocessors in which the artificial intelligence content of the image understanding software resides. A set of benchmarks from the DARPA/ORD sponsored SCORPIUS program was developed using the IUA. The set of algorithms included low level image processing as well as high level matching algorithms. Benchmark performance on the second generation IUA hardware is over four orders of magnitude faster than equivalent algorithms implemented on a DEC VAX 8650. The first generation hardware is operational.

  18. IMAGE ANALYSIS ALGORITHMS FOR DUAL MODE IMAGING SYSTEMS

    SciTech Connect

    Robinson, Sean M.; Jarman, Kenneth D.; Miller, Erin A.; Misner, Alex C.; Myjak, Mitchell J.; Pitts, W. Karl; Seifert, Allen; Seifert, Carolyn E.; Woodring, Mitchell L.

    2010-06-11

    The level of detail discernible in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes where information barriers are mandatory. However, if a balance can be struck between sufficient information barriers and feature extraction to verify or identify objects of interest, imaging may significantly advance verification efforts. This paper describes the development of combined active (conventional) radiography and passive (auto) radiography techniques for imaging sensitive items assuming that comparison images cannot be furnished. Three image analysis algorithms are presented, each of which reduces full image information to non-sensitive feature information and ultimately is intended to provide only a yes/no response verifying features present in the image. These algorithms are evaluated on both their technical performance in image analysis and their application with or without an explicitly constructed information barrier. The first algorithm reduces images to non-invertible pixel intensity histograms, retaining only summary information about the image that can be used in template comparisons. This one-way transform is sufficient to discriminate between different image structures (in terms of area and density) without revealing unnecessary specificity. The second algorithm estimates the attenuation cross-section of objects of known shape based on transition characteristics around the edge of the object’s image. The third algorithm compares the radiography image with the passive image to discriminate dense, radioactive material from point sources or inactive dense material. By comparing two images and reporting only a single statistic from the combination thereof, this algorithm can operate entirely behind an information barrier stage. Together with knowledge of the radiography system, these algorithms used in combination can improve verification capability in inspection regimes.
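
    The first of the three algorithms, for example, reduces an image to a pixel-intensity histogram and compares it with a template histogram, so a yes/no match can be reported without exposing spatial detail. The distance measure, bin count, and synthetic images below are assumptions for illustration, not the fielded implementation.

      import numpy as np

      def intensity_histogram(image, bins=32, value_range=(0.0, 1.0)):
          """One-way transform: keep only the normalized intensity histogram."""
          hist, _ = np.histogram(image, bins=bins, range=value_range)
          return hist / hist.sum()

      def histogram_distance(h1, h2, eps=1e-12):
          """Chi-square style distance between two normalized histograms."""
          return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

      rng = np.random.default_rng(5)
      reference = rng.beta(2.0, 5.0, size=(128, 128))    # template radiograph (synthetic)
      inspected = rng.beta(2.0, 5.0, size=(128, 128))    # same population: should match
      different = rng.beta(5.0, 2.0, size=(128, 128))    # different structure: should not

      h_ref = intensity_histogram(reference)
      print("match    :", round(histogram_distance(h_ref, intensity_histogram(inspected)), 4))
      print("mismatch :", round(histogram_distance(h_ref, intensity_histogram(different)), 4))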

  19. On the Multilevel Solution Algorithm for Markov Chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham

    1997-01-01

    We discuss the recently introduced multilevel algorithm for the steady-state solution of Markov chains. The method is based on an aggregation principle which is well established in the literature and features a multiplicative coarse-level correction. Recursive application of the aggregation principle, which uses an operator-dependent coarsening, yields a multi-level method which has been shown experimentally to give results significantly faster than the typical methods currently in use. When cast as a multigrid-like method, the algorithm is seen to be a Galerkin-Full Approximation Scheme with a solution-dependent prolongation operator. Special properties of this prolongation lead to the cancellation of the computationally intensive terms of the coarse-level equations.

  20. Fast Particle Pair Detection Algorithms for Particle Simulations

    NASA Astrophysics Data System (ADS)

    Iwai, T.; Hong, C.-W.; Greil, P.

    New algorithms with O(N) complexity have been developed for fast particle-pair detections in particle simulations like the discrete element method (DEM) and molecular dynamic (MD). They exhibit robustness against broad particle size distributions when compared with conventional boxing methods. Almost similar calculation speeds are achieved at particle size distributions from is mono-size to 1:10 while the linked-cell method results in calculations more than 20 times. The basic algorithm, level-boxing, uses the variable search range according to each particle. The advanced method, multi-level boxing, employs multiple cell layers to reduce the particle size discrepancy. Another method, indexed-level boxing, reduces the size of cell arrays by introducing the hash procedure to access the cell array, and is effective for sparse particle systems with a large number of particles.