#### Sample records for algorithm numerical results

1. Model of stacked long Josephson junctions: Parallel algorithm and numerical results in case of weak coupling

Zemlyanaya, E. V.; Bashashin, M. V.; Rahmonov, I. R.; Shukrinov, Yu. M.; Atanasova, P. Kh.; Volokhova, A. V.

2016-10-01

We consider a model of a system of long Josephson junctions (LJJs) with inductive and capacitive coupling. The corresponding system of nonlinear partial differential equations is solved by means of the standard three-point finite-difference approximation in the spatial coordinate, with the Runge-Kutta method used to solve the resulting Cauchy problem. A parallel algorithm is developed and implemented on the basis of MPI (Message Passing Interface) technology. The effect of the coupling between the junctions on the properties of the LJJ system is demonstrated. Numerical results are discussed from the viewpoint of the effectiveness of the parallel implementation.
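
The numerical scheme described here (three-point finite differences in space, Runge-Kutta in time) can be sketched for a single damped, driven sine-Gordon junction. The coupled multi-junction model, parameter values, and boundary treatment below are illustrative assumptions, not the authors' code:

```python
import math

def rhs(state, n, dx, alpha=0.1, gamma=0.5):
    """Time derivative of (phi, v) for the damped sine-Gordon equation
    phi_tt + alpha*phi_t = phi_xx - sin(phi) + gamma, using the standard
    three-point finite-difference Laplacian with mirror (Neumann) boundaries."""
    phi, v = state[:n], state[n:]
    dphi = list(v)
    dv = []
    for i in range(n):
        left = phi[i - 1] if i > 0 else phi[i + 1]
        right = phi[i + 1] if i < n - 1 else phi[i - 1]
        lap = (left - 2 * phi[i] + right) / dx**2
        dv.append(lap - math.sin(phi[i]) + gamma - alpha * v[i])
    return dphi + dv

def rk4_step(state, n, dx, dt):
    """One classical Runge-Kutta step for the resulting Cauchy problem."""
    def add(a, b, c):
        return [x + c * y for x, y in zip(a, b)]
    k1 = rhs(state, n, dx)
    k2 = rhs(add(state, k1, dt / 2), n, dx)
    k3 = rhs(add(state, k2, dt / 2), n, dx)
    k4 = rhs(add(state, k3, dt), n, dx)
    return [s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

n, dx, dt = 50, 0.2, 0.01
state = [0.0] * (2 * n)          # junction initially at rest
for _ in range(200):
    state = rk4_step(state, n, dx, dt)
```

A parallel version would distribute blocks of spatial grid points across MPI ranks, exchanging only the boundary values needed by the three-point stencil.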

2. Trees, bialgebras and intrinsic numerical algorithms

NASA Technical Reports Server (NTRS)

Crouch, Peter; Grossman, Robert; Larson, Richard

1990-01-01

Preliminary work on intrinsic numerical integrators evolving on groups is described. Fix a finite-dimensional Lie group G; let g denote its Lie algebra, and let Y_1, ..., Y_N denote a basis of g. A class of numerical algorithms is presented that approximate solutions to differential equations evolving on G of the form x'(t) = F(x(t)), x(0) = p, where p is an element of G. The algorithms depend upon constants c_i and c_ij, for i = 1, ..., k and j < i. The algorithms have the property that if they start on the group, they remain on the group. In addition, if G is the abelian group R^N, the algorithms reduce to the classical Runge-Kutta algorithm. The Cayley algebra generated by labeled, ordered trees is used to generate the equations that the coefficients c_i and c_ij must satisfy for an algorithm to yield an rth-order numerical integrator, and to analyze the resulting algorithms.
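
The key property above, that an integrator starting on the group remains on the group, can be illustrated on the simplest case G = SO(2): stepping with the exact exponential map preserves orthogonality, while classical explicit Euler drifts off the group. This toy comparison is our own sketch, not the tree/bialgebra construction of the paper:

```python
import math

def mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def exp_so2(h, w):
    """Exponential map exp(h*A) for A = [[0,-w],[w,0]] in so(2):
    an exact rotation by angle h*w, which lies on SO(2)."""
    c, s = math.cos(h * w), math.sin(h * w)
    return [[c, -s], [s, c]]

def ortho_defect(X):
    """How far X^T X is from the identity (0 means X is still on SO(2))."""
    XtX = mul([[X[0][0], X[1][0]], [X[0][1], X[1][1]]], X)
    return max(abs(XtX[i][j] - (1.0 if i == j else 0.0))
               for i in range(2) for j in range(2))

h, w, steps = 0.1, 1.0, 100
X_group = [[1.0, 0.0], [0.0, 1.0]]   # intrinsic (group) integrator
X_euler = [[1.0, 0.0], [0.0, 1.0]]   # classical explicit Euler
A = [[0.0, -w], [w, 0.0]]
for _ in range(steps):
    X_group = mul(exp_so2(h, w), X_group)
    # Euler: X <- X + h*A*X leaves the group at every step
    AX = mul(A, X_euler)
    X_euler = [[X_euler[i][j] + h * AX[i][j] for j in range(2)]
               for i in range(2)]
```

After 100 steps the group method stays orthogonal to machine precision, while the Euler iterate has drifted visibly off SO(2).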

3. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

DOE PAGES

LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; ...

2015-12-14

Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

4. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

SciTech Connect

LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia -Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan -Wen; Millis, Andrew J.; Prokof’ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo -Xiao; Zhu, Zhenyue; Gull, Emanuel

2015-12-14

Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

5. Numerical Algorithms Based on Biorthogonal Wavelets

NASA Technical Reports Server (NTRS)

Ponenti, Pj.; Liandrat, J.

1996-01-01

Wavelet bases are used to generate spaces of approximation for the resolution of bidimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is then stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multiresolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided, as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.

6. Numerical taxonomy on data: Experimental results

SciTech Connect

Cohen, J.; Farach, M.

1997-12-01

The numerical taxonomy problems associated with most of the optimization criteria described above are NP-hard [3, 5, 1, 4]. In earlier work, the first positive result for numerical taxonomy was presented: if e is the distance to the closest tree metric under the L_∞ norm, i.e., e = min_T [L_∞(T - D)], then it is possible to construct a tree T such that L_∞(T - D) ≤ 3e; that is, a 3-approximation algorithm exists for this problem. We will refer to this algorithm as the Single Pivot (SP) heuristic.

7. Stochastic Formal Correctness of Numerical Algorithms

NASA Technical Reports Server (NTRS)

Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick

2009-01-01

We provide a framework to bound the probability that the accumulated errors of a numerical algorithm exceed a given threshold. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Lévy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit in our framework and cover the common practices of systems that evolve for a long time. For the first two applications, we compute the number of bits that remain continuously significant with a probability of failure around one in a billion, where worst-case analysis considers that no significant bit remains. We use PVS, as such formal tools force the explicit statement of all hypotheses and prevent incorrect uses of theorems.
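
A minimal sketch of the kind of bound the report derives: Markov's inequality applied to the squared sum of independent rounding errors, compared against a simulated failure frequency. The uniform error model and all parameter choices here are illustrative assumptions:

```python
import random

def markov_bound(n, u, t):
    """Markov's inequality applied to the squared accumulated error:
    P(|S_n| >= t) <= E[S_n^2] / t^2, where S_n is the sum of n independent
    rounding errors, each uniform on [-u/2, u/2] (variance u^2/12)."""
    return n * u * u / 12.0 / (t * t)

random.seed(0)
n, u = 1000, 2.0 ** -23          # e.g. single-precision unit roundoff
t = 20 * u                       # threshold: 20 ulps
trials = 2000
hits = 0
for _ in range(trials):
    s = sum(random.uniform(-u / 2, u / 2) for _ in range(n))
    if abs(s) >= t:
        hits += 1
empirical = hits / trials        # simulated failure frequency
bound = markov_bound(n, u, t)    # analytic upper bound on that frequency
```

The analytic bound is loose but rigorous; the simulation confirms the true failure frequency sits well below it.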

8. Numerical Algorithms and Parallel Tasking.

DTIC Science & Technology

1984-07-01

Principal Investigator: Virginia Klema; Research Staff: George Cybenko and Elizabeth Ducot. During the period May 15, 1983 through May 14, 1984 ... Virginia Klema and Elizabeth Ducot have been supported for four months, and George Cybenko has been supported for one month. During this time system ... algorithms or applications is the responsibility of the user. Virginia Klema and Elizabeth Ducot presented a description of the concurrent computing

9. Numerical linear algebra algorithms and software

Dongarra, Jack J.; Eijkhout, Victor

2000-11-01

The increasing availability of advanced-architecture computers has a significant effect on all spheres of scientific computation, including algorithm research and software development in numerical linear algebra. Linear algebra - in particular, the solution of linear systems of equations - lies at the heart of most calculations in scientific computing. This paper discusses some of the recent developments in linear algebra designed to exploit these advanced-architecture computers. We discuss two broad classes of algorithms: those for dense, and those for sparse matrices.

10. Numerical comparison of Kalman filter algorithms - Orbit determination case study

NASA Technical Reports Server (NTRS)

Bierman, G. J.; Thornton, C. L.

1977-01-01

Numerical characteristics of various Kalman filter algorithms are illustrated with a realistic orbit determination study. The case study of this paper highlights the numerical deficiencies of the conventional and stabilized Kalman algorithms. Computational errors associated with these algorithms are found to be so large as to obscure important mismodeling effects and thus cause misleading estimates of filter accuracy. The positive result of this study is that the U-D covariance factorization algorithm has excellent numerical properties and is computationally efficient, having CPU costs that differ negligibly from the conventional Kalman costs. Accuracies of the U-D filter using single precision arithmetic consistently match the double precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
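
The U-D covariance factorization at the heart of this study writes P = U D Uᵀ with U unit upper triangular and D diagonal. A minimal sketch of the factorization itself (not the full measurement-update recursion of the filter):

```python
def ud_factorize(P):
    """Bierman-style U-D factorization of a symmetric positive-definite
    matrix: P = U diag(d) U^T with U unit upper triangular. Propagating U
    and d instead of P is what gives the filter its numerical robustness."""
    n = len(P)
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for j in range(n - 1, -1, -1):
        d[j] = P[j][j] - sum(d[k] * U[j][k] ** 2 for k in range(j + 1, n))
        for i in range(j):
            s = sum(d[k] * U[i][k] * U[j][k] for k in range(j + 1, n))
            U[i][j] = (P[i][j] - s) / d[j]
    return U, d

# a small symmetric positive-definite covariance (hypothetical values)
P = [[4.0, 2.0, 1.0],
     [2.0, 3.0, 0.5],
     [1.0, 0.5, 2.0]]
U, d = ud_factorize(P)
```

Reconstructing U diag(d) Uᵀ recovers P to machine precision, and positivity of d certifies that the factored covariance remains positive definite.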

11. Experiences with an adaptive mesh refinement algorithm in numerical relativity.

Choptuik, M. W.

An implementation of the Berger/Oliger mesh refinement algorithm for a model problem in numerical relativity is described. The principles of operation of the method are reviewed and its use in conjunction with leap-frog schemes is considered. The performance of the algorithm is illustrated with results from a study of the Einstein/massless scalar field equations in spherical symmetry.

12. Results from Numerical General Relativity

NASA Technical Reports Server (NTRS)

Baker, John G.

2011-01-01

For several years numerical simulations have been revealing the details of general relativity's predictions for the dynamical interactions of merging black holes. I will review what has been learned of the rich phenomenology of these mergers and the resulting gravitational wave signatures. These wave forms provide a potentially observable record of the powerful astronomical events, a central target of gravitational wave astronomy. Asymmetric radiation can produce a thrust on the system which may accelerate the single black hole resulting from the merger to high relative velocity.

13. A Numerical Instability in an ADI Algorithm for Gyrokinetics

SciTech Connect

E.A. Belli; G.W. Hammett

2004-12-17

We explore the implementation of an Alternating Direction Implicit (ADI) algorithm for a gyrokinetic plasma problem and its resulting numerical stability properties. This algorithm, which uses a standard ADI scheme to divide the field solve from the particle distribution function advance, has previously been found to work well for certain plasma kinetic problems involving one spatial and two velocity dimensions, including collisions and an electric field. However, for the gyrokinetic problem we find a severe stability restriction on the time step. Furthermore, we find that this numerical instability limitation also affects some other algorithms, such as a partially implicit Adams-Bashforth algorithm, where the parallel motion operator v_∥ ∂/∂z is treated implicitly and the field terms are treated with an Adams-Bashforth explicit scheme. Fully explicit algorithms applied to all terms can be better at long wavelengths than these ADI or partially implicit algorithms.

14. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

SciTech Connect

Masalma, Yahya; Jiao, Yu

2010-10-01

We implemented a scalable parallel quasi-Monte Carlo algorithm for high-dimensional numerical integration over tera-scale data points. The implemented algorithm uses Sobol quasi-random sequences to generate samples; the Sobol sequence was chosen to avoid clustering effects in the generated samples and to produce low-discrepancy samples that cover the entire integration domain. The performance of the algorithm was tested, and the obtained results demonstrate the scalability and accuracy of the implemented algorithms. The algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using a hybrid MPI and OpenMP programming model to improve the performance of the algorithms; if the mixed model is used, attention should be paid to scalability and accuracy.
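
A sketch of the quasi-Monte Carlo idea, using a 2-D Halton sequence as a stand-in for the Sobol sequence (the construction below and the test integrand are our own illustrative choices, not the report's implementation):

```python
def van_der_corput(i, base):
    """i-th element of the van der Corput low-discrepancy sequence in `base`."""
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, rem = divmod(i, base)
        x += rem / denom
    return x

def halton_point(i, bases=(2, 3)):
    """One point of the 2-D Halton sequence, a simple stand-in here for the
    Sobol sequence; both avoid the clustering of pseudo-random samples and
    cover the integration domain with low discrepancy."""
    return tuple(van_der_corput(i, b) for b in bases)

def qmc_integrate(f, n):
    """Quasi-Monte Carlo estimate of the integral of f over the unit square."""
    return sum(f(*halton_point(i)) for i in range(1, n + 1)) / n

est = qmc_integrate(lambda x, y: x * y, 4096)   # exact value is 1/4
```

Parallelizing this is straightforward because disjoint index ranges of the sequence can be evaluated independently and the partial sums reduced at the end.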

15. A hybrid artificial bee colony algorithm for numerical function optimization

Alqattan, Zakaria N.; Abdullah, Rosni

2015-02-01

The Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it competitive with other search algorithms in the area of optimization, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, the ABC local search process and its bee-movement (solution improvement) equation still have some weaknesses: the ABC is good at avoiding traps at local optima, but it spends its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a Hybrid Particle-movement ABC algorithm, called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to experimentally test the HPABC algorithm. The results illustrate that the HPABC algorithm can outperform the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).
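
A pared-down sketch of the baseline ABC search loop the abstract builds on (employed-bee phase with greedy selection plus scout replacement; the onlooker phase and the HPABC particle-movement hybrid are omitted). All parameter values are illustrative assumptions:

```python
import random

def abc_minimize(f, dim, n_food=10, limit=20, iters=200, seed=1):
    """Simplified Artificial Bee Colony loop. Candidate moves use the
    standard ABC solution-improvement equation
        v_j = x_ij + phi * (x_ij - x_kj),  phi ~ U(-1, 1),
    which perturbs one coordinate toward/away from a random other source."""
    rng = random.Random(seed)
    foods = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_food)]
    trials = [0] * n_food
    best = min(foods, key=f)
    for _ in range(iters):
        for i in range(n_food):
            k = rng.randrange(n_food)
            j = rng.randrange(dim)
            v = list(foods[i])
            v[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
            if f(v) < f(foods[i]):          # greedy selection
                foods[i], trials[i] = v, 0
            else:
                trials[i] += 1
            if trials[i] > limit:           # scout: abandon stagnant source
                foods[i] = [rng.uniform(-5, 5) for _ in range(dim)]
                trials[i] = 0
        best = min([best] + foods, key=f)
    return best

def sphere(x):
    return sum(t * t for t in x)

best = abc_minimize(sphere, dim=3)
```

The HPABC modification described above would replace the single-coordinate move with a PSO-style velocity update toward personal and global bests.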

16. An efficient cuckoo search algorithm for numerical function optimization

Ong, Pauline; Zainuddin, Zarita

2013-04-01

The cuckoo search algorithm, which reproduces the breeding strategy of the best-known brood-parasitic bird, the cuckoo, has demonstrated its superiority in obtaining global solutions to numerical optimization problems. However, the fixed-step approach in its exploration and exploitation behavior might slow down the search process considerably. In this regard, an improved cuckoo search algorithm with adaptive step-size adjustment is introduced, and its feasibility is validated on a variety of benchmarks. The obtained results show that the proposed scheme outperforms the standard cuckoo search algorithm in convergence characteristics while preserving the attractive features of the original method.
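
A minimal sketch of cuckoo search with a decaying step size, illustrating the adaptive-step idea; the decay schedule, the heavy-tailed Cauchy approximation of Lévy flights, and all parameters are our own illustrative assumptions, not the authors' scheme:

```python
import math, random

def cuckoo_minimize(f, dim, n_nests=15, iters=300, pa=0.25, seed=2):
    """Simplified cuckoo search. The step size shrinks each generation
    (the 'adaptive step'); the standard algorithm keeps it fixed. Levy
    flights are approximated by heavy-tailed Cauchy draws via tan()."""
    rng = random.Random(seed)
    nests = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=f)
    for t in range(iters):
        step = 1.0 * (0.99 ** t)            # adaptive: decays over time
        for i in range(n_nests):
            # Levy-style flight around the current best nest
            cand = [best[j] + step * math.tan(math.pi * (rng.random() - 0.5))
                    for j in range(dim)]
            if f(cand) < f(nests[i]):       # greedy replacement
                nests[i] = cand
        nests.sort(key=f)                   # abandon a fraction pa of worst nests
        for i in range(int((1 - pa) * n_nests), n_nests):
            nests[i] = [rng.uniform(-5, 5) for _ in range(dim)]
        best = min([best] + nests, key=f)
    return best

best = cuckoo_minimize(lambda x: sum(t * t for t in x), dim=3)
```

Early on, the large step encourages exploration; as it decays, the search concentrates around the incumbent best, which is the trade-off the adaptive scheme tunes.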

17. A novel bee swarm optimization algorithm for numerical function optimization

Akbari, Reza; Mohammadi, Alireza; Ziarati, Koorush

2010-10-01

Optimization algorithms inspired by the intelligent behavior of honey bees are among the most recently introduced population-based techniques. In this paper, a novel algorithm called bee swarm optimization (BSO) and two extensions for improving its performance are presented. BSO is a population-based optimization technique inspired by the foraging behavior of honey bees. The proposed approach provides different patterns which are used by the bees to adjust their flying trajectories. As the first extension, the BSO algorithm introduces approaches such as a repulsion factor and penalizing fitness (RP) to mitigate the stagnation problem. Second, to efficiently maintain the balance between exploration and exploitation, time-varying weights (TVW) are introduced into the BSO algorithm. The proposed algorithm (BSO) and its two extensions (BSO-RP and BSO-RPTVW) are compared, on a set of well-known numerical test functions, with existing algorithms based on the intelligent behavior of honey bees. The experimental results show that the BSO algorithms are effective and robust, produce excellent results, and outperform the other algorithms investigated in this comparison.

18. Adaptive Numerical Algorithms in Space Weather Modeling

NASA Technical Reports Server (NTRS)

Toth, Gabor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

2010-01-01

Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical time stepping.

19. Adaptive numerical algorithms in space weather modeling

Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

2012-02-01

Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit time stepping.

20. Determining the Numerical Stability of Quantum Chemistry Algorithms.

PubMed

Knizia, Gerald; Li, Wenbin; Simon, Sven; Werner, Hans-Joachim

2011-08-09

We present a simple, broadly applicable method for determining the numerical properties of quantum chemistry algorithms. The method deliberately introduces random numerical noise into computations, of the same order of magnitude as the floating point precision. Accordingly, repeated runs of an algorithm give slightly different results, which can be analyzed statistically to obtain precise estimates of its numerical stability. This noise is produced by automatic code injection into regular compiler output, so that no substantial programming effort is required, only a recompilation of the affected program sections. The method is applied to investigate: (i) the numerical stability of the three-center Obara-Saika integral evaluation scheme for high angular momenta, (ii) whether coupled cluster perturbative triples can be evaluated with single precision arithmetic, (iii) how to implement the density fitting approximation in Møller-Plesset perturbation theory (MP2) most accurately, and (iv) which parts of density fitted MP2 can be safely evaluated with single precision arithmetic. In the integral case, we find a numerical instability in an equation that is used in almost all integral programs. Based on the results of (ii) and (iv), we conjecture that single precision arithmetic can be applied whenever a calculation is done in an orthogonal basis set and excessively long linear sums are avoided.
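
The idea of repeated runs under injected noise of the order of the floating-point precision can be sketched at the input level (the paper instead injects noise via compiler-level code instrumentation). The variance formulas and data below are illustrative, comparing a cancellation-prone formula against a stable one:

```python
import random, statistics

def jitter(x, rng, eps=2.0 ** -52):
    """Inject random relative noise on the order of the floating-point
    precision, mimicking (at the input level) the noise injection idea."""
    return x * (1.0 + eps * rng.uniform(-1, 1))

def var_unstable(xs):
    """Textbook one-pass variance E[x^2] - E[x]^2: cancellation-prone."""
    n = len(xs)
    return sum(x * x for x in xs) / n - (sum(xs) / n) ** 2

def var_two_pass(xs):
    """Numerically stable two-pass variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def stability_spread(fn, xs, runs=50, seed=3):
    """Std-dev of fn over repeated noisy runs: a proxy for its stability."""
    rng = random.Random(seed)
    outs = [fn([jitter(x, rng) for x in xs]) for _ in range(runs)]
    return statistics.pstdev(outs)

data = [1e8 + i * 0.1 for i in range(100)]   # large mean, tiny variance
spread_bad = stability_spread(var_unstable, data)
spread_good = stability_spread(var_two_pass, data)
```

The unstable formula's outputs scatter by orders of magnitude more than the stable one's, and that scatter is exactly the statistical stability estimate the method reads off.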

1. An algorithm for the numerical solution of linear differential games

SciTech Connect

Polovinkin, E S; Ivanov, G E; Balashov, M V; Konstantinov, R V; Khorev, A V

2001-10-31

A numerical algorithm for the construction of stable Krasovskii bridges, Pontryagin alternating sets, and also of piecewise program strategies solving two-person linear differential (pursuit or evasion) games on a fixed time interval is developed on the basis of a general theory. The aim of the first player (the pursuer) is to hit a prescribed target (terminal) set by the phase vector of the control system at the prescribed time. The aim of the second player (the evader) is the opposite. A description of numerical algorithms used in the solution of differential games of the type under consideration is presented and estimates of the errors resulting from the approximation of the game sets by polyhedra are presented.

2. Research on numerical algorithms for large space structures

NASA Technical Reports Server (NTRS)

Denman, E. D.

1982-01-01

Numerical algorithms for large space structures were investigated with particular emphasis on decoupling method for analysis and design. Numerous aspects of the analysis of large systems ranging from the algebraic theory to lambda matrices to identification algorithms were considered. A general treatment of the algebraic theory of lambda matrices is presented and the theory is applied to second order lambda matrices.

3. Numerical Algorithm for Delta of Asian Option.

PubMed

Zhang, Boxiang; Yu, Yang; Wang, Weiguo

2015-01-01

We study the numerical solution of the Greeks of Asian options. In particular, we derive a closed-form solution for the Δ of the Asian geometric option and use this analytical form as a control to numerically calculate the Δ of the Asian arithmetic option, which is known to have no explicit closed-form solution. We implement our proposed numerical method and compare its standard error with those of other classical variance reduction methods. Our method provides an efficient solution to the hedging strategy with Asian options.
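
The control-variate principle this record relies on can be sketched with a simpler control than the geometric Asian option: the discounted terminal price e^{-rT} S_T, whose risk-neutral expectation is exactly S0. All parameters are illustrative assumptions, and this toy prices the option rather than computing Δ:

```python
import math, random, statistics

def asian_mc(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, m=12,
             n_paths=4000, seed=4):
    """Monte Carlo price of an arithmetic-average Asian call, plain and with
    a control variate whose exact mean is known (E[e^{-rT} S_T] = S0)."""
    rng = random.Random(seed)
    dt = T / m
    payoffs, controls = [], []
    for _ in range(n_paths):
        S, path_sum = S0, 0.0
        for _ in range(m):
            z = rng.gauss(0.0, 1.0)
            S *= math.exp((r - 0.5 * sigma * sigma) * dt
                          + sigma * math.sqrt(dt) * z)
            path_sum += S
        payoffs.append(math.exp(-r * T) * max(path_sum / m - K, 0.0))
        controls.append(math.exp(-r * T) * S)   # exact expectation: S0
    mp, mc = statistics.mean(payoffs), statistics.mean(controls)
    # sample-optimal control coefficient beta = Cov(payoff, control)/Var(control)
    beta = (sum((p - mp) * (c - mc) for p, c in zip(payoffs, controls))
            / sum((c - mc) ** 2 for c in controls))
    adjusted = [p - beta * (c - S0) for p, c in zip(payoffs, controls)]
    return (mp, statistics.pstdev(payoffs),
            statistics.mean(adjusted), statistics.pstdev(adjusted))

p_mean, p_sd, c_mean, c_sd = asian_mc()
```

The paper's geometric-Asian control is far more strongly correlated with the arithmetic payoff than S_T is, so it achieves a much larger variance reduction than this sketch.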

4. The association between symbolic and nonsymbolic numerical magnitude processing and mental versus algorithmic subtraction in adults.

PubMed

Linsen, Sarah; Torbeyns, Joke; Verschaffel, Lieven; Reynvoet, Bert; De Smedt, Bert

2016-03-01

There are two well-known computation methods for solving multi-digit subtraction items, namely mental and algorithmic computation. It has been contended that mental and algorithmic computation rely differentially on numerical magnitude processing, an assumption that has already been examined in children, but not yet in adults. Therefore, in this study, we examined how numerical magnitude processing was associated with mental and algorithmic computation, and whether this association was different for mental versus algorithmic computation. We also investigated whether the association between numerical magnitude processing and mental and algorithmic computation differed for measures of symbolic versus nonsymbolic numerical magnitude processing. Results showed that symbolic, and not nonsymbolic, numerical magnitude processing was associated with mental computation, but not with algorithmic computation. Additional analyses showed, however, that the size of this association with symbolic numerical magnitude processing was not significantly different for mental and algorithmic computation. We also tried to further clarify the association between numerical magnitude processing and complex calculation by including relevant arithmetical subskills needed for complex calculation, i.e., arithmetic facts, which are also known to depend on numerical magnitude processing. Results showed that the associations between symbolic numerical magnitude processing and mental and algorithmic computation were fully explained by individual differences in elementary arithmetic fact knowledge.

5. New Results in Astrodynamics Using Genetic Algorithms

NASA Technical Reports Server (NTRS)

Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.

1998-01-01

Genetic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.

6. A Polynomial Time, Numerically Stable Integer Relation Algorithm

NASA Technical Reports Server (NTRS)

Ferguson, Helaman R. P.; Bailey, Daivd H.; Kutler, Paul (Technical Monitor)

1998-01-01

Let x = (x_1, x_2, ..., x_n) be a vector of real numbers. x is said to possess an integer relation if there exist integers a_i, not all zero, such that a_1 x_1 + a_2 x_2 + ... + a_n x_n = 0. Beginning in 1977, several algorithms (with proofs) have been discovered to recover the a_i given x. The most efficient of the existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically: it often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
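
To make the problem statement concrete, here is a naive exhaustive search for a small integer relation. This only illustrates what PSLQ computes; PSLQ itself uses a numerically stable matrix reduction and scales far beyond what brute force can reach:

```python
import math
from itertools import product

def find_relation(x, bound=5, tol=1e-9):
    """Exhaustive search for an integer relation a.x = 0 with |a_i| <= bound.
    Exponential in len(x); usable only for tiny illustrative instances."""
    n = len(x)
    for a in product(range(-bound, bound + 1), repeat=n):
        if any(a) and abs(sum(ai * xi for ai, xi in zip(a, x))) < tol:
            return list(a)
    return None

# The golden ratio satisfies phi^2 - phi - 1 = 0, so (phi^2, phi, 1)
# admits the integer relation (1, -1, -1) (and its multiples).
phi = (1 + math.sqrt(5)) / 2
rel = find_relation([phi * phi, phi, 1.0])
```

Any relation returned is a multiple of (1, -1, -1), recovering the minimal polynomial of the golden ratio from floating-point data.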

7. A numerical algorithm for endochronic plasticity and comparison with experiment

NASA Technical Reports Server (NTRS)

Valanis, K. C.; Fan, J.

1985-01-01

A numerical algorithm based on the finite element method of analysis of the boundary value problem in a continuum is presented, in the case where the plastic response of the material is given in the context of endochronic plasticity. The relevant constitutive equation is expressed in incremental form and plastic effects are accounted for by the method of an induced pseudo-force in the matrix equations. The results of the analysis are compared with observed values in the case of a plate with two symmetric notches and loaded longitudinally in its own plane. The agreement between theory and experiment is excellent.

8. Research on numerical algorithms for large space structures

NASA Technical Reports Server (NTRS)

Denman, E. D.

1981-01-01

Numerical algorithms for the analysis and design of large space structures are investigated. The sign algorithm and its application to the decoupling of differential equations are presented. The generalized sign algorithm is given and its application to several problems discussed. The Laplace transforms of matrix functions and the diagonalization procedure for a finite element equation are discussed, and the diagonalization of matrix polynomials is considered. The quadrature method and Laplace transforms are discussed, and the identification of linear systems by the quadrature method is investigated.

9. Wake Vortex Algorithm Scoring Results

NASA Technical Reports Server (NTRS)

Robins, R. E.; Delisi, D. P.; Hinton, David (Technical Monitor)

2002-01-01

This report compares the performance of two models of trailing vortex evolution for which interaction with the ground is not a significant factor. One model uses eddy dissipation rate (EDR) and the other uses the kinetic energy of turbulence fluctuations (TKE) to represent the effect of turbulence. In other respects, the models are nearly identical. The models are evaluated by comparing their predictions of circulation decay, vertical descent, and lateral transport to observations for over four hundred cases from Memphis and Dallas/Fort Worth International Airports. These observations were obtained during deployments in support of NASA's Aircraft Vortex Spacing System (AVOSS). The results of the comparisons show that the EDR model usually performs slightly better than the TKE model.

10. An efficient algorithm for numerical airfoil optimization

NASA Technical Reports Server (NTRS)

Vanderplaats, G. N.

1979-01-01

A new optimization algorithm is presented. The method is based on sequential application of a second-order Taylor's series approximation to the airfoil characteristics. Compared to previous methods, design efficiency improvements of more than a factor of 2 are demonstrated. If multiple optimizations are performed, the efficiency improvements are more dramatic due to the ability of the technique to utilize existing data. The method is demonstrated by application to subsonic and transonic airfoil design but is a general optimization technique and is not limited to a particular application or aerodynamic analysis.
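
The core idea here, sequentially minimizing a local second-order Taylor model of the objective, can be sketched in one dimension with finite-difference derivatives; the toy "drag" objective below is a hypothetical stand-in for the aerodynamic analysis:

```python
def sequential_quadratic_min(f, x0, h=1e-4, iters=20):
    """Sequential second-order Taylor approximation: at each design point,
    estimate f' and f'' by finite differences, replace f with its local
    quadratic model, and step to that model's minimizer (a Newton step).
    A 1-D illustration of the strategy, not the airfoil code."""
    x = x0
    for _ in range(iters):
        fp = (f(x + h) - f(x - h)) / (2 * h)            # first derivative
        fpp = (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)  # second derivative
        if fpp <= 0:            # model has no minimum: fall back to small step
            x -= 0.1 * fp
        else:
            x -= fp / fpp       # jump to the quadratic model's minimizer
    return x

# toy convex 'drag' objective with minimum at x = 1.5 (hypothetical)
def drag(x):
    return (x - 1.5) ** 2 + 0.1 * (x - 1.5) ** 4 + 0.02

x_opt = sequential_quadratic_min(drag, x0=0.0)
```

Because each iteration reuses the local model rather than re-evaluating the full analysis everywhere, repeated optimizations can amortize previously computed data, which is the efficiency gain the abstract reports.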

11. A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study

NASA Technical Reports Server (NTRS)

Thornton, C. L.; Bierman, G. J.

1976-01-01

The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, the accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
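
The U-D factorization at the heart of the Bierman-Thornton filter writes the covariance as P = U D Uᵀ, with U unit upper triangular and D diagonal; single-precision accuracy follows because the factors are better conditioned than P itself. A minimal sketch of the factorization step only (not the full measurement update; the function name and loop structure are illustrative):

```python
import numpy as np

def ud_factorize(P):
    """Factor a symmetric positive-definite P as P = U @ D @ U.T,
    with U unit upper triangular and D diagonal (sketch only)."""
    n = P.shape[0]
    P = P.astype(float).copy()
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j]
        for i in range(j):
            U[i, j] = P[i, j] / d[j]
        # Deflate the leading block: P <- P - d_j * u u^T (upper triangle only)
        for i in range(j):
            for k in range(i + 1):
                P[k, i] -= U[k, j] * d[j] * U[i, j]
    return U, np.diag(d)
```

In a full U-D filter, the time and measurement updates are carried out directly on U and D, so the explicit covariance never needs to be formed.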

12. Efficient Parallel Algorithm For Direct Numerical Simulation of Turbulent Flows

NASA Technical Reports Server (NTRS)

Moitra, Stuti; Gatski, Thomas B.

1997-01-01

A distributed algorithm for a high-order-accurate finite-difference approach to the direct numerical simulation (DNS) of transition and turbulence in compressible flows is described. This work has two major objectives. The first objective is to demonstrate that parallel and distributed-memory machines can be successfully and efficiently used to solve computationally intensive and input/output intensive algorithms of the DNS class. The second objective is to show that the computational complexity involved in solving the tridiagonal systems inherent in the DNS algorithm can be reduced by algorithm innovations that obviate the need to use a parallelized tridiagonal solver.

13. Algorithm-Based Fault Tolerance for Numerical Subroutines

NASA Technical Reports Server (NTRS)

Turmon, Michael; Granat, Robert; Lou, John

2007-01-01

A software library implements a new methodology of detecting faults in numerical subroutines, thus enabling application programs that contain the subroutines to recover transparently from single-event upsets. The software library in question is fault-detecting middleware that is wrapped around the numerical subroutines. Conventional serial versions (based on LAPACK and FFTW) and a parallel version (based on ScaLAPACK) exist. The source code of the application program that contains the numerical subroutines is not modified, and the middleware is transparent to the user. The methodology used is a type of algorithm-based fault tolerance (ABFT). In ABFT, a checksum is computed before a computation and compared with the checksum of the computational result; an error is declared if the difference between the checksums exceeds some threshold. Novel normalization methods are used in the checksum comparison to ensure correct fault detections independent of algorithm inputs. In tests of this software reported in the peer-reviewed literature, this library was shown to enable detection of 99.9 percent of significant faults while generating no false alarms.
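
The checksum idea behind ABFT can be illustrated for matrix multiplication: the column sums of C = A·B must equal the column sums of A multiplied by B, so a checksum computed before the multiply can be compared against one computed from the result. A hedged sketch (the fixed tolerance is a simplification; the middleware described above uses normalized thresholds):

```python
import numpy as np

def abft_matmul(A, B, tol=1e-8):
    """Matrix multiply with an ABFT-style checksum test: colsum(A @ B)
    must equal colsum(A) @ B. Returns the product and a fault flag."""
    C = A @ B
    expected = A.sum(axis=0) @ B   # checksum derived independently of C
    actual = C.sum(axis=0)         # checksum of the computed result
    fault = not np.allclose(expected, actual, atol=tol)
    return C, fault
```

A transient bit flip that corrupts an entry of C perturbs `actual` but not `expected`, so the comparison flags the fault without rerunning the computation.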

14. Concurrent Computing: Numerical Algorithms and Some Applications.

DTIC Science & Technology

1986-07-15

determinant of the harmonic frequencies. This result was obtained via a combination of relationships using classical trigonometric moment theory and...component, the output management subsystem, is the most problem dependent. Current plans call for the design of basic tools for displaying results which...will be augmented as particular applications are tried. During this same time period, we plan to establish a network linking the project's concurrent

15. Numerical simulations of catastrophic disruption: Recent results

NASA Technical Reports Server (NTRS)

Benz, W.; Asphaug, E.; Ryan, E. V.

1994-01-01

Numerical simulations have been used to study high velocity two-body impacts. In this paper, a two-dimensional Lagrangian finite difference hydro-code and a three-dimensional smooth particle hydro-code (SPH) are described and initial results reported. These codes can be, and have been, used to make specific predictions about particular objects in our solar system. But more significantly, they allow us to explore a broad range of collisional events. Certain parameters (size, time) can be studied only over a very restricted range within the laboratory; other parameters (initial spin, low gravity, exotic structure or composition) are difficult to study at all experimentally. The outcomes of numerical simulations lead to a more general and accurate understanding of impacts in their many forms.

16. Understanding disordered systems through numerical simulation and algorithm development

Sweeney, Sean Michael

Disordered systems arise in many physical contexts. Not all matter is uniform, and impurities or heterogeneities can be modeled by fixed random disorder. Numerous complex networks also possess fixed disorder, leading to applications in transportation systems, telecommunications, social networks, and epidemic modeling, to name a few. Due to their random nature and power law critical behavior, disordered systems are difficult to study analytically. Numerical simulation can help overcome this hurdle by allowing for the rapid computation of system states. In order to get precise statistics and extrapolate to the thermodynamic limit, large systems must be studied over many realizations. Thus, innovative algorithm development is essential in order to reduce the memory or running time requirements of simulations. This thesis presents a review of disordered systems, as well as a thorough study of two particular systems through numerical simulation, algorithm development and optimization, and careful statistical analysis of scaling properties. Chapter 1 provides a thorough overview of disordered systems, the history of their study in the physics community, and the development of techniques used to study them. Topics of quenched disorder, phase transitions, the renormalization group, criticality, and scale invariance are discussed. Several prominent models of disordered systems are also explained. Lastly, analysis techniques used in studying disordered systems are covered. In Chapter 2, minimal spanning trees on critical percolation clusters are studied, motivated in part by an analytic perturbation expansion by Jackson and Read that I check against numerical calculations. This system has a direct mapping to the ground state of the strongly disordered spin glass. We compute the path length fractal dimension of these trees in dimensions d = {2, 3, 4, 5} and find our results to be compatible with the analytic results suggested by Jackson and Read. In Chapter 3, the random bond Ising

17. Numerical comparison of discrete Kalman filter algorithms - Orbit determination case study

NASA Technical Reports Server (NTRS)

Bierman, G. J.; Thornton, C. L.

1976-01-01

Numerical characteristics of various Kalman filter algorithms are illustrated with a realistic orbit determination study. The case study of this paper highlights the numerical deficiencies of the conventional and stabilized Kalman algorithms. Computational errors associated with these algorithms are found to be so large as to obscure important mismodeling effects and thus cause misleading estimates of filter accuracy. The positive result of this study is that the U-D covariance factorization algorithm has excellent numerical properties and is computationally efficient, having CPU costs that differ negligibly from the conventional Kalman costs. Accuracies of the U-D filter using single precision arithmetic consistently match the double precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.

18. Multiresolution representation and numerical algorithms: A brief review

NASA Technical Reports Server (NTRS)

Harten, Amiram

1994-01-01

In this paper we review recent developments in techniques to represent data in terms of its local scale components. These techniques enable us to obtain data compression by eliminating scale-coefficients which are sufficiently small. This capability for data compression can be used to reduce the cost of many numerical solution algorithms by either applying it to the numerical solution operator in order to get an approximate sparse representation, or by applying it to the numerical solution itself in order to reduce the number of quantities that need to be computed.
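
The compression-by-thresholding idea can be sketched with a one-level Haar transform, used here as an illustrative stand-in for the general multiresolution framework reviewed in the paper: split the data into coarse averages and detail (scale) coefficients, zero the details below a tolerance, and reconstruct.

```python
import numpy as np

def haar_compress(x, eps):
    """One-level Haar split of x (even length): keep the averages,
    zero detail coefficients smaller than eps, then reconstruct."""
    a = (x[0::2] + x[1::2]) / 2.0   # coarse averages
    d = (x[0::2] - x[1::2]) / 2.0   # detail (scale) coefficients
    d[np.abs(d) < eps] = 0.0        # the data-compression step
    return np.ravel(np.column_stack((a + d, a - d)))
```

Where the data are locally smooth the detail coefficients are small, so thresholding discards them with an error bounded by eps, which is exactly the sparse-representation mechanism the review exploits for solution operators.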

19. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

NASA Technical Reports Server (NTRS)

Abrams, D.; Williams, C.

1999-01-01

We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.

20. Numerical stability analysis of the pseudo-spectral analytical time-domain PIC algorithm

SciTech Connect

Godfrey, Brendan B.; Vay, Jean-Luc; Haber, Irving

2014-02-01

The pseudo-spectral analytical time-domain (PSATD) particle-in-cell (PIC) algorithm solves the vacuum Maxwell's equations exactly, has no Courant time-step limit (as conventionally defined), and offers substantial flexibility in plasma and particle beam simulations. It is, however, not free of the usual numerical instabilities, including the numerical Cherenkov instability, when applied to relativistic beam simulations. This paper derives and solves the numerical dispersion relation for the PSATD algorithm and compares the results with corresponding behavior of the more conventional pseudo-spectral time-domain (PSTD) and finite difference time-domain (FDTD) algorithms. In general, PSATD offers superior stability properties over a reasonable range of time steps. More importantly, one version of the PSATD algorithm, when combined with digital filtering, is almost completely free of the numerical Cherenkov instability for time steps (scaled to the speed of light) comparable to or smaller than the axial cell size.

1. Numerical algorithms for the atomistic dopant profiling of semiconductor materials

Aghaei Anvigh, Samira

In this dissertation, we investigate the possibility to use scanning microscopy such as scanning capacitance microscopy (SCM) and scanning spreading resistance microscopy (SSRM) for the "atomistic" dopant profiling of semiconductor materials. For this purpose, we first analyze the discrete effects of random dopant fluctuations (RDF) on SCM and SSRM measurements with nanoscale probes and show that RDF significantly affects the differential capacitance and spreading resistance of the SCM and SSRM measurements if the dimension of the probe is below 50 nm. Then, we develop a mathematical algorithm to compute the spatial coordinates of the ionized impurities in the depletion region using a set of scanning microscopy measurements. The proposed numerical algorithm is then applied to extract the (x, y, z) coordinates of ionized impurities in the depletion region in the case of a few semiconductor materials with different doping configurations. The numerical algorithm developed to solve the above inverse problem is based on the evaluation of doping sensitivity functions of the differential capacitance, which show how sensitive the differential capacitance is to doping variations at different locations. To develop the numerical algorithm we first express the doping sensitivity functions in terms of the Gâteaux derivative of the differential capacitance, use the Riesz representation theorem, and then apply a gradient optimization approach to compute the locations of the dopants. The algorithm is verified numerically using 2-D simulations, in which the C-V curves are measured at 3 different locations on the surface of the semiconductor. Although the cases studied in this dissertation are highly idealized and, in reality, the C-V measurements are subject to noise and other experimental errors, it is shown that if the differential capacitance is measured precisely, SCM measurements can be potentially used for the "atomistic" profiling of ionized impurities in doped semiconductors.

2. Convergence Results on Iteration Algorithms to Linear Systems

PubMed Central

Wang, Zhuande; Yang, Chuansheng; Yuan, Yubo

2014-01-01

In order to solve large-scale linear systems, backward and Jacobi iteration algorithms are employed. The convergence is the most important issue. In this paper, a unified backward iterative matrix is proposed. It shows that some well-known iterative algorithms can be deduced with it. The most important result is that the convergence results have been proved. Firstly, the spectral radius of the Jacobi iterative matrix is positive and that of the backward iterative matrix is strongly positive (larger than a positive constant). Secondly, the mentioned two iterations have the same convergence results (convergence or divergence simultaneously). Finally, some numerical experiments show that the proposed algorithms are correct and retain the merits of backward methods. PMID:24991640
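
For reference, the Jacobi iteration discussed above can be written in a few lines (a generic textbook form, not the paper's unified backward iterative matrix):

```python
import numpy as np

def jacobi(A, b, iters=200):
    """Plain Jacobi iteration x <- D^{-1}(b - (A - D)x).  It converges
    when the spectral radius of the iteration matrix D^{-1}(A - D) is
    below 1, e.g. for strictly diagonally dominant A."""
    D = np.diag(A)          # diagonal entries of A
    R = A - np.diag(D)      # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x
```

The spectral-radius condition is exactly the quantity the paper's convergence proofs bound for both the Jacobi and the backward iterations.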

3. Path Integrals and Exotic Options:. Methods and Numerical Results

Bormetti, G.; Montagna, G.; Moreni, N.; Nicrosini, O.

2005-09-01

In the framework of the Black-Scholes-Merton model of financial derivatives, a path integral approach to option pricing is presented. A general formula to price path dependent options on multidimensional and correlated underlying assets is obtained and implemented by means of various flexible and efficient algorithms. As an example, we detail the case of Asian call options. The numerical results are compared with those obtained with other procedures used in quantitative finance and found to be in good agreement. In particular, when pricing at the money (ATM) and out of the money (OTM) options, the path integral approach exhibits competitive performance.
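
As a point of comparison, an arithmetic-average Asian call can be priced with plain Monte Carlo under Black-Scholes dynamics, one of the standard procedures such path-integral results are typically benchmarked against. A hedged sketch (this is the comparison baseline, not the paper's path-integral algorithm; parameter names are illustrative):

```python
import numpy as np

def asian_call_mc(S0, K, r, sigma, T, n_steps, n_paths, seed=0):
    """Monte Carlo price of an arithmetic-average Asian call under
    geometric Brownian motion with rate r and volatility sigma."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    # Cumulative log-returns along each simulated path
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(log_paths)
    payoff = np.maximum(S.mean(axis=1) - K, 0.0)  # average-price payoff
    return np.exp(-r * T) * payoff.mean()
```

Because the payoff depends on the whole path average, there is no closed-form Black-Scholes price, which is why path-integral and Monte Carlo methods are compared on this contract.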

4. Anisotropic halo model: implementation and numerical results

Sgró, Mario A.; Paz, Dante J.; Merchán, Manuel

2013-07-01

In the present work, we extend the classic halo model for the large-scale matter distribution including a triaxial model for the halo profiles and their alignments. In particular, we derive general expressions for the halo-matter cross-correlation function. In addition, by numerical integration, we obtain instances of the cross-correlation function depending on the directions given by halo shape axes. These functions are called anisotropic cross-correlations. With the aim of comparing our theoretical results with the simulations, we compute averaged anisotropic correlations in cones with their symmetry axis along each shape direction of the centre halo. From these comparisons we characterize and quantify the alignment of dark matter haloes in the Λ cold dark matter context by means of the presented anisotropic halo model. Since our model requires multidimensional integral computation, we implement a Monte Carlo method on GPU hardware, which increases the precision of the results and improves the performance of the computation.

5. A direct numerical reconstruction algorithm for the 3D Calderón problem

Delbary, Fabrice; Hansen, Per Christian; Knudsen, Kim

2011-04-01

In three dimensions Calderón's problem was addressed and solved in theory in the 1980s in a series of papers, but the numerical implementation of the algorithm has been initiated only recently. The main ingredients in the solution of the problem are complex geometrical optics solutions to the conductivity equation and a (non-physical) scattering transform. The resulting reconstruction algorithm is in principle direct and addresses the full non-linear problem immediately. In this paper we will outline the theoretical reconstruction method and describe how the method can be implemented numerically. We will give three different implementations, and compare their performance on a numerical phantom.

6. Algorithms for the Fractional Calculus: A Selection of Numerical Methods

NASA Technical Reports Server (NTRS)

Diethelm, K.; Ford, N. J.; Freed, A. D.; Luchko, Yu.

2003-01-01

Many recently developed models in areas like viscoelasticity, electrochemistry, diffusion processes, etc. are formulated in terms of derivatives (and integrals) of fractional (non-integer) order. In this paper we present a collection of numerical algorithms for the solution of the various problems arising in this context. We believe that this will give the engineer the necessary tools required to work with fractional models in an efficient way.

7. Canonical algorithms for numerical integration of charged particle motion equations

Efimov, I. N.; Morozov, E. A.; Morozova, A. R.

2017-02-01

A technique for numerically integrating the equation of charged particle motion in a magnetic field is considered. It is based on the canonical transformations of the phase space in Hamiltonian mechanics. The canonical transformations make the integration process stable against counting error accumulation. The integration algorithms contain the minimum possible amount of arithmetic and can be used to design accelerators and devices of electron and ion optics.
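
A widely used example of a structure-preserving charged-particle integrator is the Boris rotation scheme, shown here as a generic illustration (the canonical-transformation algorithms of the paper differ in detail): a half electric kick, an exact-norm magnetic rotation, a second half kick, and a position drift.

```python
import numpy as np

def boris_step(x, v, q_over_m, E, B, dt):
    """One Boris step for dv/dt = (q/m)(E + v x B).  The magnetic
    update is a pure rotation, so |v| is preserved exactly when E = 0,
    which keeps long-time integrations stable."""
    v_minus = v + 0.5 * dt * q_over_m * E            # half electric kick
    t = 0.5 * dt * q_over_m * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)         # magnetic rotation
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * dt * q_over_m * E         # half electric kick
    return x + dt * v_new, v_new
```

The absence of secular growth in the kinetic energy is the same property the paper's canonical algorithms secure through phase-space transformations.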

8. The development and evaluation of numerical algorithms for MIMD computers

NASA Technical Reports Server (NTRS)

Voigt, Robert G.

1990-01-01

Two activities were pursued under this grant. The first was a visitor program to conduct research on numerical algorithms for MIMD computers. The program is summarized in the following attachments: Attachment A - List of Researchers Supported; Attachment B - List of Reports Completed; and Attachment C - Reports. The second activity was a workshop on the Control of Fluid Dynamic Systems held on March 28-29, 1989. The workshop is summarized in the following attachments: Attachment D - Workshop Summary; and Attachment E - List of Workshop Participants.

9. A Numerical Algorithm for the Solution of a Phase-Field Model of Polycrystalline Materials

SciTech Connect

Dorr, M R; Fattebert, J; Wickett, M E; Belak, J F; Turchi, P A

2008-12-04

We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in polycrystalline materials. The PFM system of equations includes a local order parameter, a quaternion representation of local orientation and a species composition parameter. The algorithm is based on the implicit integration of a semidiscretization of the PFM system using a backward difference formula (BDF) temporal discretization combined with a Newton-Krylov algorithm to solve the nonlinear system at each time step. The BDF algorithm is combined with a coordinate projection method to maintain quaternion unit length, which is related to an important solution invariant. A key element of the Newton-Krylov algorithm is the selection of a preconditioner to accelerate the convergence of the Generalized Minimum Residual algorithm used to solve the Jacobian linear system in each Newton step. Results are presented for the application of the algorithm to 2D and 3D examples.
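
The implicit-integration pattern described above can be sketched for a single backward-Euler step (the simplest BDF member): a Newton iteration whose linear systems are solved by GMRES using a finite-difference Jacobian-vector product. This is an illustrative, unpreconditioned sketch, not the paper's preconditioned solver.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def backward_euler_newton_krylov(f, u0, dt, newton_iters=10, tol=1e-10):
    """One backward-Euler step: solve u = u0 + dt*f(u) by Newton's
    method, with GMRES on each linear system and a matrix-free
    finite-difference approximation of the Jacobian-vector product."""
    u = u0.copy()
    n = u0.size
    for _ in range(newton_iters):
        r = u - u0 - dt * f(u)                  # nonlinear residual
        if np.linalg.norm(r) < tol:
            break
        eps = 1e-7
        def jv(v):
            # J v ~= (r(u + eps v) - r(u)) / eps, without forming J
            return (u + eps * v - u0 - dt * f(u + eps * v) - r) / eps
        J = LinearOperator((n, n), matvec=jv)
        du, _ = gmres(J, -r, atol=1e-12)
        u = u + du
    return u
```

In the paper's setting the Krylov solve is where the preconditioner enters: it is applied inside GMRES to cut the iteration count for the stiff phase-field Jacobian.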

10. Preliminary results from MERIS Land Algorithm

Gobron, N.; Pinty, B.; Taberner, M.; Melin, F.; Verstraete, M. M.; Widlowski, J.-L.

2003-04-01

This paper presents a first and preliminary evaluation of the performance of the algorithm implemented in the Medium Resolution Imaging Spectrometer (MERIS) ground segment for assessing the status of land surfaces. First, we propose an updated version of the MERIS algorithm itself, which improves the accuracy of the product. Second, we analyze the first results by inter-comparing the MERIS Global Vegetation Index (MGVI) to similar products derived from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) that are generated at the European Commission Joint Research Center (EC-JRC). The first evaluation between MERIS and SeaWiFS derived products is made using data acquired on the same day by both instruments. The results show acceptable agreement and the differences are well understood by radiation transfer model simulations.

11. The Aquarius Salinity Retrieval Algorithm: Early Results

NASA Technical Reports Server (NTRS)

Meissner, Thomas; Wentz, Frank J.; Lagerloef, Gary; LeVine, David

2012-01-01

The Aquarius L-band radiometer/scatterometer system is designed to provide monthly salinity maps at 150 km spatial scale to a 0.2 psu accuracy. The sensor was launched on June 10, 2011, aboard the Argentine CONAE SAC-D spacecraft. The L-band radiometers and the scatterometer have been taking science data observations since August 25, 2011. The first part of this presentation gives an overview of the Aquarius salinity retrieval algorithm. The instrument calibration converts Aquarius radiometer counts into antenna temperatures (TA). The salinity retrieval algorithm converts those TA into brightness temperatures (TB) at a flat ocean surface. As a first step, contributions arising from the intrusion of solar, lunar and galactic radiation are subtracted. The antenna pattern correction (APC) removes the effects of cross-polarization contamination and spillover. The Aquarius radiometer measures the 3rd Stokes parameter in addition to vertical (v) and horizontal (h) polarizations, which allows for an easy removal of ionospheric Faraday rotation. The atmospheric absorption at L-band is almost entirely due to O2, which can be calculated based on auxiliary input fields from numerical weather prediction models and then successively removed from the TB. The final step in the TA to TB conversion is the correction for the roughness of the sea surface due to wind. This is based on the radar backscatter measurements by the scatterometer. The TB of the flat ocean surface can now be matched to a salinity value using a surface emission model that is based on a model for the dielectric constant of sea water and an auxiliary field for the sea surface temperature. In the current processing (as of writing this abstract) only v-pol TB are used for this last process and NCEP winds are used for the roughness correction. Before the salinity algorithm can be operationally implemented and its accuracy assessed by comparing versus in situ measurements, an extensive calibration and validation

12. A novel wavefront-based algorithm for numerical simulation of quasi-optical systems

Zhang, Xiaoling; Lou, Zheng; Hu, Jie; Zhou, Kangmin; Zuo, Yingxi; Shi, Shengcai

2016-11-01

A novel wavefront-based algorithm for the beam simulation of both reflective and refractive optics in a complicated quasi-optical system is proposed. The algorithm can be regarded as an extension of the conventional Physical Optics algorithm to handle dielectrics. Internal reflections are modeled in an accurate fashion, and coatings and lossy materials can be treated in a straightforward manner. A parallel implementation of the algorithm has been developed, and numerical examples show that the algorithm yields sufficient accuracy by comparison with experimental results, while the computational complexity is much less than that of the full-wave methods. The algorithm offers an alternative approach to the modeling of quasi-optical systems in addition to Geometrical Optics modeling and full-wave methods.

13. Image reconstruction algorithms for electrical capacitance tomography based on ROF model using new numerical techniques

Chen, Jiaoxuan; Zhang, Maomao; Liu, Yinyan; Chen, Jiaoliao; Li, Yi

2017-03-01

Electrical capacitance tomography (ECT) is a promising technique applied in many fields. However, the solutions for ECT are not unique and are highly sensitive to measurement noise. To preserve the shape of the reconstructed object and to tolerate noisy data, a Rudin-Osher-Fatemi (ROF) model with total variation regularization is applied to image reconstruction in ECT. Two numerical methods, simplified augmented Lagrangian (SAL) and accelerated alternating direction method of multipliers (AADMM), are innovatively introduced to address the above-mentioned problems in ECT. The effects of the parameters, the number of iterations for the different algorithms, and the noise level in the capacitance data are discussed. Both simulation and experimental tests were carried out to validate the feasibility of the proposed algorithms, compared to the Landweber iteration (LI) algorithm. The results show that the SAL and AADMM algorithms can handle a high level of noise and that the AADMM algorithm outperforms the other algorithms in identifying the object from its background.
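
The ROF model minimizes a total-variation term plus a data-fidelity term. A hedged 1-D sketch using plain gradient descent on a smoothed version of that energy (the SAL and AADMM solvers above are more sophisticated; the step size, smoothing parameter, and weights below are illustrative choices):

```python
import numpy as np

def rof_denoise(f, lam=1.0, tau=0.002, iters=300, eps=1e-2):
    """Gradient descent on the smoothed 1-D ROF energy
    sum_i sqrt((u_{i+1}-u_i)^2 + eps) + (lam/2) * sum_i (u_i - f_i)^2,
    where eps smooths the |.| kink and lam weights data fidelity."""
    u = f.astype(float).copy()
    for _ in range(iters):
        du = np.diff(u)
        w = du / np.sqrt(du**2 + eps)   # gradient of the smoothed TV term
        tv_grad = np.zeros_like(u)
        tv_grad[:-1] -= w
        tv_grad[1:] += w
        u -= tau * (tv_grad + lam * (u - f))
    return u
```

The total-variation term penalizes oscillation while tolerating jumps, which is why ROF-type regularization preserves object boundaries in the reconstructed ECT images.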

14. Predictive Lateral Logic for Numerical Entry Guidance Algorithms

NASA Technical Reports Server (NTRS)

Smith, Kelly M.

2016-01-01

Recent entry guidance algorithm development has tended to focus on numerical integration of trajectories onboard in order to evaluate candidate bank profiles. Such methods enjoy benefits such as flexibility to varying mission profiles and improved robustness to large dispersions. A common element across many of these modern entry guidance algorithms is a reliance upon the concept of Apollo heritage lateral error (or azimuth error) deadbands in which the number of bank reversals to be performed is non-deterministic. This paper presents a closed-loop bank reversal method that operates with a fixed number of bank reversals defined prior to flight. However, this number of bank reversals can be modified at any point, including in flight, based on contingencies such as fuel leaks where propellant usage must be minimized.

15. Numerical optimization algorithm for rotationally invariant multi-orbital slave-boson method

Quan, Ya-Min; Wang, Qing-wei; Liu, Da-Yong; Yu, Xiang-Long; Zou, Liang-Jian

2015-06-01

We develop a generalized numerical optimization algorithm for the rotationally invariant multi-orbital slave boson approach, which is applicable for arbitrary boundary constraints of a high-dimensional objective function by combining several classical optimization techniques. After constructing the calculation architecture of the rotationally invariant multi-orbital slave boson model, we apply this optimization algorithm to find the stable ground state and magnetic configuration of two-orbital Hubbard models. The numerical results are consistent with available solutions, confirming the correctness and accuracy of our present algorithm. Furthermore, we utilize it to explore the effects of the transverse Hund's coupling terms on the metal-insulator transition, the orbital selective Mott phase, and magnetism. These results show the rapid convergence and the robust, stable character of our algorithm in searching for the optimized solution of strongly correlated electron systems.

16. Testing Numerical Dynamo Models Against Experimental Results

Gissinger, C. J.; Fauve, S.; Dormy, E.

2007-12-01

Significant progress has been achieved over the past few years in describing the geomagnetic field using computer models for dynamo action. Such models are so far limited to parameter regimes which are very remote from actual values relevant to the Earth core or any liquid metal (the magnetic Prandtl number is always overestimated by a factor of at least 10^4). While existing models successfully reproduce many of the magnetic observations, it is difficult to assert their validity. The recent success of an experimental homogeneous unconstrained dynamo (VKS) provides a new way to investigate dynamo action in turbulent conducting flows, but it also offers a chance to test the validity of existing numerical models. We use a code originally written for the Geodynamo (Parody) and apply it to the experimental configuration. The direct comparison of simulations and experiments is of great interest to test the predictive value of numerical simulations for dynamo action. These turbulent simulations allow us to approach issues which are very relevant for geophysical dynamos, especially the competition between different magnetic modes and the dynamics of reversals.

17. Fast algorithms for numerical, conservative, and entropy approximations of the Fokker-Planck-Landau equation

SciTech Connect

Buet, C.; Cordier; Degond, P.; Lemou, M.

1997-05-15

We present fast numerical algorithms to solve the nonlinear Fokker-Planck-Landau equation in 3D velocity space. The discretization of the collision operator preserves the properties required by the physical nature of the Fokker-Planck-Landau equation, such as the conservation of mass, momentum, and energy, the decay of the entropy, and the fact that the steady states are Maxwellians. At the end of this paper, we give numerical results illustrating the efficiency of these fast algorithms in terms of accuracy and CPU time. 20 refs., 7 figs.

18. An Adaptive Cauchy Differential Evolution Algorithm for Global Numerical Optimization

PubMed Central

Choi, Tae Jong; Ahn, Chang Wook; An, Jinung

2013-01-01

Adaptation of control parameters, such as the scaling factor (F), crossover rate (CR), and population size (NP), is one of the major problems in the Differential Evolution (DE) literature. A well-designed adaptive or self-adaptive parameter control method can greatly improve the performance of DE. Although there are many suggestions for adapting the control parameters, it is still a challenging task to adapt them properly for a given problem. In this paper, we present an adaptive parameter control DE algorithm. In the proposed algorithm, each individual has its own control parameters, which are adapted using the Cauchy distribution based on the average of the parameter values of successfully evolved individuals. Through this, the control parameters of each individual are assigned values either near or far from this average, which may be better parameter values for the next generation. The experimental results show that the proposed algorithm is more robust than the standard DE algorithm and several state-of-the-art adaptive DE algorithms in solving various unimodal and multimodal problems. PMID:23935445
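
The Cauchy-based adaptation step can be sketched as follows (a simplified illustration of the idea, not the paper's exact update rule; the scale 0.1, the fallback location 0.5, and the clipping range are assumptions):

```python
import numpy as np

def adapt_parameter(successful_values, rng, low=0.1, high=1.0):
    """Draw a new control parameter (e.g. F or CR) from a Cauchy
    distribution centred on the mean of the parameter values that
    produced successful offspring, clipped to a valid range."""
    loc = np.mean(successful_values) if len(successful_values) else 0.5
    value = loc + 0.1 * rng.standard_cauchy()
    return float(np.clip(value, low, high))
```

The heavy tails of the Cauchy distribution are what occasionally place a parameter far from the successful average, preserving exploration while most draws stay near values that have already worked.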

19. An adaptive Cauchy differential evolution algorithm for global numerical optimization.

PubMed

Choi, Tae Jong; Ahn, Chang Wook; An, Jinung

2013-01-01

Adaptation of control parameters, such as the scaling factor (F), crossover rate (CR), and population size (NP), is one of the major problems in the Differential Evolution (DE) literature. A well-designed adaptive or self-adaptive parameter control method can greatly improve the performance of DE. Although there are many suggestions for adapting the control parameters, it is still a challenging task to adapt them properly for a given problem. In this paper, we present an adaptive parameter control DE algorithm. In the proposed algorithm, each individual has its own control parameters, which are adapted using the Cauchy distribution based on the average of the parameter values of successfully evolved individuals. Through this, the control parameters of each individual are assigned values either near or far from this average, which may be better parameter values for the next generation. The experimental results show that the proposed algorithm is more robust than the standard DE algorithm and several state-of-the-art adaptive DE algorithms in solving various unimodal and multimodal problems.

20. An efficient numerical algorithm for transverse impact problems

NASA Technical Reports Server (NTRS)

Sankar, B. V.; Sun, C. T.

1985-01-01

Transverse impact problems in which the elastic and plastic indentation effects are considered involve a nonlinear integral equation for the contact force, which, in practice, is usually solved by an iterative scheme with small increments in time. In this paper, a numerical method is proposed wherein the iterations of the nonlinear problem are separated from the structural response computations. This makes the numerical procedures much simpler and also more efficient. The proposed method is applied to some impact problems for which solutions are available, and the results are found to be in good agreement. The effect of the magnitude of the time increment on the results is also discussed.

1. Computational Fluid Dynamics. [numerical methods and algorithm development

NASA Technical Reports Server (NTRS)

1992-01-01

This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12-14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling are addressed, along with examples of results obtained with the most recent algorithm developments.

2. A numeric comparison of variable selection algorithms for supervised learning

Palombo, G.; Narsky, I.

2009-12-01

Datasets in modern High Energy Physics (HEP) experiments are often described by dozens or even hundreds of input variables. Reducing a full variable set to a subset that most completely represents the information in the data is therefore an important task in the analysis of HEP data. We compare various variable selection algorithms for supervised learning using several datasets, such as the imaging gamma-ray Cherenkov telescope (MAGIC) data found at the UCI repository. We use classifiers and variable selection methods implemented in the statistical package StatPatternRecognition (SPR), a free open-source C++ package developed in the HEP community ( http://sourceforge.net/projects/statpatrec/). For each dataset, we select a powerful classifier and estimate its learning accuracy on variable subsets obtained by various selection algorithms. When possible, we also estimate the CPU time needed for the variable subset selection. The results of this analysis are compared with those published previously for these datasets using other statistical packages such as R and Weka. We show that the most accurate, yet slowest, method is a wrapper algorithm known as generalized sequential forward selection ("Add N Remove R") implemented in SPR.

3. Analysis of V-cycle multigrid algorithms for forms defined by numerical quadrature

SciTech Connect

Bramble, J.H. (Dept. of Mathematics); Goldstein, C.I.; Pasciak, J.E. (Applied Mathematics Dept.)

1994-05-01

The authors describe and analyze certain V-cycle multigrid algorithms with forms defined by numerical quadrature applied to the approximation of symmetric second-order elliptic boundary value problems. This approach can be used for the efficient solution of finite element systems resulting from numerical quadrature as well as systems arising from finite difference discretizations. The results are based on a regularity-free theory and hence apply to meshes with local grid refinement as well as the quasi-uniform case. It is shown that uniform (independent of the number of levels) convergence rates often hold for appropriately defined V-cycle algorithms with as few as one smoothing per grid. These results hold even in applications without full elliptic regularity, e.g., a domain in R^2 with a crack.
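For readers unfamiliar with the method, a generic V-cycle with a single weighted-Jacobi sweep per grid can be sketched on the 1-D model problem (an illustrative sketch only; it does not reproduce the quadrature-defined forms analyzed in the paper):

```python
import numpy as np

def apply_A(u, h2):
    # Matrix-free 1-D Laplacian -u'' (homogeneous Dirichlet BCs),
    # acting on the interior nodes only.
    up = np.r_[0.0, u, 0.0]
    return (-up[:-2] + 2.0 * up[1:-1] - up[2:]) / h2

def vcycle(u, f):
    # One V-cycle with a single weighted-Jacobi sweep before and
    # after the coarse-grid correction; len(u) must be 2**k - 1.
    n = len(u)
    h2 = 1.0 / (n + 1) ** 2
    if n == 1:
        return np.array([h2 * f[0] / 2.0])        # exact coarse solve
    def jacobi(v):
        vp = np.r_[0.0, v, 0.0]
        return v + (2.0 / 3.0) * ((vp[:-2] + vp[2:] + h2 * f) / 2.0 - v)
    u = jacobi(u)                                  # pre-smoothing
    r = f - apply_A(u, h2)
    rc = 0.25 * (r[:-2:2] + 2.0 * r[1:-1:2] + r[2::2])  # full weighting
    ec = vcycle(np.zeros((n - 1) // 2), rc)        # recurse on coarse grid
    e = np.zeros(n)
    e[1::2] = ec                                   # coarse nodes
    e[0::2] = 0.5 * (np.r_[0.0, ec] + np.r_[ec, 0.0])   # linear interpolation
    return jacobi(u + e)                           # post-smoothing

# Demo: residual reduction on a random right-hand side.
n = 63
rng = np.random.default_rng(0)
f = rng.standard_normal(n)
u = np.zeros(n)
for _ in range(20):
    u = vcycle(u, f)
res = np.linalg.norm(f - apply_A(u, 1.0 / (n + 1) ** 2))
```

The per-cycle residual contraction factor stays bounded away from 1 independently of the grid size, which is the level-independent convergence property the paper establishes for the quadrature setting.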

4. PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release

Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.

2016-09-01

The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.
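For a spherical grain under constant conditions, the analytic modal solution on which PolyPole-1 builds reduces to the classical Booth series for fractional release; a minimal sketch (the function name and truncation depth are assumptions):

```python
import math

def fractional_release(tau, n_terms=400):
    # Booth fractional release for intra-granular diffusion in a
    # spherical grain under constant conditions, with tau = D*t/a**2
    # the dimensionless time (D diffusivity, a grain radius):
    #   f = 1 - (6/pi**2) * sum_{n>=1} exp(-(n*pi)**2 * tau) / n**2
    s = sum(math.exp(-(n * math.pi) ** 2 * tau) / (n * n)
            for n in range(1, n_terms + 1))
    return 1.0 - (6.0 / math.pi ** 2) * s
```

For small tau the series agrees with the familiar short-time approximation f ≈ 6·sqrt(tau/pi) − 3·tau; the time-varying-conditions case is where the polynomial corrective terms of PolyPole-1 come in.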

5. Numerical Results of 3-D Modeling of Moon Accumulation

Khachay, Yurie; Anfilogov, Vsevolod; Antipin, Alexandr

2014-05-01

6. Stochastic coalescence in finite systems: an algorithm for the numerical solution of the multivariate master equation.

Alfonso, Lester; Zamora, Jose; Cruz, Pedro

2015-04-01

The stochastic approach to coagulation considers the coalescence process in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results have been obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial condition is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with excellent correspondence between the analytical and numerical solutions. To increase the speedup of the algorithm, software parallelization techniques based on the OpenMP standard were used, along with an implementation designed to take advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithm. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.
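For contrast, the kind of direct Monte Carlo simulation that the master-equation solver complements can be sketched for the constant kernel (illustrative only; names and defaults are assumptions):

```python
import random

def coalesce_constant_kernel(n0=100, K=1.0, V=1.0, t_end=1.0, seed=7):
    # Gillespie-style Monte Carlo for a finite coalescing system with
    # a constant kernel: each of the n(n-1)/2 pairs merges at rate K/V.
    random.seed(seed)
    masses = [1.0] * n0
    t = 0.0
    while len(masses) > 1:
        n = len(masses)
        total_rate = K / V * n * (n - 1) / 2.0
        t += random.expovariate(total_rate)   # waiting time to next merge
        if t > t_end:
            break
        i, j = random.sample(range(n), 2)     # uniformly random pair
        masses[i] += masses[j]                # coalesce j into i ...
        masses.pop(j)                         # ... and remove j
    return masses
```

One run yields a single realization of the particle mass spectrum; the master-equation approach of the paper instead evolves the full probability distribution over such states.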

7. Algorithm Development and Application of High Order Numerical Methods for Shocked and Rapid Changing Solutions

DTIC Science & Technology

2007-12-06

The problems studied in this project involve numerically solving partial differential equations with either discontinuous or rapidly changing solutions. The project concerns algorithm development and application of high order numerical methods, including discontinuous Galerkin finite element methods, for solving partial differential equations with discontinuous or rapidly changing solutions.

8. A fast algorithm for numerical solutions to Fortet's equation

Brumen, Gorazd

2008-10-01

A fast algorithm for computation of default times of multiple firms in a structural model is presented. The algorithm uses a multivariate extension of Fortet's equation and the structure of Toeplitz matrices to significantly improve the computation time. In a financial market consisting of M (not ≫ 1) firms and N discretization points in every dimension, the algorithm uses O(n log n · M · M! · N^(M(M-1)/2)) operations, where n is the number of discretization points in the time domain. The algorithm is applied to firm survival probability computation and zero coupon bond pricing.
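The n log n factor in the operation count comes from the Toeplitz structure; the underlying trick, multiplying a Toeplitz matrix by a vector via circulant embedding and the FFT, can be sketched as follows (names are illustrative):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    # Multiply an n-by-n Toeplitz matrix (first column c, first row r,
    # with c[0] == r[0]) by x in O(n log n): embed the matrix in a
    # 2n-by-2n circulant, whose action is a cyclic convolution
    # computed with the FFT.
    n = len(x)
    first_col = np.concatenate([c, [0.0], r[:0:-1]])   # circulant column
    xp = np.concatenate([x, np.zeros(n)])              # zero-padded vector
    y = np.fft.ifft(np.fft.fft(first_col) * np.fft.fft(xp))
    return y[:n].real
```

The circulant entry at offset (i − j) mod 2n reproduces c[i−j] below the diagonal and r[j−i] above it, so the first n entries of the cyclic convolution equal the dense product.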

9. Parametric effects of CFL number and artificial smoothing on numerical solutions using implicit approximate factorization algorithm

NASA Technical Reports Server (NTRS)

Daso, E. O.

1986-01-01

An implicit approximate factorization algorithm is employed to quantify the parametric effects of Courant number and artificial smoothing on numerical solutions of the unsteady 3-D Euler equations for a windmilling propeller (low speed) flow field. The results show that propeller global or performance characteristics vary strongly with Courant number and artificial dissipation parameters, though the variation is much less severe at high Courant numbers. Certain sets of Courant number and dissipation parameters can result in parameter-dependent solutions. Parameter-independent numerical solutions can be obtained if low values of the dissipation parameter-to-time step ratio are used in the computations. Furthermore, it is found that too much artificial damping can degrade numerical stability. Finally, it is demonstrated that highly resolved meshes may, in some cases, delay convergence, thereby suggesting some optimum cell size for a given flow solution. It is suspected that improper boundary treatment may account for the cell size constraint.

10. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

PubMed

Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

2015-11-11

Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
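The offline block-matrix derivation can be illustrated on a toy constant-velocity model (a sketch under an assumed block structure, not the paper's actual SINS/GPS system):

```python
import numpy as np

# State [p; v] with transition F = [[I, dt*I], [0, I]].  Rather than
# forming the dense product F P F^T online, the blocks of the result
# are derived offline once, so the online code performs only the
# block operations that are actually non-trivial.

def propagate_dense(P, F):
    # Reference: the full dense covariance propagation.
    return F @ P @ F.T

def propagate_blocks(P, dt, m):
    # Offline-derived block form of F P F^T for the structure above;
    # P is partitioned into m-by-m blocks [[P11, P12], [P21, P22]].
    P11, P12 = P[:m, :m], P[:m, m:]
    P21, P22 = P[m:, :m], P[m:, m:]
    out = np.empty_like(P)
    out[:m, :m] = P11 + dt * (P21 + P12) + dt * dt * P22
    out[:m, m:] = P12 + dt * P22
    out[m:, :m] = P21 + dt * P22
    out[m:, m:] = P22
    return out
```

The identity and zero blocks of F contribute nothing at run time, which is the kind of invalid operation the paper's offline derivation eliminates.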

11. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems

PubMed Central

Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

2015-01-01

Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted “useful” data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247

12. Numerical simulation of three-dimensional unsteady vortex flow using a compact vorticity-velocity algorithm

NASA Technical Reports Server (NTRS)

Gatski, T. B.; Grosch, C. E.; Rose, M. E.; Spall, R. E.

1987-01-01

A numerical algorithm is presented which is used to solve the unsteady, fully three-dimensional, incompressible Navier-Stokes equations in vorticity-velocity variables. A discussion of the discrete approximation scheme is presented as well as the solution method used to solve the resulting algebraic set of difference equations. Second order spatial and temporal accuracy is verified through solution comparisons with exact results obtained for steady three-dimensional stagnation point flow and unsteady axisymmetric vortex spin-up. In addition, results are presented for the problem of unsteady bubble-type vortex breakdown with emphasis on internal bubble dynamics and structure.

13. Sheet Hydroforming Process Numerical Model Improvement Through Experimental Results Analysis

Gabriele, Papadia; Antonio, Del Prete; Alfredo, Anglani

2010-06-01

The increasing application of numerical simulation in the metal forming field has helped engineers solve problems one after another in manufacturing a qualified formed product while reducing the required time [1]. Accurate simulation results are fundamental for tooling and product design. The wide application of numerical simulation is encouraging the development of highly accurate simulation procedures to meet industrial requirements. Many factors can influence the final simulation results, and many studies have been carried out on materials [2], yield criteria [3], plastic deformation [4,5], process parameters [6] and their optimization. In order to develop a reliable hydromechanical deep drawing (HDD) numerical model, the authors have carried out specific activities based on the evaluation of the effective stiffness of the blankholder structure [7]. In this paper, after an appropriate tuning phase of the blankholder force distribution, experimental activity has been taken into account to improve the accuracy of the numerical model. In the first phase, the effective capability of the blankholder structure to transfer the load applied by the hydraulic actuators to the blank was explored. This phase ended with the definition of an appropriate subdivision of the blankholder active surface in order to take into account the effective pressure map obtained for the given load configuration. In the second phase, the numerical results obtained with the developed subdivision were compared with the experimental data of the studied model. The numerical model was then improved by finding the best solution for the blankholder force distribution.

14. A method for data handling numerical results in parallel OpenFOAM simulations

SciTech Connect

Anton, Alin; Muntean, Sebastian

2015-12-31

Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 GB of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large scale simulation results than the regular algorithms.

15. Numerical Algorithms and Mathematical Software for Linear Control and Estimation Theory.

DTIC Science & Technology

1985-05-30

Final report on ARO Grant DAAG29-82-K-0028, "Numerical Algorithms and Mathematical Software for Linear Control and Estimation Theory," Massachusetts Institute of Technology, Cambridge; reporting period December 14, 1981 to December 13, 1984.

16. A semi-numerical algorithm for instability of compressible multilayered structures

Tang, Shan; Yang, Yang; Peng, Xiang He; Liu, Wing Kam; Huang, Xiao Xu; Elkhodary, Khalil

2015-07-01

A computational method is proposed for the analysis and prediction of instability (wrinkling or necking) of multilayered compressible plates and sheets made of metals or polymers under plane strain conditions. In previous works, a basic assumption (or physical argument) frequently made is that the materials are incompressible, which simplifies the mathematical derivations. To account for the compressibility of metals and polymers (a lower Poisson's ratio corresponds to a more compressible material), we propose a combined semi-numerical algorithm and finite element method for instability analysis. Our proposed algorithm is verified by comparing its predictions with published results in the literature for thin films on polymer/metal substrates and for polymer/metal systems. The new combined method is then used to predict the effects of compressibility on instability behavior. The results suggest the potential utility of compressibility in the design of multilayered structures.

17. Numerical results for extended field method applications. [thin plates

NASA Technical Reports Server (NTRS)

Donaldson, B. K.; Chander, S.

1973-01-01

This paper presents the numerical results obtained when a new method of analysis, called the extended field method, was applied to several thin plate problems including one with non-rectangular geometry, and one problem involving both beams and a plate. The numerical results show that the quality of the single plate solutions was satisfactory for all cases except those involving a freely deflecting plate corner. The results for the beam and plate structure were satisfactory even though the structure had a freely deflecting corner.

18. Numerical Optimization Algorithms and Software for Systems Biology

SciTech Connect

Saunders, Michael

2013-02-02

The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.

19. Variationally consistent discretization schemes and numerical algorithms for contact problems

Wohlmuth, Barbara

We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has a low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of
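The primal-dual active set strategy mentioned above can be sketched on a 1-D obstacle problem, a scalar stand-in for contact (all names and parameters are illustrative assumptions):

```python
import numpy as np

def obstacle_pdas(n=50, load=-50.0, g=-0.1, c=1e3, max_iter=50):
    # Primal-dual active set iteration for the 1-D obstacle problem
    #   -u'' = load + lam,  u >= g,  lam >= 0,  lam*(u - g) = 0
    # on (0, 1) with u(0) = u(1) = 0; lam plays the role of the
    # contact-force Lagrange multiplier.
    h = 1.0 / (n + 1)
    K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    f = np.full(n, load)
    u = np.zeros(n)
    lam = np.zeros(n)
    active = np.zeros(n, dtype=bool)
    for it in range(max_iter):
        # Complementarity test: a node is in contact iff
        # lam + c*(g - u) > 0 (the semi-smooth Newton criterion).
        new_active = lam + c * (g - u) > 0.0
        if it > 0 and np.array_equal(new_active, active):
            break                          # active set has converged
        active = new_active
        A, b = K.copy(), f.copy()
        idx = np.where(active)[0]
        A[idx, :] = 0.0
        A[idx, idx] = 1.0                  # enforce u = g on active set
        b[idx] = g
        u = np.linalg.solve(A, b)
        lam = K @ u - f                    # multiplier = PDE residual
        lam[~active] = 0.0                 # no contact force elsewhere
    return u, lam, active
```

Each sweep is one semi-smooth Newton step; for this M-matrix discretization the active set typically settles in a handful of iterations.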

20. An application of fast algorithms to numerical electromagnetic modeling

SciTech Connect

Bezvoda, V.; Segeth, K.

1987-03-01

Numerical electromagnetic modeling by the finite-difference or finite-element methods leads to a large sparse system of linear algebraic equations. Fast direct methods, requiring an order of at most q log q arithmetic operations to solve a system of q equations, cannot easily be applied to such a system. This paper describes the iterative application of a fast method, namely cyclic reduction, to the numerical solution of the Helmholtz equation with a piecewise constant imaginary coefficient of the absolute term in a plane domain. By means of numerical tests the advantages and limitations of the method compared with classical direct methods are discussed. The iterative application of the cyclic reduction method is very efficient if one can exploit a known solution of a similar (e.g., simpler) problem as the initial approximation. This makes cyclic reduction a powerful tool in solving the inverse problem by trial-and-error.

1. Numerical stability of relativistic beam multidimensional PIC simulations employing the Esirkepov algorithm

SciTech Connect

Godfrey, Brendan B.; Vay, Jean-Luc

2013-09-01

Rapidly growing numerical instabilities routinely occur in multidimensional particle-in-cell computer simulations of plasma-based particle accelerators, astrophysical phenomena, and relativistic charged particle beams. Reducing instability growth to acceptable levels has necessitated higher resolution grids, high-order field solvers, current filtering, and similar measures, except for certain ratios of the time step to the axial cell size, for which numerical growth rates and saturation levels are substantially reduced. This paper derives and solves the cold beam dispersion relation for numerical instabilities in multidimensional, relativistic, electromagnetic particle-in-cell programs employing either the standard or the Cole-Karkkainen finite difference field solver on a staggered mesh and the common Esirkepov current deposition algorithm. Good overall agreement is achieved with previously reported results of the WARP code. In particular, the existence of select time steps for which instabilities are minimized is explained. Additionally, an alternative field interpolation algorithm is proposed for which instabilities are almost completely eliminated for a particular time step in ultra-relativistic simulations.

2. Numerical arc segmentation algorithm for a radio conference - A software tool for communication satellite systems planning

NASA Technical Reports Server (NTRS)

Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

1988-01-01

A detailed description of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software package for communication satellite systems planning is presented. This software provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC-88) on the use of the geostationary orbit (GEO) and the planning of space services utilizing it. The features of the NASARC software package are described, and detailed information is given about the function of each of the four NASARC program modules. The results of a sample world scenario are presented and discussed.

3. An evaluation of solution algorithms and numerical approximation methods for modeling an ion exchange process

SciTech Connect

Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

2010-07-01

The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

4. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process.

PubMed

Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H; Miller, Cass T

2010-07-01

The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

5. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process

PubMed Central

Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

2010-01-01

The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications. PMID:20577570

6. A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization.

PubMed

Zhu, Binglian; Zhu, Wenyong; Liu, Zijuan; Duan, Qingyan; Cao, Long

2016-01-01

This paper proposes a novel quantum-behaved bat algorithm with the direction of the mean best position (QMBA). In QMBA, the position of each bat is mainly updated by the current optimal solution in the early stage of the search, while in the late stage it also depends on the mean best position, which enhances the convergence speed of the algorithm. During the search, quantum behavior of the bats is introduced, which helps them jump out of locally optimal solutions and gives the algorithm a better ability to adapt to complex environments. Meanwhile, QMBA makes good use of the statistical information of the best positions the bats have experienced to generate better quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases the diversity of the population and improves the accuracy of the solution. Twenty-four benchmark test functions are tested and compared with other variant bat algorithms for numerical optimization; the simulation results show that this approach is simple and efficient and can achieve a more accurate solution.
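The abstract does not give QMBA's update equations; a QPSO-style quantum-behaved update around the mean best position, which the description resembles, can be sketched as follows (illustrative assumptions throughout, not the paper's exact rule):

```python
import math
import random

def quantum_update(x, pbest, gbest, mbest, beta=0.75):
    # One quantum-behaved update of a single coordinate: the new
    # position is drawn around an attractor p between the personal
    # and global best, with jump length scaled by the distance to
    # the mean best position mbest (heavy tails help escape local
    # optima).
    phi = random.random()
    p = phi * pbest + (1.0 - phi) * gbest
    jump = beta * abs(mbest - x) * math.log(1.0 / random.random())
    return p + jump if random.random() < 0.5 else p - jump

# Minimise f(x) = x**2 with a small swarm.
random.seed(3)
n_bats, n_iter = 20, 200
xs = [random.uniform(-10.0, 10.0) for _ in range(n_bats)]
pbest = xs[:]
gbest = min(pbest, key=lambda v: v * v)
for _ in range(n_iter):
    mbest = sum(pbest) / n_bats            # mean best position
    for i in range(n_bats):
        xs[i] = quantum_update(xs[i], pbest[i], gbest, mbest)
        if xs[i] * xs[i] < pbest[i] * pbest[i]:
            pbest[i] = xs[i]
    gbest = min(pbest, key=lambda v: v * v)
```

As the personal bests collapse toward the optimum, the mean best position contracts the jump scale, giving the convergence behavior the record describes.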

7. A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization

PubMed Central

Zhu, Wenyong; Liu, Zijuan; Duan, Qingyan; Cao, Long

2016-01-01

This paper proposes a novel quantum-behaved bat algorithm with the direction of the mean best position (QMBA). In QMBA, the position of each bat is mainly updated by the current optimal solution in the early stage of the search, while in the late stage it also depends on the mean best position, which enhances the convergence speed of the algorithm. During the search, quantum behavior of the bats is introduced, which helps them jump out of locally optimal solutions and gives the algorithm a better ability to adapt to complex environments. Meanwhile, QMBA makes good use of the statistical information of the best positions the bats have experienced to generate better quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases the diversity of the population and improves the accuracy of the solution. Twenty-four benchmark test functions are tested and compared with other variant bat algorithms for numerical optimization; the simulation results show that this approach is simple and efficient and can achieve a more accurate solution. PMID:27293424

8. A bibliography on parallel and vector numerical algorithms

NASA Technical Reports Server (NTRS)

Ortega, James M.; Voigt, Robert G.; Romine, Charles H.

1988-01-01

This is a bibliography on numerical methods. It also includes a number of other references on machine architecture, programming language, and other topics of interest to scientific computing. Certain conference proceedings and anthologies which have been published in book form are also listed.

9. A bibliography on parallel and vector numerical algorithms

NASA Technical Reports Server (NTRS)

Ortega, J. M.; Voigt, R. G.

1987-01-01

This is a bibliography of numerical methods. It also includes a number of other references on machine architecture, programming language, and other topics of interest to scientific computing. Certain conference proceedings and anthologies which have been published in book form are listed also.

10. Numerical Laplace Transform Inversion Employing the Gaver-Stehfest Algorithm.

ERIC Educational Resources Information Center

Jacquot, Raymond G.; And Others

1985-01-01

Presents a technique for the numerical inversion of Laplace Transforms and several examples employing this technique. Limitations of the method in terms of available computer word length and the effects of these limitations on approximate inverse functions are also discussed. (JN)
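The Gaver-Stehfest algorithm itself is short. A standard double-precision implementation (with the usual even term count N, defaulting here to N = 12) is sketched below; as the abstract notes, accuracy is limited by word length, so pushing N higher without extended precision makes results worse, not better.

```python
import math

def gaver_stehfest(F, t, N=12):
    """Numerically invert a Laplace transform F(s) at time t > 0.

    Gaver-Stehfest summation: f(t) ~ (ln2/t) * sum_k V_k * F(k*ln2/t),
    with the classical Stehfest weights V_k. N must be even; in double
    precision, N around 10-14 is the practical sweet spot.
    """
    if N % 2:
        raise ValueError("N must be even")
    fac = math.factorial
    M = N // 2
    V = []
    for k in range(1, N + 1):
        s = sum(
            j**M * fac(2 * j)
            / (fac(M - j) * fac(j) * fac(j - 1) * fac(k - j) * fac(2 * j - k))
            for j in range((k + 1) // 2, min(k, M) + 1)
        )
        V.append((-1) ** (k + M) * s)
    a = math.log(2.0) / t
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))
```

For example, F(s) = 1/(s+1) inverts to e^(-t), and F(s) = 1/s^2 inverts to t; both are recovered to several digits at N = 12.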

11. Numerical Analysis and Improved Algorithms for Lyapunov-Exponent Calculation of Discrete-Time Chaotic Systems

He, Jianbin; Yu, Simin; Cai, Jianping

2016-12-01

The Lyapunov exponent is an important index for describing the behavior of chaotic systems, and the largest Lyapunov exponent can be used to determine whether a system is chaotic or not. For discrete-time dynamical systems, the Lyapunov exponents are calculated by an eigenvalue method. In theory, according to the eigenvalue method, more accurate values of the Lyapunov exponents are obtained as the number of iterations increases, and the limits exist. However, due to the finite precision of computers and other reasons, the results can overflow, become unrecognizable, or be inaccurate, which can be stated as follows: (1) the number of iterations cannot be too large, otherwise the simulation returns an error value of NaN or Inf; (2) even if NaN or Inf does not appear, with increasing iterations all Lyapunov exponents drift toward the largest Lyapunov exponent, which leads to inaccurate results; (3) from the viewpoint of numerical calculation, if the number of iterations is too small, the results are also inaccurate. Based on this analysis of Lyapunov-exponent calculation in discrete-time systems, this paper investigates two improved algorithms based on QR orthogonal decomposition and SVD orthogonal decomposition so as to solve the above-mentioned problems. Finally, some examples are given to illustrate the feasibility and effectiveness of the improved algorithms.
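As a minimal illustration of the QR approach (a textbook construction, not the paper's code), both Lyapunov exponents of the Henon map can be computed by reorthonormalizing the tangent basis with a 2x2 QR step at every iteration; this is exactly what prevents the overflow and exponent-collapse problems described above.

```python
import math

def henon_lyapunov(a=1.4, b=0.3, n=20000, transient=1000):
    """Lyapunov exponents of the Henon map via per-step QR reorthonormalization.

    The tangent basis Q is multiplied by the Jacobian and re-factored as QR
    each step; the exponents are the averaged logs of the diagonal of R.
    """
    x, y = 0.1, 0.1
    q1, q2 = (1.0, 0.0), (0.0, 1.0)     # orthonormal tangent basis
    s1 = s2 = 0.0
    for i in range(n + transient):
        # Jacobian of (x, y) -> (1 - a*x^2 + y, b*x) at the current point
        j11, j12, j21, j22 = -2.0 * a * x, 1.0, b, 0.0
        # propagate tangent vectors: v = J q
        v1 = (j11 * q1[0] + j12 * q1[1], j21 * q1[0] + j22 * q1[1])
        v2 = (j11 * q2[0] + j12 * q2[1], j21 * q2[0] + j22 * q2[1])
        # 2x2 QR by Gram-Schmidt: r11 = |v1|, r12 = q1.v2, r22 = |v2 - r12 q1|
        r11 = math.hypot(v1[0], v1[1])
        q1 = (v1[0] / r11, v1[1] / r11)
        r12 = q1[0] * v2[0] + q1[1] * v2[1]
        w = (v2[0] - r12 * q1[0], v2[1] - r12 * q1[1])
        r22 = math.hypot(w[0], w[1])
        q2 = (w[0] / r22, w[1] / r22)
        if i >= transient:
            s1 += math.log(r11)
            s2 += math.log(r22)
        x, y = 1.0 - a * x * x + y, b * x   # advance the state
    return s1 / n, s2 / n
```

For a = 1.4, b = 0.3 the literature value of the largest exponent is about 0.42, and the exponents must sum to ln|det J| = ln(0.3) at every step, which gives a built-in sanity check.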

12. A Numerical Algorithm for Complex Biological Flow in Irregular Microdevice Geometries

SciTech Connect

Nonaka, A; Miller, G H; Marshall, T; Liepmann, D; Gulati, S; Trebotich, D; Colella, P

2003-12-15

We present a numerical algorithm to simulate non-Newtonian flow in complex microdevice components. The model consists of continuum viscoelastic incompressible flow in irregular microscale geometries. Our numerical approach is the projection method of Bell, Colella and Glaz (BCG) to impose the incompressibility constraint coupled with the polymeric stress splitting discretization of Trebotich, Colella and Miller (TCM). In this approach we exploit the hyperbolic structure of the equations of motion to achieve higher resolution in the presence of strong gradients and to gain an order of magnitude in the timestep. We also extend BCG and TCM to an embedded boundary method to treat irregular domain geometries which exist in microdevices. Our method allows for particle representation in a continuum fluid. We present preliminary results for incompressible viscous flow with comparison to flow of DNA and simulants in microchannels and other components used in chem/bio microdevices.

13. Fourier analysis of numerical algorithms for the Maxwell equations

NASA Technical Reports Server (NTRS)

Liu, Yen

1993-01-01

The Fourier method is used to analyze the dispersive, dissipative, and isotropy errors of various spatial and time discretizations applied to the Maxwell equations on multi-dimensional grids. Both Cartesian grids and non-Cartesian grids based on hexagons and tetradecahedra are studied and compared. The numerical errors are quantitatively determined in terms of phase speed, wave number, propagation direction, gridspacings, and CFL number. The study shows that centered schemes are more efficient than upwind schemes. The non-Cartesian grids yield superior isotropy and higher accuracy than the Cartesian ones. For the centered schemes, the staggered grids produce less errors than the unstaggered ones. A new unstaggered scheme which has all the best properties is introduced. The study also demonstrates that a proper choice of time discretization can reduce the overall numerical errors due to the spatial discretization.
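For the simplest 1D advection case, the kind of quantitative error this Fourier analysis produces can be reproduced in a few lines. The semi-discrete phase-speed ratios below are the standard textbook results for second- and fourth-order centered differences, not values taken from the paper.

```python
import math

def phase_speed_ratio_2nd(ppw):
    """Numerical/exact phase-speed ratio for 2nd-order centered differencing
    of the 1D advection operator, vs. grid points per wavelength
    (kh = 2*pi/ppw): ratio = sin(kh)/kh."""
    kh = 2.0 * math.pi / ppw
    return math.sin(kh) / kh

def phase_speed_ratio_4th(ppw):
    """Same ratio for the 4th-order centered stencil (-1, 8, 0, -8, 1)/(12h):
    ratio = (8*sin(kh) - sin(2kh)) / (6*kh)."""
    kh = 2.0 * math.pi / ppw
    return (8.0 * math.sin(kh) - math.sin(2.0 * kh)) / (6.0 * kh)
```

At ten points per wavelength the second-order scheme propagates waves about 6.5% too slowly, the fourth-order scheme only about 0.5%; both ratios tend to 1 as the grid is refined, which is the dispersive-error behavior the Fourier analysis quantifies.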

14. Thrombosis modeling in intracranial aneurysms: a lattice Boltzmann numerical algorithm

Ouared, R.; Chopard, B.; Stahl, B.; Rüfenacht, D. A.; Yilmaz, H.; Courbebaisse, G.

2008-07-01

The lattice Boltzmann numerical method is applied to model blood flow (plasma and platelets) and clotting in intracranial aneurysms at a mesoscopic level. The dynamics of blood clotting (thrombosis) is governed by mechanical variations of the shear stress near the wall that influence platelet-wall interactions. Thrombosis starts and grows below a shear-rate threshold, and stops above it. Within this assumption, it is possible to account qualitatively well for partial, full, or no occlusion of the aneurysm, and to explain why spontaneous thrombosis is more likely to occur in giant aneurysms than in small- or medium-sized aneurysms.

15. Numerical Results of Earth's Core Accumulation 3-D Modelling

Khachay, Yurie; Anfilogov, Vsevolod

2013-04-01

16. Numerical Algorithms for Precise and Efficient Orbit Propagation and Positioning

Motivated by the growing space catalog and the demands for precise orbit determination with shorter latency for science and reconnaissance missions, this research improves the computational performance of orbit propagation through more efficient and precise numerical integration and frame transformation implementations. Propagation of satellite orbits is required for astrodynamics applications including mission design, orbit determination in support of operations and payload data analysis, and conjunction assessment. Each of these applications has somewhat different requirements in terms of accuracy, precision, latency, and computational load. This dissertation develops procedures to achieve various levels of accuracy while minimizing computational cost for diverse orbit determination applications. This is done by addressing two aspects of orbit determination: (1) numerical integration used for orbit propagation and (2) precise frame transformations necessary for force model evaluation and station coordinate rotations. This dissertation describes a recently developed method for numerical integration, dubbed Bandlimited Collocation Implicit Runge-Kutta (BLC-IRK), and compares its efficiency in propagating orbits to existing techniques commonly used in astrodynamics. The BLC-IRK scheme uses generalized Gaussian quadratures for bandlimited functions. It requires significantly fewer force function evaluations than explicit Runge-Kutta schemes and approaches the efficiency of the 8th-order Gauss-Jackson multistep method. Converting between the Geocentric Celestial Reference System (GCRS) and International Terrestrial Reference System (ITRS) is necessary for many applications in astrodynamics, such as orbit propagation, orbit determination, and analyzing geoscience data from satellite missions. This dissertation provides simplifications to the Celestial Intermediate Origin (CIO) transformation scheme and Earth orientation parameter (EOP) storage for use in positioning and

17. Numerical Simulation of Micronozzles with Comparison to Experimental Results

Thornber, B.; Chesta, E.; Gloth, O.; Brandt, R.; Schwane, R.; Perigo, D.; Smith, P.

2004-10-01

A numerical analysis of conical micronozzle flows has been conducted using the commercial software package CFD-RC FASTRAN [13]. The numerical results have been validated by comparison with direct thrust and mass flow measurements recently performed in ESTEC Propulsion Laboratory on Polyflex Space Ltd. 10mN Cold-Gas thrusters in the frame of ESA CryoSat mission. The flow is viscous dominated, with a throat Reynolds number of 5000, and the relatively large length of the nozzle causes boundary layer effects larger than usual for nozzles of this size. This paper discusses in detail the flow physics such as boundary layer growth and structure, and the effects of rarefaction. Furthermore a number of different domain sizes and exit boundary conditions are used to determine the optimum combination of computational time and accuracy.

18. A stable and efficient numerical algorithm for unconfined aquifer analysis

SciTech Connect

Keating, Elizabeth; Zyvoloski, George

2008-01-01

The non-linearity of equations governing flow in unconfined aquifers poses challenges for numerical models, particularly in field-scale applications. Existing methods are often unstable, do not converge, or require extremely fine grids and small time steps. Standard modeling procedures such as automated model calibration and Monte Carlo uncertainty analysis typically require thousands of forward model runs. Stable and efficient model performance is essential to these analyses. We propose a new method that offers improvements in stability and efficiency, and is relatively tolerant of coarse grids. It applies a strategy similar to that in the MODFLOW code to solution of Richards' equation with a grid-dependent pressure/saturation relationship. The method imposes a contrast between horizontal and vertical permeability in gridblocks containing the water table. We establish the accuracy of the method by comparison to an analytical solution for radial flow to a well in an unconfined aquifer with delayed yield. Using a suite of test problems, we demonstrate the efficiencies gained in speed and accuracy over two-phase simulations, and improved stability when compared to MODFLOW. The advantages for applications to transient unconfined aquifer analysis are clearly demonstrated by our examples. We also demonstrate applicability to mixed vadose zone/saturated zone applications, including transport, and find that the method shows great promise for these types of problem, as well.

19. A stable and efficient numerical algorithm for unconfined aquifer analysis.

PubMed

Keating, Elizabeth; Zyvoloski, George

2009-01-01

The nonlinearity of equations governing flow in unconfined aquifers poses challenges for numerical models, particularly in field-scale applications. Existing methods are often unstable, do not converge, or require extremely fine grids and small time steps. Standard modeling procedures such as automated model calibration and Monte Carlo uncertainty analysis typically require thousands of model runs. Stable and efficient model performance is essential to these analyses. We propose a new method that offers improvements in stability and efficiency and is relatively tolerant of coarse grids. It applies a strategy similar to that in the MODFLOW code to the solution of Richards' equation with a grid-dependent pressure/saturation relationship. The method imposes a contrast between horizontal and vertical permeability in gridblocks containing the water table, does not require "dry" cells to convert to inactive cells, and allows recharge to flow through relatively dry cells to the water table. We establish the accuracy of the method by comparison to an analytical solution for radial flow to a well in an unconfined aquifer with delayed yield. Using a suite of test problems, we demonstrate the efficiencies gained in speed and accuracy over two-phase simulations, and improved stability when compared to MODFLOW. The advantages for applications to transient unconfined aquifer analysis are clearly demonstrated by our examples. We also demonstrate applicability to mixed vadose zone/saturated zone applications, including transport, and find that the method shows great promise for these types of problem as well.

20. Numerical linked-cluster algorithms. I. Spin systems on square, triangular, and kagomé lattices.

PubMed

Rigol, Marcos; Bryant, Tyler; Singh, Rajiv R P

2007-06-01

We discuss recently introduced numerical linked-cluster (NLC) algorithms that allow one to obtain temperature-dependent properties of quantum lattice models, in the thermodynamic limit, from exact diagonalization of finite clusters. We present studies of thermodynamic observables for spin models on square, triangular, and kagomé lattices. Results for several choices of clusters and extrapolation methods, which accelerate the convergence of NLCs, are presented. We also include a comparison of NLC results with those obtained from exact analytical expressions (where available), high-temperature expansions (HTE), exact diagonalization (ED) of finite periodic systems, and quantum Monte Carlo simulations. For many models and properties NLC results are substantially more accurate than HTE and ED.

1. A numerical comparison of discrete Kalman filtering algorithms - An orbit determination case study

NASA Technical Reports Server (NTRS)

Thornton, C. L.; Bierman, G. J.

1976-01-01

An improved Kalman filter algorithm based on a modified Givens matrix triangularization technique is proposed for solving a nonstationary discrete-time linear filtering problem. The proposed U-D covariance factorization filter uses orthogonal transformation technique; measurement and time updating of the U-D factors involve separate application of Gentleman's fast square-root-free Givens rotations. Numerical stability and accuracy of the algorithm are compared with those of the conventional and stabilized Kalman filters and the Potter-Schmidt square-root filter, by applying these techniques to a realistic planetary navigation problem (orbit determination for the Saturn approach phase of the Mariner Jupiter-Saturn Mission, 1977). The new algorithm is shown to combine the numerical precision of square root filtering with the efficiency of the original Kalman algorithm.
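The numerical advantage of square-root filtering rests on triangularization steps of exactly this kind. Below is a small pure-Python sketch of a Givens-rotation covariance time update (plain Givens with square roots, not Gentleman's square-root-free variant and not the U-D factorization itself); the matrices F, S, and W are made-up illustrative values.

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def givens_triangularize(M):
    """Reduce an m x n matrix (m >= n) to its upper-triangular factor R
    with Givens rotations; R satisfies R^T R = M^T M."""
    A = [row[:] for row in M]
    m, n = len(A), len(A[0])
    for j in range(n):
        for i in range(j + 1, m):       # zero A[i][j] against pivot row j
            if A[i][j] == 0.0:
                continue
            r = math.hypot(A[j][j], A[i][j])
            c, s = A[j][j] / r, A[i][j] / r
            for k in range(j, n):
                t1, t2 = A[j][k], A[i][k]
                A[j][k] = c * t1 + s * t2
                A[i][k] = -s * t1 + c * t2
    return [row[:n] for row in A[:n]]

# Square-root time update: with P = S^T S and Q = W^T W, stacking the rows
# of S F^T over W and triangularizing yields R with R^T R = F P F^T + Q,
# without ever forming P explicitly (illustrative values below).
S = [[1.0, 0.5], [0.0, 1.0]]          # prior square-root covariance factor
F = [[1.0, 0.1], [0.0, 1.0]]          # state-transition matrix
W = [[0.1, 0.0], [0.0, 0.1]]          # square root of process noise
R = givens_triangularize(matmul(S, transpose(F)) + W)
```

Because only the factor is propagated, the represented covariance stays symmetric positive semi-definite by construction, which is the precision advantage the abstract refers to.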

2. Multislice algorithms revisited: solving the Schrödinger equation numerically for imaging with electrons.

PubMed

Wacker, C; Schröder, R R

2015-04-01

For a long time, the high-energy approximation was sufficient for any image simulation in electron microscopy. This changed with the advent of aberration correctors that allow high-resolution imaging at low electron energies. To deal with this fact, we present a numerical solution of the exact Schrödinger equation that is novel in the field of electron microscopy. Furthermore, we investigate systematically the advantages and problems of several multislice algorithms, especially the real-space algorithms.

3. Preliminary results from the ASF/GPS ice classification algorithm

NASA Technical Reports Server (NTRS)

Cunningham, G.; Kwok, R.; Holt, B.

1992-01-01

The European Space Agency's Remote Sensing Satellite (ERS-1) carried a C-band synthetic aperture radar (SAR) to study the Earth's polar regions. The radar returns from sea ice can be used to infer properties of the ice, including ice type. An algorithm has been developed for the Alaska SAR Facility (ASF)/Geophysical Processor System (GPS) to infer ice type from the SAR observations over sea ice and open water. The algorithm utilizes look-up tables containing expected backscatter values from various ice types. An analysis has been made of two overlapping strips with 14 SAR images. The backscatter values of specific ice regions were sampled to study the backscatter characteristics of the ice in time and space. Results show both stability of the backscatter values in time and a good separation of multiyear and first-year ice signals, verifying the approach used in the classification algorithm.
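In its simplest form, a look-up-table classifier of this kind reduces to a nearest-expected-backscatter match. The sketch below is purely illustrative: the class names and backscatter values are placeholders, not the actual ASF/GPS tables (which also depend on season, incidence angle, and calibration).

```python
# Illustrative nearest-match look-up classifier; the expected backscatter
# values (in dB) are placeholders, not the ASF/GPS tables.
EXPECTED_BACKSCATTER_DB = {
    "multiyear ice": -8.0,
    "first-year ice": -14.0,
    "open water": -20.0,
}

def classify_ice(sigma0_db, table=EXPECTED_BACKSCATTER_DB):
    """Return the class whose expected backscatter is closest to the
    observed sigma-0 value (in dB)."""
    return min(table, key=lambda cls: abs(sigma0_db - table[cls]))
```

The good multiyear/first-year separation reported above is what makes such a nearest-match rule workable: the class means are several dB apart relative to their spread.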

4. Numerical study of variational data assimilation algorithms based on decomposition methods in atmospheric chemistry models

Penenko, Alexey; Antokhin, Pavel

2016-11-01

The performance of a variational data assimilation algorithm for a transport and transformation model of atmospheric chemical composition is studied numerically in the case where the emission inventories are missing while there are additional in situ indirect concentration measurements. The algorithm is based on decomposition and splitting methods with a direct solution of the data assimilation problems at the splitting stages. This design allows avoiding iterative processes and working in real-time. In numerical experiments we study the sensitivity of data assimilation to measurement data quantity and quality.

5. Evaluation of registration, compression and classification algorithms. Volume 1: Results

NASA Technical Reports Server (NTRS)

Jayroe, R.; Atkinson, R.; Callas, L.; Hodges, J.; Gaggini, B.; Peterson, J.

1979-01-01

The registration, compression, and classification algorithms were selected on the basis that such a group would include most of the different and commonly used approaches. The results of the investigation indicate clearcut, cost effective choices for registering, compressing, and classifying multispectral imagery.

6. Modeling extracellular electrical stimulation: II. Computational validation and numerical results.

PubMed

Tahayori, Bahman; Meffin, Hamish; Dokos, Socrates; Burkitt, Anthony N; Grayden, David B

2012-12-01

The validity of approximate equations describing the membrane potential under extracellular electrical stimulation (Meffin et al 2012 J. Neural Eng. 9 065005) is investigated through finite element analysis in this paper. To this end, the finite element method is used to simulate a cylindrical neurite under extracellular stimulation. Laplace's equations with appropriate boundary conditions are solved numerically in three dimensions and the results are compared to the approximate analytic solutions. Simulation results are in agreement with the approximate analytic expressions for longitudinal and transverse modes of stimulation. The range of validity of the equations describing the membrane potential for different values of stimulation and neurite parameters are presented as well. The results indicate that the analytic approach can be used to model extracellular electrical stimulation for realistic physiological parameters with a high level of accuracy.

7. On the impact of communication complexity in the design of parallel numerical algorithms

NASA Technical Reports Server (NTRS)

Gannon, D.; Vanrosendale, J.

1984-01-01

This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
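The first (Hockney-style) model characterizes a communication or vector operation by two constants: an asymptotic rate r_inf and a startup latency, from which the half-performance length n_1/2 follows. A minimal sketch of the model (generic form, not the paper's specific generalization):

```python
def hockney_time(n, t_startup, r_inf):
    """Time to move (or process) n words under the Hockney model:
    a fixed startup latency plus n words at the asymptotic rate r_inf."""
    return t_startup + n / r_inf

def half_performance_length(t_startup, r_inf):
    """n_1/2: the length at which half the asymptotic rate is achieved,
    i.e. where startup time equals transfer time."""
    return t_startup * r_inf
```

For example, with a 1 microsecond startup and r_inf = 1e9 words/s, n_1/2 is 1000 words: messages much shorter than this are latency-dominated, which is why fine-grained parallel algorithms pay a heavy data-movement penalty.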

8. Analysis of the distribution of pitch angles in model galactic disks - Numerical methods and algorithms

NASA Technical Reports Server (NTRS)

Russell, William S.; Roberts, William W., Jr.

1993-01-01

An automated mathematical method capable of successfully isolating the many different features in prototype and observed spiral galaxies and of accurately measuring the pitch angles and lengths of these individual features is developed. The method is applied to analyze the evolution of specific features in a prototype galaxy exhibiting flocculent spiral structure. The mathematical-computational method was separated into two components. Initially, the galaxy was partitioned into dense regions constituting features using two different methods. The results obtained using these two partitioning algorithms were very similar, from which it is inferred that no numerical biasing was evident and that capturing of the features was consistent. Standard least-squares methods underestimated the true slope of the cloud distribution and were incapable of approximating an orientation of 45 deg. The problems were overcome by introducing a superior fit least-squares method, developed with the intention of calculating true orientation rather than a regression line.

9. A Parallel Compact Multi-Dimensional Numerical Algorithm with Aeroacoustics Applications

NASA Technical Reports Server (NTRS)

Povitsky, Alex; Morris, Philip J.

1999-01-01

In this study we propose a novel method to parallelize high-order compact numerical algorithms for the solution of three-dimensional PDEs (Partial Differential Equations) in a space-time domain. For this numerical integration most of the computer time is spent in computation of spatial derivatives at each stage of the Runge-Kutta temporal update. The most efficient direct method to compute spatial derivatives on a serial computer is a version of Gaussian elimination for narrow linear banded systems known as the Thomas algorithm. In a straightforward pipelined implementation of the Thomas algorithm processors are idle due to the forward and backward recurrences of the Thomas algorithm. To utilize processors during this time, we propose to use them for either non-local data independent computations, solving lines in the next spatial direction, or local data-dependent computations by the Runge-Kutta method. To achieve this goal, control of processor communication and computations by a static schedule is adopted. Thus, our parallel code is driven by a communication and computation schedule instead of the usual "creative programming" approach. The obtained parallelization speed-up of the novel algorithm is about twice as much as that for the standard pipelined algorithm and close to that for the explicit DRP algorithm.
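The serial kernel in question, the Thomas algorithm, is the specialization of Gaussian elimination to tridiagonal systems. A standard implementation makes the two recurrences that serialize a naive pipelined version explicit:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system A x = d in O(n).

    a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal
    (c[-1] unused), d: right-hand side. Forward elimination followed by
    backward substitution -- each step depends on the previous one, which
    is exactly why a pipelined parallel version leaves processors idle.
    """
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):               # forward recurrence
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):      # backward recurrence
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For example, the 1D Laplacian system with diagonal 2 and off-diagonals -1 is solved exactly; the proposed schedule-driven approach fills the idle time between these two sweeps with work from other grid lines or Runge-Kutta stages.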

10. Extremal polynomials and methods of optimization of numerical algorithms

SciTech Connect

Lebedev, V I

2004-10-31

Chebyshev–Markov–Bernstein–Szegő polynomials Cₙ(x), extremal on [−1, 1] with weight functions w(x) = (1+x)^α (1−x)^β / √(S_l(x)), where α, β = 0, 1/2 and S_l(x) = ∏_{k=1}^{m} (1 − c_k T_{l_k}(x)) > 0, are considered. A universal formula for their representation in trigonometric form is presented. Optimal distributions of the nodes of weighted interpolation and explicit quadrature formulae of Gauss, Markov, Lobatto, and Radau types are obtained for integrals with weight p(x) = w²(x)(1−x²)^{−1/2}. The parameters of optimal Chebyshev iterative methods reducing the error optimally by comparison with the initial error defined in another norm are determined. For each stage of the Fedorenko–Bakhvalov method, iteration parameters are determined which take account of the results of the previous calculations. Chebyshev filters with weight are constructed. Iterative methods for the solution of equations containing compact operators are studied.

11. Physical formulation and numerical algorithm for simulating N immiscible incompressible fluids involving general order parameters

SciTech Connect

Dong, S.

2015-02-15

We present a family of physical formulations, and a numerical algorithm, based on a class of general order parameters for simulating the motion of a mixture of N (N⩾2) immiscible incompressible fluids with given densities, dynamic viscosities, and pairwise surface tensions. The N-phase formulations stem from a phase field model we developed in a recent work based on the conservations of mass/momentum, and the second law of thermodynamics. The introduction of general order parameters leads to an extremely strongly-coupled system of (N−1) phase field equations. On the other hand, the general form enables one to compute the N-phase mixing energy density coefficients in an explicit fashion in terms of the pairwise surface tensions. We show that the increased complexity in the form of the phase field equations associated with general order parameters in actuality does not cause essential computational difficulties. Our numerical algorithm reformulates the (N−1) strongly-coupled phase field equations for general order parameters into 2(N−1) Helmholtz-type equations that are completely de-coupled from one another. This leads to a computational complexity comparable to that for the simplified phase field equations associated with certain special choice of the order parameters. We demonstrate the capabilities of the method developed herein using several test problems involving multiple fluid phases and large contrasts in densities and viscosities among the multitude of fluids. In particular, by comparing simulation results with the Langmuir–de Gennes theory of floating liquid lenses we show that the method using general order parameters produces physically accurate results for multiple fluid phases.

12. A new free-surface stabilization algorithm for geodynamical modelling: Theory and numerical tests

Andrés-Martínez, Miguel; Morgan, Jason P.; Pérez-Gussinyé, Marta; Rüpke, Lars

2015-09-01

The surface of the solid Earth is effectively stress free in its subaerial portions, and hydrostatic beneath the oceans. Unfortunately, this type of boundary condition is difficult to treat computationally, and for computational convenience, numerical models have often used simpler approximations that do not involve a normal stress-loaded, shear-stress free top surface that is free to move. Viscous flow models with a computational free surface typically confront stability problems when the time step is bigger than the viscous relaxation time. The small time step required for stability (<2 kyr) makes this type of model computationally intensive, so there remains a need to develop strategies that mitigate the stability problem by making larger (at least ∼10 kyr) time steps stable and accurate. Here we present a new free-surface stabilization algorithm for finite element codes which solves the stability problem by adding to the Stokes formulation an intrinsic penalization term equivalent to a portion of the future load at the surface nodes. Our algorithm is straightforward to implement and can be used with either Eulerian or Lagrangian grids. It includes α and β parameters to control the vertical and the horizontal slope-dependent penalization terms, respectively, and uses Uzawa-like iterations to solve the resulting system at a cost comparable to a non-stress free surface formulation. Four tests were carried out in order to study the accuracy and the stability of the algorithm: (1) a decaying first-order sinusoidal topography test, (2) a decaying high-order sinusoidal topography test, (3) a Rayleigh-Taylor instability test, and (4) a steep-slope test. For these tests, we investigate which α and β parameters give the best results in terms of both accuracy and stability. We also compare the accuracy and the stability of our algorithm with a similar implicit approach recently developed by Kaus et al. (2010). We find that our algorithm is slightly more accurate

13. Numerical Roll Reversal Predictor Corrector Aerocapture and Precision Landing Guidance Algorithms for the Mars Surveyor Program 2001 Missions

NASA Technical Reports Server (NTRS)

Powell, Richard W.

1998-01-01

This paper describes the development and evaluation of a numerical roll reversal predictor-corrector guidance algorithm for the atmospheric flight portion of the Mars Surveyor Program 2001 Orbiter and Lander missions. The Lander mission utilizes direct entry and has a demanding requirement to deploy its parachute within 10 km of the target deployment point. The Orbiter mission utilizes aerocapture to achieve a precise captured orbit with a single atmospheric pass. Detailed descriptions of these predictor-corrector algorithms are given. Also, results of three and six degree-of-freedom Monte Carlo simulations which include navigation, aerodynamics, mass properties and atmospheric density uncertainties are presented.

14. Numerical convergence and interpretation of the fuzzy c-shells clustering algorithm.

PubMed

Bezdek, J C; Hathaway, R J

1992-01-01

R. N. Dave's (1990) version of fuzzy c-shells is an iterative clustering algorithm which requires the application of Newton's method or a similar general optimization technique at each half step in any sequence of iterates for minimizing the associated objective function. An important computational question concerns the accuracy of the solution required at each half step within the overall iteration. The general convergence theory for grouped coordinate minimization is applied to this question to show that numerically exact solution of the half-step subproblems in Dave's algorithm is not necessary. One iteration of Newton's method in each coordinate-minimization half step yields a sequence with the same convergence properties as one obtained using the fuzzy c-shells algorithm with numerically exact coordinate minimization at each half step. It is shown that fuzzy c-shells generates hyperspherical prototypes to the clusters it finds for certain special cases of the measure of dissimilarity used.

15. Analysis of Numerical Simulation Results of LIPS-200 Lifetime Experiments

Chen, Juanjuan; Zhang, Tianping; Geng, Hai; Jia, Yanhui; Meng, Wei; Wu, Xianming; Sun, Anbang

2016-06-01

Accelerator grid structural failure and electron backstreaming are the most important factors limiting an ion thruster's lifetime. During the thruster's operation, charge-exchange xenon (CEX) ions are generated by collisions between the plasma and neutral atoms. Those CEX ions frequently strike the accelerator grid's barrel and wall, which causes the failures of the grid system. In order to validate whether the 20 cm Lanzhou Ion Propulsion System (LIPS-200) satisfies the North-South Station Keeping (NSSK) application requirement of China's communication satellite platform, this study analyzed the measured depth of the pit/groove on the accelerator grid's wall and the variation of the aperture diameter, and estimated the operating lifetime of the ion thruster. Differing from the previous method, this paper first presents the experimental results after 5500 h of accumulated operation of the LIPS-200 ion thruster. Then, based on these results, theoretical analysis and numerical calculations were performed to predict the on-orbit lifetime of LIPS-200. The results obtained were more accurate for calculating the reliability and analyzing the failure modes of the ion thruster. The results indicated that the predicted lifetime of LIPS-200 was about 13218.1 h, which satisfies the required lifetime of 11000 h.

16. A Parallel Numerical Algorithm To Solve Linear Systems Of Equations Emerging From 3D Radiative Transfer

Wichert, Viktoria; Arkenberg, Mario; Hauschildt, Peter H.

2016-10-01

Highly resolved state-of-the-art 3D atmosphere simulations will remain computationally extremely expensive for years to come. In addition to the need for more computing power, rethinking coding practices is necessary. We take a dual approach by introducing especially adapted, parallel numerical methods and correspondingly parallelizing critical code passages. In the following, we present our respective work on PHOENIX/3D. With new parallel numerical algorithms, there is a big opportunity for improvement when iteratively solving the system of equations emerging from the operator splitting of the radiative transfer equation J = ΛS. The narrow-banded approximate Λ-operator Λ* , which is used in PHOENIX/3D, occurs in each iteration step. By implementing a numerical algorithm which takes advantage of its characteristic traits, the parallel code's efficiency is further increased and a speed-up in computational time can be achieved.

17. Velocity distribution of meteoroids colliding with planets and satellites. II. Numerical results

Kholshevnikov, K. V.; Shor, V. A.

In the first part of the paper we proposed an algorithm for describing the velocity distribution of meteoroids colliding with planets and satellites. In the present part we show numerical characteristics of the distribution function. Namely, for each of the terrestrial planets and their satellites we consider a swarm of encountering particles of asteroidal origin, which form a field of relative collisional velocities v. We consider the moments μk (the mathematical expectation of v^k), k = -1, 1, 2, 3, 4. The data are calculated under two different assumptions: taking into account the gravitation of the target body, or neglecting it. The main results are presented in a series of tables, each containing five numbers and several useful functions of them.
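
Such moments can be estimated directly from a sample of encounter speeds. The sketch below uses a hypothetical uniform speed distribution (not the paper's asteroidal population) and the standard energy-conservation boost v = sqrt(v∞² + v_esc²) for the case where the target body's gravitation is included:

```python
import math
import random

def moments(vs, ks=(-1, 1, 2, 3, 4)):
    """Sample estimates of the moments E[v^k] of a collisional-velocity field."""
    n = len(vs)
    return {k: sum(v**k for v in vs) / n for k in ks}

def with_gravity(v_inf, v_esc):
    """Impact speed boosted by the target body's gravity (energy conservation)."""
    return math.sqrt(v_inf**2 + v_esc**2)

random.seed(1)
# hypothetical heliocentric encounter speeds (km/s) of asteroidal particles
v_rel = [random.uniform(10.0, 40.0) for _ in range(10000)]
m_free = moments(v_rel)                                    # without gravitation
m_grav = moments([with_gravity(v, 11.2) for v in v_rel])   # Earth: v_esc = 11.2 km/s
```

Gravitational focusing raises every positive moment, while the k = −1 moment illustrates why E[1/v] differs from 1/E[v] (Jensen's inequality), which is why the tables need all five numbers.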

18. Variational Bayesian approximation with scale mixture prior for inverse problems: A numerical comparison between three algorithms

Gharsalli, Leila; Mohammad-Djafari, Ali; Fraysse, Aurélia; Rodet, Thomas

2013-08-01

Our aim is to solve a linear inverse problem using various methods based on the Variational Bayesian Approximation (VBA). We take sparsity into account via a scale mixture prior, more precisely a Student-t model. The joint posterior of the unknowns and the hidden variables of the mixture is approximated via the VBA. Classically, the alternating algorithm is used for this approximation, but it is not the most efficient method. Recently, other optimization algorithms have been proposed: classical iterative optimization algorithms such as the steepest descent method and the conjugate gradient have been studied in the space of the probability densities involved in the Bayesian methodology to treat this problem. The main objective of this work is to present these three algorithms and a numerical comparison of their performances.

19. The Effect of Pansharpening Algorithms on the Resulting Orthoimagery

Agrafiotis, P.; Georgopoulos, A.; Karantzalos, K.

2016-06-01

This paper evaluates the geometric effects of pansharpening algorithms on automatically generated DSMs, and thus on the resulting orthoimagery, through a quantitative assessment of the accuracy of the end products. The main motivation is that automatically generated Digital Surface Models rely on an image correlation step to extract correspondences between the overlapping images. Their accuracy and reliability are therefore strictly related to image quality, and pansharpening may lower image quality, which may affect the DSM generation and the resulting orthoimage accuracy. To this end, an iterative methodology was applied in order to combine the process described by Agrafiotis and Georgopoulos (2015) with different pansharpening algorithms and to check the accuracy of orthoimagery resulting from pansharpened data. The results are thoroughly examined and statistically analysed. The overall evaluation indicated that the pansharpening process did not affect the geometric accuracy of the resulting DSM at a 10 m interval, nor the resulting orthoimagery. Although some residuals were observed in the orthoimages, their magnitude does not adversely affect the accuracy of the final orthoimagery.

20. Stochastic models and numerical algorithms for a class of regulatory gene networks.

PubMed

Fournier, Thomas; Gabriel, Jean-Pierre; Pasquier, Jerôme; Mazza, Christian; Galbete, José; Mermod, Nicolas

2009-08-01

Regulatory gene networks contain generic modules, like those involving feedback loops, which are essential for the regulation of many biological functions (Guido et al. in Nature 439:856-860, 2006). We consider a class of self-regulated genes which are the building blocks of many regulatory gene networks, and study the steady-state distribution of the associated Gillespie algorithm by providing efficient numerical algorithms. We also study a regulatory gene network of interest in gene therapy, using mean-field models with time delays. Convergence of the related time-nonhomogeneous Markov chain is established for a class of linear catalytic networks with feedback loops.
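
A minimal Gillespie simulation of a hypothetical self-repressed gene conveys the idea: production is slowed by the protein's own level, degradation is first-order, and the steady-state distribution is sampled by running many replicas. All parameters below are invented for illustration, not taken from the paper.

```python
import random

def gillespie_selfregulated(t_end, n0=0, k0=5.0, K=10.0, gamma=0.1, seed=0):
    """Gillespie SSA for a self-repressed gene: production propensity
    k0*K/(K+n) (negative feedback), degradation propensity gamma*n.
    Returns the protein copy number n at time t_end."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    while True:
        a_prod = k0 * K / (K + n)        # propensity of producing one protein
        a_deg = gamma * n                # propensity of degrading one protein
        a_tot = a_prod + a_deg
        t += rng.expovariate(a_tot)      # exponential waiting time to next event
        if t > t_end:
            return n
        if rng.random() * a_tot < a_prod:
            n += 1                       # production fired
        else:
            n -= 1                       # degradation fired
```

The deterministic fixed point k0*K/(K+n) = gamma*n gives n ≈ 17.7 for these values; averaging many stochastic replicas at a late time recovers a steady-state mean close to it.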

1. A modeling and numerical algorithm for thermoporomechanics in multiple porosity media for naturally fractured reservoirs

Kim, J.; Sonnenthal, E. L.; Rutqvist, J.

2011-12-01

Rigorous modeling of coupling between fluid, heat, and geomechanics (thermo-poro-mechanics) in fractured porous media is one of the important and difficult topics in geothermal reservoir simulation, because the physics are highly nonlinear and strongly coupled. Coupled fluid/heat flow and geomechanics are investigated using the multiple interacting continua (MINC) method as applied to naturally fractured media. In this study, we generalize constitutive relations for the isothermal elastic dual porosity model proposed by Berryman (2002) to those for the non-isothermal elastic/elastoplastic multiple porosity model, and derive the coupling coefficients of coupled fluid/heat flow and geomechanics and constraints of the coefficients. When the off-diagonal terms of the total compressibility matrix for the flow problem are zero, the upscaled drained bulk modulus for geomechanics becomes the harmonic average of drained bulk moduli of the multiple continua. In this case, the drained elastic/elastoplastic moduli for mechanics are determined by a combination of the drained moduli and volume fractions in multiple porosity materials. We also determine a relation between the local strains of all multiple porosity materials in a gridblock and the global strain of the gridblock, from which we can track local and global elastic/plastic variables. For elastoplasticity, the return mapping is performed for all multiple porosity materials in the gridblock. For numerical implementation, we employ and extend the fixed-stress sequential method of the single porosity model to coupled fluid/heat flow and geomechanics in multiple porosity systems, because it provides numerical stability and high accuracy. This sequential scheme can be easily implemented by using a porosity function and its corresponding porosity correction, making use of the existing robust flow and geomechanics simulators. We implemented the proposed modeling and numerical algorithm in the reaction transport simulator

2. A numerical algorithm for the explicit calculation of SU(N) and SL(N,C) Clebsch-Gordan coefficients

SciTech Connect

Alex, Arne; Delft, Jan von; Kalus, Matthias; Huckleberry, Alan

2011-02-15

We present an algorithm for the explicit numerical calculation of SU(N) and SL(N,C) Clebsch-Gordan coefficients, based on the Gelfand-Tsetlin pattern calculus. Our algorithm is well suited for numerical implementation; we include a computer code in an appendix. Our exposition presumes only familiarity with the representation theory of SU(2).
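
For the SU(2) case assumed familiar here, the coefficients can be computed directly from Racah's closed formula rather than the Gelfand-Tsetlin pattern calculus the paper actually uses; a pure-Python sketch:

```python
from math import factorial, sqrt

def _f(x):
    """Factorial of a value that is a non-negative integer for valid inputs."""
    n = round(x)
    assert abs(x - n) < 1e-9 and n >= 0, "invalid quantum numbers"
    return factorial(n)

def cg(j1, m1, j2, m2, J, M):
    """Clebsch-Gordan coefficient <j1 m1; j2 m2 | J M> for SU(2) via Racah's
    formula. Half-integer spins are passed as floats, e.g. 0.5."""
    if abs(m1 + m2 - M) > 1e-9:          # selection rule M = m1 + m2
        return 0.0
    pref = (2 * J + 1) * _f(J + j1 - j2) * _f(J - j1 + j2) * _f(j1 + j2 - J) \
        / _f(j1 + j2 + J + 1)
    pref *= _f(J + M) * _f(J - M) * _f(j1 - m1) * _f(j1 + m1) \
        * _f(j2 - m2) * _f(j2 + m2)
    # k runs over all values keeping every factorial argument non-negative
    kmin = max(0, round(j2 - J - m1), round(j1 + m2 - J))
    kmax = min(round(j1 + j2 - J), round(j1 - m1), round(j2 + m2))
    s = sum((-1) ** k / (_f(k) * _f(j1 + j2 - J - k) * _f(j1 - m1 - k)
                         * _f(j2 + m2 - k) * _f(J - j1 - m2 + k)
                         * _f(J - j2 + m1 + k))
            for k in range(kmin, kmax + 1))
    return sqrt(pref) * s
```

For example, coupling two spin-1/2 states to the singlet gives cg(0.5, 0.5, 0.5, -0.5, 0, 0) = 1/√2. The factorial-based formula overflows gracefully in Python (arbitrary-precision integers), but the point of the paper's pattern calculus is precisely to scale this computation to SU(N).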

3. Numerical algorithms for estimation and calculation of parameters in modeling pest population dynamics and evolution of resistance.

PubMed

Shi, Mingren; Renton, Michael

2011-10-01

Computational simulation models can provide a way of understanding and predicting insect population dynamics and evolution of resistance, but the usefulness of such models depends on generating or estimating the values of key parameters. In this paper, we describe four numerical algorithms for generating or estimating key parameters used in simulating four different processes within such models. First, we describe a novel method to generate an offspring genotype table for one- or two-locus genetic models for simulating evolution of resistance, and how this method can be extended to create offspring genotype tables for models with more than two loci. Second, we describe how we use a generalized inverse matrix to find a least-squares solution to an over-determined linear system for estimating parameters in probit models of kill rates; this algorithm can also be used to estimate parameters of Freundlich adsorption isotherms. Third, we describe a simple algorithm to randomly select initial frequencies of genotypes, either without any special constraints or with some pre-selected frequencies, and we give a simple method to calculate the "stable" Hardy-Weinberg equilibrium proportions that would result from these initial frequencies. Fourth, we describe how the problem of estimating the intrinsic rate of natural increase of a population can be converted to a root-finding problem, and how the bisection algorithm can then be used to find the rate. We implemented all these algorithms using MATLAB and Python code; the key statements in both codes consist of only a few commands and are given in the appendices. The results of numerical experiments are also provided to demonstrate that our algorithms are valid and efficient.
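
The fourth algorithm's idea can be sketched with the classical Euler-Lotka equation, Σ_x l_x m_x e^(−rx) = 1 (the usual route for turning the intrinsic-rate estimate into a root-finding problem; the paper's exact formulation may differ). Its left-hand side decreases monotonically in r, so bisection applies. The life table below is invented for illustration:

```python
import math

# hypothetical life table: (age x in days, survivorship l_x, fecundity m_x)
LIFE_TABLE = [(5, 0.9, 0.0), (10, 0.8, 20.0), (15, 0.6, 30.0), (20, 0.3, 10.0)]

def euler_lotka(r, table=LIFE_TABLE):
    """f(r) = sum_x l_x * m_x * exp(-r*x) - 1; its root is the intrinsic rate."""
    return sum(l * m * math.exp(-r * x) for x, l, m in table) - 1.0

def bisect(f, lo, hi, tol=1e-10):
    """Plain bisection; f(lo) and f(hi) must bracket the root."""
    assert f(lo) * f(hi) < 0, "endpoints do not bracket a root"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

r = bisect(euler_lotka, 0.0, 2.0)   # intrinsic rate of natural increase
```

Bisection halves the bracket each step, so about 34 iterations reach the 1e-10 tolerance regardless of the life table, which is what makes it a robust default here.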

4. Recent numerical and algorithmic advances within the volume tracking framework for modeling interfacial flows

DOE PAGES

François, Marianne M.

2015-05-28

A review of recent advances made in numerical methods and algorithms within the volume tracking framework is presented. The volume tracking method, also known as the volume-of-fluid method, has become an established numerical approach to model and simulate interfacial flows. Its advantage is its strict mass conservation. However, because the interface is not explicitly tracked but captured via the material volume fraction on a fixed mesh, accurate estimation of the interface position, its geometric properties and modeling of interfacial physics in the volume tracking framework remain difficult. Several improvements have been made over the last decade to address these challenges. In this study, the multimaterial interface reconstruction method via power diagrams, curvature estimation via heights and mean values, and the balanced-force algorithm for surface tension are highlighted.

5. Particle-In-Cell Multi-Algorithm Numerical Test-Bed

Meyers, M. D.; Yu, P.; Tableman, A.; Decyk, V. K.; Mori, W. B.

2015-11-01

We describe a numerical test-bed that allows for the direct comparison of different numerical simulation schemes using only a single code. It is built from the UPIC Framework, which is a set of codes and modules for constructing parallel PIC codes. In this test-bed code, Maxwell's equations are solved in Fourier space in two dimensions. One can readily examine the numerical properties of a real space finite difference scheme by including its operators' Fourier space representations in the Maxwell solver. The fields can be defined at the same location in a simulation cell or can be offset appropriately by half-cells, as in the Yee finite difference time domain scheme. This allows for the accurate comparison of numerical properties (dispersion relations, numerical stability, etc.) across finite difference schemes, or against the original spectral scheme. We have also included different options for the charge and current deposits, including a strict charge conserving current deposit. The test-bed also includes options for studying the analytic time domain scheme, which eliminates numerical dispersion errors in vacuum. We will show examples from the test-bed that illustrate how the properties of some numerical instabilities vary between different PIC algorithms. Work supported by the NSF grant ACI 1339893 and DOE grant DE-SC0008491.
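
For instance, the numerical dispersion relation of the 1D Yee scheme, sin(ωΔt/2) = (cΔt/Δx)·sin(kΔx/2), can be compared directly against the exact ω = ck that a spectral solver reproduces. A small sketch (1D, c = 1, illustrative parameters only, not the test-bed's 2D setup):

```python
import math

def yee_dispersion(k, dx, courant=0.5):
    """Numerical frequency omega(k) of the 1D Yee (FDTD) scheme with c = 1:
    sin(omega*dt/2) = (c*dt/dx) * sin(k*dx/2)."""
    dt = courant * dx
    return 2.0 / dt * math.asin(courant * math.sin(k * dx / 2.0))

dx = 0.1
for k in (0.5, 5.0, 20.0):
    w = yee_dispersion(k, dx)
    # exact (spectral) result is omega = k; the gap is the numerical-dispersion
    # error, which grows toward the grid Nyquist wavenumber
    print(k, w, w - k)
```

Long wavelengths propagate almost exactly while short ones lag (phase velocity below c), and at Courant number 1 the 1D scheme becomes dispersion-free, the "magic time step" that motivates comparing schemes inside one test-bed.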

6. Application of two oriented partial differential equation filtering models on speckle fringes with poor quality and their numerically fast algorithms.

PubMed

Zhu, Xinjun; Chen, Zhanqing; Tang, Chen; Mi, Qinghua; Yan, Xiusheng

2013-03-20

In this paper, we are concerned with denoising experimentally obtained electronic speckle pattern interferometry (ESPI) speckle fringe patterns of poor quality. We extend the application of two existing oriented partial differential equation (PDE) filters, the second-order single oriented PDE filter and the double oriented PDE filter, to two experimentally obtained ESPI speckle fringe patterns of very poor quality, and compare them with other efficient filtering methods: the adaptive weighted filter, the improved nonlinear complex diffusion PDE, and the windowed Fourier transform method. All five filters have been shown to be efficient denoising methods in previously published comparative analyses. The experimental results demonstrate that the two oriented PDE models are applicable to low-quality ESPI speckle fringe patterns. To address the main shortcoming of the two oriented PDE models, we then develop numerically fast algorithms for them based on a Gauss-Seidel strategy. The proposed numerical algorithms greatly accelerate convergence and perform significantly better in terms of computational efficiency. Our numerically fast algorithms extend automatically to some other PDE filtering models.
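
The Gauss-Seidel idea can be illustrated on a much simpler model than the oriented PDEs of the paper: one implicit step of isotropic diffusion, (I − λΔ)u = f, solved by in-place sweeps. This is only a structural sketch; the authors' filters are anisotropic and oriented along the fringe direction.

```python
def gauss_seidel_diffusion(img, lam=1.0, sweeps=50):
    """One implicit diffusion step (I - lam*Laplacian) u = img, solved by
    Gauss-Seidel sweeps. img is a list of lists of floats."""
    h, w = len(img), len(img[0])
    u = [row[:] for row in img]
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                nbrs = []
                if i > 0: nbrs.append(u[i - 1][j])
                if i < h - 1: nbrs.append(u[i + 1][j])
                if j > 0: nbrs.append(u[i][j - 1])
                if j < w - 1: nbrs.append(u[i][j + 1])
                # in-place update: freshly computed neighbors are reused within
                # the same sweep, which is what accelerates convergence over Jacobi
                u[i][j] = (img[i][j] + lam * sum(nbrs)) / (1 + lam * len(nbrs))
    return u
```

Because each update reuses already-updated neighbors, information crosses the grid within a single sweep; a Jacobi iteration, which updates from a frozen copy, needs roughly twice as many sweeps for the same residual on this kind of diagonally dominant system.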

7. Evaluation of a new parallel numerical parameter optimization algorithm for a dynamical system

Duran, Ahmet; Tuncel, Mehmet

2016-10-01

It is important to have a scalable parallel numerical parameter optimization algorithm for dynamical systems used in financial applications, where time limitation is crucial. We use Message Passing Interface parallel programming and present such a new parallel algorithm for parameter estimation. For example, we apply the algorithm to the asset flow differential equations that have been developed and analyzed since 1989 (see [3-6] and references contained therein). We achieved speed-up for some time series on up to 512 cores (see [10]). Unlike [10], in this work we consider more extensive financial market situations, for example in the presence of low volatility, high volatility, and a stock market price at a discount/premium to its net asset value of varying magnitude. Moreover, we evaluated the convergence of the model parameter vector, the nonlinear least squares error, and the maximum improvement factor to quantify the success of the optimization process depending on the number of initial parameter vectors.

8. Numerical nonwavefront-guided algorithm for expansion or recentration of the optical zone

Arba Mosquera, Samuel; Verma, Shwetabh

2014-08-01

Complications may arise from decentered ablations during refractive surgery, resulting from human or mechanical errors. Decentration may cause over- or under-corrections, with patients complaining about seeing glares and halos after the procedure. Customized wavefront-guided treatments are often used to design retreatment procedures. However, due to the limitations of wavefront sensors in precisely measuring very large aberrations, some extreme cases may suffer when retreated with wavefront-guided treatments. We propose a simple and inexpensive numerical (nonwavefront-guided) algorithm to recenter the optical zone (OZ) and to correct the refractive error with minimal tissue removal. Due to its tissue-saving capabilities, this method can benefit patients with critically low residual corneal thickness. Based on the reconstruction of the ablation achieved in the first surgical procedure, we calculate a target ablation (by manipulating the achieved OZ) with adequate centration and an OZ large enough to envelop the achieved ablation. The net ablation map for the retreatment procedure is calculated from the achieved and target ablations and is suitable to expand, recenter, and modulate the lower-order refractive components in a retreatment procedure. The results of our simulations suggest minimal tissue removal with OZ centration and expansion. Enlarging the OZ implies correcting spherical aberrations, whereas inducing centration implies correcting coma. This method shows the potential to improve visual outcomes in extreme cases of retreatment, possibly serving as an uncomplicated and inexpensive alternative to wavefront-guided retreatments.

9. A universal framework for non-deteriorating time-domain numerical algorithms in Maxwell's electrodynamics

Fedoseyev, A.; Kansa, E. J.; Tsynkov, S.; Petropavlovskiy, S.; Osintcev, M.; Shumlak, U.; Henshaw, W. D.

2016-10-01

We present the implementation of the lacunae method, which removes a key difficulty that currently hampers many existing methods for computing unsteady electromagnetic waves on unbounded regions: numerical accuracy and/or stability may deteriorate over long times due to the treatment of artificial outer boundaries. We describe a universal algorithm and software that correct this problem by employing Huygens' principle and the lacunae of Maxwell's equations. The algorithm provides a temporally uniform guaranteed error bound (no deterioration at all), and the software will enable robust electromagnetic simulations in a high-performance computing environment. The methodology applies to any geometry, any scheme, and any boundary condition; it eliminates the long-time deterioration regardless of its origin and how it manifests itself. The lacunae method was first proposed by V. Ryaben'kii and subsequently developed by S. Tsynkov. We have completed development of an innovative numerical methodology for high-fidelity error-controlled modeling of a broad variety of electromagnetic and other wave phenomena. Proof-of-concept 3D computations have been conducted that convincingly demonstrate the feasibility and efficiency of the proposed approach. Our algorithms are being implemented as robust commercial software tools in a standalone module to be combined with existing numerical schemes in several widely used computational electromagnetic codes.

NASA Technical Reports Server (NTRS)

Hafez, Mohammed; Dacles, Jennifer

1989-01-01

The numerical analysis of the incompressible Navier-Stokes equations is becoming an important tool in the understanding of fluid flow problems encountered in research as well as in industry. With the advent of supercomputers, more realistic problems can be studied with a wider choice of numerical algorithms. An alternative formulation is presented for viscous incompressible flows: the incompressible Navier-Stokes equations are cast in a velocity/vorticity formulation, which consists of solving the Poisson equations for the velocity components and the vorticity transport equation. Two numerical algorithms for steady two-dimensional laminar flows are presented. The first method is based on the actual partial differential equations and uses a finite-difference approximation of the governing equations on a staggered grid. The second method uses a finite element discretization, with the vorticity transport equation approximated using a Galerkin approximation and the Poisson equations obtained using a least squares method. The equations are solved efficiently using Newton's method and a banded direct matrix solver (LINPACK). The method is extended to steady three-dimensional laminar flows and applied to a cubic driven cavity using finite difference schemes and a staggered grid arrangement on a Cartesian mesh; the equations are solved iteratively using a plane zebra relaxation scheme. Currently, a two-dimensional, unsteady algorithm is being developed using a generalized coordinate system, with the equations discretized using a finite-volume approach. This work will then be extended to three-dimensional flows.

11. Adaptively resizing populations: Algorithm, analysis, and first results

NASA Technical Reports Server (NTRS)

Smith, Robert E.; Smuda, Ellen

1993-01-01

Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of it involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested, based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically and simulated with expected value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GAs.

12. Busted Butte: Achieving the Objectives and Numerical Modeling Results

SciTech Connect

W.E. Soll; M. Kearney; P. Stauffer; P. Tseng; H.J. Turin; Z. Lu

2002-10-07

The Unsaturated Zone Transport Test (UZTT) at Busted Butte is a mesoscale field/laboratory/modeling investigation designed to address uncertainties associated with flow and transport in the UZ site-process models for Yucca Mountain. The UZTT test facility is located approximately 8 km southeast of the potential Yucca Mountain repository area. The UZTT was designed in two phases, to address five specific objectives in the UZ: the effect of heterogeneities, flow and transport (F&T) behavior at permeability contrast boundaries, migration of colloids, transport models of sorbing tracers, and scaling issues in moving from laboratory scale to field scale. Phase 1A was designed to assess the influence of permeability contrast boundaries in the hydrologic Calico Hills. Visualization of fluorescein movement, mineback rock analyses, and comparison with numerical models demonstrated that F&T are capillary dominated with permeability contrast boundaries distorting the capillary flow. Phase 1B was designed to assess the influence of fractures on F&T and colloid movement. The injector in Phase 1B was located at a fracture, while the collector, 30 cm below, was placed at what was assumed to be the same fracture. Numerical simulations of nonreactive (Br) and reactive (Li) tracers show the experimental data are best explained by a combination of molecular diffusion and advective flux. For Phase 2, a numerical model with homogeneous unit descriptions was able to qualitatively capture the general characteristics of the system. Numerical simulations and field observations revealed a capillary dominated flow field. Although the tracers showed heterogeneity in the test block, simulation using heterogeneous fields did not significantly improve the data fit over homogeneous field simulations. In terms of scaling, simulations of field tracer data indicate a hydraulic conductivity two orders of magnitude higher than measured in the laboratory. Simulations of Li, a weakly sorbing tracer

13. Coordinate Systems, Numerical Objects and Algorithmic Operations of Computational Experiment in Fluid Mechanics

Degtyarev, Alexander; Khramushin, Vasily

2016-02-01

The paper deals with the computer implementation of direct computational experiments in fluid mechanics, constructed on the basis of the approach developed by the authors. The proposed approach allows the use of an explicit numerical scheme, which is an important condition for increasing the efficiency of the algorithms developed, via numerical procedures with natural parallelism. The paper examines the main objects and operations that let you manage computational experiments and monitor the status of the computation process. Special attention is given to a) realization of tensor representations of numerical schemes for direct simulation; b) realization of representation of large particles of a continuous medium motion in two coordinate systems (global and mobile); c) computing operations in the projections of coordinate systems, and direct and inverse transformation in these systems. Particular attention is paid to the use of hardware and software of modern computer systems.

14. Cell light scattering characteristic numerical simulation research based on FDTD algorithm

Lin, Xiaogang; Wan, Nan; Zhu, Hao; Weng, Lingdong

2017-01-01

In this study, the finite-difference time-domain (FDTD) algorithm is used to solve the cell light scattering problem. Before carrying out the simulation comparison, it is necessary to identify the differences between normal cells and abnormal cells, which may be cancerous or maldeveloped. Preparation for the simulation involves building a simple cell model consisting of organelles, a nucleus, and cytoplasm, and setting a suitable mesh precision. Setting up the total-field/scattered-field source as the excitation source and a far-field projection analysis group is also important. Each step must be justified by mathematical principles such as numerical dispersion, the perfectly matched layer boundary condition, and near-to-far-field extrapolation. The simulation results indicate that a change in the position of the nucleus increases the backscattering intensity, and that significant differences in the peak scattering intensity may result from changes in the size of the cytoplasm. This study may help establish regularities from the simulation results, which can be meaningful for the early diagnosis of cancers.

15. Numerical calculations of high-altitude differential charging: Preliminary results

NASA Technical Reports Server (NTRS)

Laframboise, J. G.; Godard, R.; Prokopenko, S. M. L.

1979-01-01

A two-dimensional simulation program was constructed in order to obtain theoretical predictions of floating potential distributions on geostationary spacecraft. The geometry was infinite-cylindrical with angle dependence. Effects of finite spacecraft length on sheath potential profiles can be included in an approximate way. The program can treat either steady-state conditions or slowly time-varying situations involving external time scales much larger than particle transit times. Approximate, locally dependent expressions were used to provide space-charge density profiles, while numerical orbit-following was used to calculate surface currents. Ambient velocity distributions were assumed to be isotropic, beam-like, or some superposition of these.

16. Numerical algorithms for computations of feedback laws arising in control of flexible systems

NASA Technical Reports Server (NTRS)

Lasiecka, Irena

1989-01-01

Several continuous models will be examined, which describe flexible structures with boundary or point control/observation. Issues related to the computation of feedback laws are examined (particularly stabilizing feedbacks), with sensors and actuators located either on the boundary or at specific point locations of the structure. One of the main difficulties is the great sensitivity of the system (hyperbolic systems with unbounded control actions) with respect to perturbations caused either by uncertainty in the model or by the errors introduced in implementing numerical algorithms. Thus, special care must be taken in the choice of the appropriate numerical schemes which eventually lead to implementable finite dimensional solutions. Finite dimensional algorithms are constructed on the basis of an a priori analysis of the properties of the original, continuous (infinite dimensional) systems with the following criteria in mind: (1) convergence and stability of the algorithms and (2) robustness (reasonable insensitivity with respect to the unknown parameters of the systems). Examples with mixed finite element methods and spectral methods are provided.

17. Determining residual reduction algorithm kinematic tracking weights for a sidestep cut via numerical optimization.

PubMed

Samaan, Michael A; Weinhandl, Joshua T; Bawab, Sebastian Y; Ringleb, Stacie I

2016-12-01

Musculoskeletal modeling allows for the determination of various parameters during dynamic maneuvers by using in vivo kinematic and ground reaction force (GRF) data as inputs. Differences between experimental and model marker data and inconsistencies in the GRFs applied to these musculoskeletal models may not produce accurate simulations. Therefore, residual forces and moments are applied to these models in order to reduce these differences. Numerical optimization techniques can be used to determine optimal tracking weights of each degree of freedom of a musculoskeletal model in order to reduce differences between the experimental and model marker data as well as residual forces and moments. In this study, the particle swarm optimization (PSO) and simplex simulated annealing (SIMPSA) algorithms were used to determine optimal tracking weights for the simulation of a sidestep cut. The PSO and SIMPSA algorithms were able to produce model kinematics that were within 1.4° of experimental kinematics with residual forces and moments of less than 10 N and 18 Nm, respectively. The PSO algorithm was able to replicate the experimental kinematic data more closely and produce more dynamically consistent kinematic data for a sidestep cut compared to the SIMPSA algorithm. Future studies should use external optimization routines to determine dynamically consistent kinematic data and report the differences between experimental and model data for these musculoskeletal simulations.
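
A minimal global-best PSO loop conveys the optimization idea; the sphere function in the test merely stands in for the kinematic-tracking cost, and all coefficients below are generic textbook values, not those used in the study:

```python
import random

def pso(f, dim, n=20, iters=200, seed=0):
    """Minimal global-best particle swarm optimization of f over R^dim.
    Returns (best position, best value)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.72, 1.49, 1.49        # inertia and acceleration coefficients
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]          # each particle's personal best
    pval = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]  # swarm's global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            v = f(xs[i])
            if v < pval[i]:             # update personal, then global, best
                pbest[i], pval[i] = xs[i][:], v
                if v < gval:
                    gbest, gval = xs[i][:], v
    return gbest, gval
```

Each particle is pulled toward its own best point and the swarm's best point with random weights; no gradients of the cost are needed, which is what makes PSO attractive when the cost is an entire musculoskeletal simulation.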

18. Comparative Study of Algorithms for the Numerical Simulation of Lattice QCD

SciTech Connect

Luz, Fernando H. P.; Mendes, Tereza

2010-11-12

Large-scale numerical simulations are the prime method for a nonperturbative study of QCD from first principles. Although the lattice simulation of the pure-gauge (or quenched-QCD) case may be performed very efficiently on parallel machines, there are several additional difficulties in the simulation of the full-QCD case, i.e. when dynamical quark effects are taken into account. We discuss the main aspects of full-QCD simulations, describing the most common algorithms. We present a comparative analysis of performance for two versions of the hybrid Monte Carlo method (the so-called R and RHMC algorithms), as provided in the MILC software package. We consider two degenerate flavors of light quarks in the staggered formulation, having in mind the case of finite-temperature QCD.

19. Centroid-Based Document Classification Algorithms: Analysis & Experimental Results

DTIC Science & Technology

2000-03-06

in terms of zero-one loss (misclassification rate). Linear Classifiers: Linear classifiers [31] are a family of text categorization learning... a training set and a test set. The error rates of algorithms A and B on the test set are recorded. Let p_A^(i) be the error rate of algorithm A and p_B^(i) be the error rate of algorithm B during trial i, with p^(i) = p_A^(i) - p_B^(i) and p̄ the mean of the p^(i). Then Student's t test can be computed using the statistic:

t = p̄ * sqrt(n) / sqrt( sum_{i=1}^{n} (p^(i) - p̄)^2 / (n-1) )

20. New Concepts in Breast Cancer Emerge from Analyzing Clinical Data Using Numerical Algorithms

PubMed Central

Retsky, Michael

2009-01-01

A small international group has recently challenged fundamental concepts in breast cancer. As a guiding principle in therapy, it has long been assumed that breast cancer growth is continuous. However, this group suggests tumor growth commonly includes extended periods of quasi-stable dormancy. Furthermore, surgery to remove the primary tumor often awakens distant dormant micrometastases. Accordingly, over half of all relapses in breast cancer are accelerated in this manner. This paper describes how a numerical algorithm was used to come to these conclusions. Based on these findings, a dormancy preservation therapy is proposed. PMID:19440287

1. Two-dimensional atmospheric transport and chemistry model - Numerical experiments with a new advection algorithm

NASA Technical Reports Server (NTRS)

Shia, Run-Lie; Ha, Yuk Lung; Wen, Jun-Shan; Yung, Yuk L.

1990-01-01

Extensive testing of the advective scheme proposed by Prather (1986) has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. The original scheme is generalized to include higher-order moments. In addition, it is shown how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. By comparison with analytic solutions, it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.

2. Zone Based Hybrid Feature Extraction Algorithm for Handwritten Numeral Recognition of South Indian Scripts

Rajashekararadhya, S. V.; Ranjan, P. Vanaja

India is a multi-lingual, multi-script country, with eighteen officially accepted scripts and over a hundred regional languages. In this paper we propose a zone-based hybrid feature extraction scheme for the recognition of off-line handwritten numerals of south Indian scripts. The character centroid is computed and the image (character/numeral) is further divided into n equal zones. The average distance and average angle from the character centroid to the pixels present in each zone are computed (two features). Similarly, the zone centroid is computed (two more features). This procedure is repeated sequentially for all the zones/grids/boxes present in the numeral image. For zones that are empty, the corresponding entries in the feature vector are set to zero. Finally, 4*n such features are extracted. A nearest neighbor classifier is used for subsequent classification and recognition. We obtained 97.55%, 94%, 92.5% and 95.2% recognition rates for Kannada, Telugu, Tamil and Malayalam numerals, respectively.
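
As a rough illustration of the zone-based features described above, the sketch below computes only the two character-centroid features (average distance and average angle) per zone for a binary image; the zone-centroid features and the nearest neighbor classifier are omitted, and the square zone grid is an assumption:

```python
import math

def zone_features(image, zones_per_side):
    """Per zone: average distance and average angle from the character
    centroid to the zone's foreground pixels; empty zones contribute zeros.
    `image` is a list of rows of 0/1 values."""
    pixels = [(x, y) for y, row in enumerate(image)
                     for x, v in enumerate(row) if v]
    cx = sum(x for x, _ in pixels) / len(pixels)   # character centroid
    cy = sum(y for _, y in pixels) / len(pixels)
    h, w = len(image), len(image[0])
    zh, zw = h / zones_per_side, w / zones_per_side
    features = []
    for zy in range(zones_per_side):
        for zx in range(zones_per_side):
            zone = [(x, y) for x, y in pixels
                    if zx * zw <= x < (zx + 1) * zw
                    and zy * zh <= y < (zy + 1) * zh]
            if not zone:               # empty zone -> zero-valued features
                features += [0.0, 0.0]
                continue
            features.append(sum(math.hypot(x - cx, y - cy)
                                for x, y in zone) / len(zone))
            features.append(sum(math.atan2(y - cy, x - cx)
                                for x, y in zone) / len(zone))
    return features

# Toy 4x4 "numeral" with one pixel per corner, split into 2x2 zones:
f = zone_features([[1, 0, 0, 1],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0],
                   [1, 0, 0, 1]], 2)
```

With two features per zone this yields 2·n² values; the paper's full scheme adds the zone-centroid pair to reach 4 features per zone.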

3. A numerical algorithm with preference statements to evaluate the performance of scientists.

PubMed

Ricker, Martin

Academic evaluation committees have become increasingly receptive to using the number of published indexed articles, as well as citations, to evaluate the performance of scientists. It is, however, impossible to develop a stand-alone, objective numerical algorithm for the evaluation of academic activities, because any evaluation necessarily includes subjective preference statements. In a market, prices represent preference statements, but scientists work largely in a non-market context. I propose a numerical algorithm that serves to determine the distribution of reward money in Mexico's evaluation system, using relative prices of scientific goods and services as input. The relative prices would be determined by an evaluation committee. In this way, large evaluation systems (like Mexico's Sistema Nacional de Investigadores) could work semi-automatically, but not arbitrarily or superficially, to determine quantitatively the academic performance of scientists every few years. Data for 73 scientists from the Biology Institute of Mexico's National University are analyzed, and it is shown that the reward assignment and academic priorities depend heavily on those preferences. A maximum number of products or activities to be evaluated is recommended, to encourage quality over quantity.
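
The paper's algorithm is not reproduced here, but its core idea, committee-chosen relative prices acting as preference statements that convert product counts into a reward distribution, can be sketched as follows (the names, prices, and counts are hypothetical):

```python
def allocate_rewards(budget, relative_prices, product_counts):
    """Score each scientist by the price-weighted sum of his or her
    evaluated products, then split the budget in proportion to scores."""
    scores = {name: sum(relative_prices[kind] * counts.get(kind, 0)
                        for kind in relative_prices)
              for name, counts in product_counts.items()}
    total = sum(scores.values())
    return {name: budget * score / total for name, score in scores.items()}

# Hypothetical committee prices and product counts:
rewards = allocate_rewards(
    budget=120.0,
    relative_prices={"indexed_article": 3.0, "thesis_supervised": 1.0},
    product_counts={"A": {"indexed_article": 2},
                    "B": {"indexed_article": 1, "thesis_supervised": 3}})
```

Changing the relative prices changes the allocation, which is the paper's point: the preference statements, not the counting, drive the outcome.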

4. A theory of scintillation for two-component power law irregularity spectra: Overview and numerical results

Carrano, Charles S.; Rino, Charles L.

2016-06-01

We extend the power law phase screen theory for ionospheric scintillation to account for the case where the refractive index irregularities follow a two-component inverse power law spectrum. The two-component model includes, as special cases, an unmodified power law and a modified power law with spectral break that may assume the role of an outer scale, intermediate break scale, or inner scale. As such, it provides a framework for investigating the effects of a spectral break on the scintillation statistics. Using this spectral model, we solve the fourth moment equation governing intensity variations following propagation through two-dimensional field-aligned irregularities in the ionosphere. A specific normalization is invoked that exploits self-similar properties of the structure to achieve a universal scaling, such that different combinations of perturbation strength, propagation distance, and frequency produce the same results. The numerical algorithm is validated using new theoretical predictions for the behavior of the scintillation index and intensity correlation length under strong scatter conditions. A series of numerical experiments are conducted to investigate the morphologies of the intensity spectrum, scintillation index, and intensity correlation length as functions of the spectral indices and strength of scatter; retrieve phase screen parameters from intensity scintillation observations; explore the relative contributions to the scintillation due to large- and small-scale ionospheric structures; and quantify the conditions under which a general spectral break will influence the scintillation statistics.

5. Aeolian Simulations: A Comparison of Numerical and Experimental Results

Mathews, O.; Burr, D. M.; Bridges, N. T.; Lyne, J. E.; Marshall, J. R.; Greeley, R.; White, B. R.; Hills, J.; Smith, K.; Prissel, T. C.; Aliaga-Caro, J. F.

2010-12-01

Aeolian processes are a major geomorphic agent on solid planetary bodies with atmospheres (Earth, Mars, Venus, and Titan). This paper describes preliminary efforts to model aeolian saltation using computational fluid dynamics (CFD) and to compare the results with those obtained in wind tunnel testing conducted in the Planetary Aeolian Laboratory at NASA Ames Research Center at ambient pressure. The end goal of the project is to develop an experimentally validated CFD approach for modeling aeolian sediment transport on Titan and other planetary bodies. The MARSWIT open-circuit tunnel used in this work was specifically designed for atmospheric boundary layer studies. It is a variable-speed, continuous flow tunnel with a test section 1.0 m by 1.2 m in size; the tunnel is able to operate at pressures from 10 millibar to one atmosphere. Flow trips near the tunnel inlet ensure a fully developed, turbulent boundary layer in the test section. Wind speed and axial velocity profiles can be measured with a traversing pitot tube. In this study, sieved walnut shell particles (Greeley et al. 1976) with a density of ~1.1 g/cm3 were used to correlate the low gravity conditions and low sediment density on a body of interest to those of Earth. This sediment was placed in the tunnel, and the freestream airspeed was raised to 5.4 m/s. A Phantom v12 camera imaged the resulting particle motion at 1000 frames per second, and the images were analyzed with the ImageJ open-source software (Fig. 1). Airflow in the tunnel was modeled with FLUENT, a commercial CFD program, using a first-order k-epsilon turbulence model to close the Navier-Stokes equations. These methods produced computational velocity profiles that agree with experimental data to within 5-10%. Once modeling of the flow field had been achieved, an Euler-Lagrangian scheme was employed, treating the particles as spheres and tracking each particle at its center. The particles are assumed to interact with

6. Sediment Pathways Across Trench Slopes: Results From Numerical Modeling

Cormier, M. H.; Seeber, L.; McHugh, C. M.; Fujiwara, T.; Kanamatsu, T.; King, J. W.

2015-12-01

Until the 2011 Mw9.0 Tohoku earthquake, the role of earthquakes as agents of sediment dispersal and deposition at erosional trenches was largely under-appreciated. A series of cruises carried out after the 2011 event has revealed a variety of unsuspected sediment transport mechanisms, such as tsunami-triggered sheet turbidites, suggesting that great earthquakes may in fact be important agents for dispersing sediments across trench slopes. To complement these observational data, we have modeled the pathways of sediments across the trench slope based on bathymetric grids. Our approach assumes that transport direction is controlled by slope azimuth only, and ignores obstacles smaller than 0.6-1 km; these constraints are meant to approximate the behavior of turbidites. Results indicate that (1) most pathways issued from the upper slope terminate near the top of the small frontal wedge, and thus do not reach the trench axis; and (2) sediments transported to the trench axis are instead likely derived from the small frontal wedge or from the subducting Pacific plate. These results are consistent with the stratigraphy imaged in seismic profiles, which reveals that the slope apron does not extend as far as the frontal wedge, and that the thickness of sediments at the trench axis is similar to that on the incoming Pacific plate. We further applied this modeling technique to the Cascadia, Nankai, Middle-America, and Sumatra trenches. Where well-defined canyons carve the trench slopes, sediments from the upper slope may routinely reach the trench axis (e.g., off Costa Rica and Cascadia). In turn, slope basins that are isolated from the canyons' drainage systems must mainly accumulate locally derived sediments. Therefore, their turbiditic infill may be diagnostic of seismic activity only, and not of storm or flood activity. If correct, this would make isolated slope basins ideal targets for paleoseismological investigation.
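
The modeling assumption above (transport direction controlled by slope azimuth only) amounts to tracing steepest-descent paths over a bathymetric grid. A minimal sketch using a 4-neighbour steepest-descent walk on a toy grid; the paper's 0.6-1 km obstacle smoothing is omitted:

```python
def trace_pathway(depth, start):
    """Trace a sediment pathway by steepest descent on a bathymetric grid:
    from `start` = (x, y), repeatedly move to the deepest 4-neighbour
    (depth[y][x], larger = deeper) until no neighbour is deeper."""
    h, w = len(depth), len(depth[0])
    x, y = start
    path = [start]
    while True:
        nbrs = [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < w and 0 <= y + dy < h]
        nx, ny = max(nbrs, key=lambda p: depth[p[1]][p[0]])
        if depth[ny][nx] <= depth[y][x]:   # local sink: pathway terminates
            return path
        x, y = nx, ny
        path.append((x, y))

# Toy grid deepening toward the lower-right corner:
path = trace_pathway([[1, 2, 3],
                      [2, 3, 4],
                      [3, 4, 5]], (0, 0))
```

Pathways that terminate at a local sink before reaching the deepest cell correspond to sediments trapped on the slope rather than delivered to the trench axis.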

7. The weirdest SDSS galaxies: results from an outlier detection algorithm

Baron, Dalya; Poznanski, Dovi

2017-03-01

How can we discover objects we did not know existed within the large data sets that now abound in astronomy? We present an outlier detection algorithm that we developed, based on an unsupervised Random Forest. We test the algorithm on more than two million galaxy spectra from the Sloan Digital Sky Survey and examine the 400 galaxies with the highest outlier score. We find objects which have extreme emission line ratios and abnormally strong absorption lines, objects with unusual continua, including extremely reddened galaxies. We find galaxy-galaxy gravitational lenses, double-peaked emission line galaxies and close galaxy pairs. We find galaxies with high ionization lines, galaxies that host supernovae and galaxies with unusual gas kinematics. Only a fraction of the outliers we find were reported by previous studies that used specific and tailored algorithms to find a single class of unusual objects. Our algorithm is general and detects all of these classes, and many more, regardless of what makes them peculiar. It can be executed on imaging, time series and other spectroscopic data, operates well with thousands of features, is not sensitive to missing values and is easily parallelizable.

8. Fast Numerical Algorithms for 3-D Scattering from PEC and Dielectric Random Rough Surfaces in Microwave Remote Sensing

Zhang, Lisha

We present fast and robust numerical algorithms for 3-D scattering from perfectly electrically conducting (PEC) and dielectric random rough surfaces in microwave remote sensing. The Coifman wavelets, or Coiflets, are employed to implement Galerkin's procedure in the method of moments (MoM). Due to the high-precision one-point quadrature, the Coiflets yield fast evaluation of most off-diagonal entries, reducing the matrix fill effort from O(N²) to O(N). The orthogonality and Riesz basis of the Coiflets generate a well-conditioned impedance matrix, with rapid convergence for the conjugate gradient solver. The resulting impedance matrix is further sparsified by the matrix-formed standard fast wavelet transform (SFWT). By properly selecting multiresolution levels of the total transformation matrix, the solution precision can be enhanced while matrix sparsity and memory consumption are not noticeably sacrificed. The unified fast scattering algorithm for dielectric random rough surfaces reduces asymptotically to the PEC case when the loss tangent grows extremely large. Numerical results demonstrate that the reduced PEC model does not suffer from ill-posedness. Compared with previous publications and laboratory measurements, good agreement is observed.

9. Numerical algorithms for highly oscillatory dynamic system based on commutator-free method

Li, Wencheng; Deng, Zichen; Zhang, Suying

2007-04-01

In the present paper, an efficiently improved modified Magnus integrator algorithm based on a commutator-free method is proposed for second-order dynamic systems with time-dependent high frequencies. First, the second-order dynamic system is transformed to a new frame of reference by introducing a new variable, so that the highly oscillatory behaviour inherited from the entries is made explicit. Then the modified Magnus integrator method, based on local linearization, is designed for solving the new form, and some optimized strategies for reducing the number of function evaluations and matrix operations are suggested. Finally, several numerical examples of highly oscillatory dynamic systems, such as the Airy, Bessel, and Mathieu equations, are presented to demonstrate the validity and effectiveness of the proposed method.
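
A Magnus integrator advances x' = A(t)x by exponentiating an approximation of A over the step. For the oscillator y'' + ω(t)²y = 0 the matrix exponential is available in closed form, so a single exponential-midpoint Magnus step can be written directly; this is a generic illustration, not the paper's modified commutator-free scheme:

```python
import math

def magnus_step(x, t, h, omega):
    """One exponential-midpoint Magnus step for x' = A(t) x with
    A(t) = [[0, 1], [-omega(t)**2, 0]], i.e. y'' + omega(t)**2 y = 0.
    For this matrix, exp(h*A) is a rotation-like matrix known in
    closed form (assumes omega(t) > 0)."""
    w = omega(t + h / 2.0)                 # frequency at the step midpoint
    c, s = math.cos(w * h), math.sin(w * h)
    y, v = x
    return (c * y + (s / w) * v, -w * s * y + c * v)

# Constant frequency omega = 2: one step reproduces y = cos(2t) exactly.
state = magnus_step((1.0, 0.0), 0.0, 1.0, lambda t: 2.0)
```

For slowly varying ω(t) the step stays accurate even when ω·h is large, which is the advantage Magnus-type methods offer over classical explicit integrators on highly oscillatory problems.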

10. A numerical algorithm for optimal feedback gains in high dimensional LQR problems

NASA Technical Reports Server (NTRS)

Banks, H. T.; Ito, K.

1986-01-01

A hybrid method for computing the feedback gains in linear quadratic regulator problems is proposed. The method, which combines the use of a Chandrasekhar-type system with an iteration of the Newton-Kleinman form with variable-acceleration-parameter Smith schemes, is formulated so as to efficiently compute the feedback gains directly rather than solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of our ideas is presented.
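
The Newton-Kleinman iteration mentioned above replaces the quadratic Riccati equation by a sequence of linear Lyapunov equations. In the scalar case both are explicit, which makes for a compact illustration; this toy sketch is not the paper's hybrid Chandrasekhar/Smith scheme:

```python
def newton_kleinman_gain(a, b, q, r, k0, iters=50):
    """Scalar Newton-Kleinman iteration for the LQR gain k = b*p/r, where p
    solves the Riccati equation 2*a*p - (b*p)**2/r + q = 0. Each iteration
    solves the scalar Lyapunov equation 2*(a - b*k)*p + q + r*k**2 = 0;
    k0 must be stabilizing, i.e. a - b*k0 < 0."""
    k = k0
    for _ in range(iters):
        p = (q + r * k * k) / (2.0 * (b * k - a))   # Lyapunov solve
        k = b * p / r                               # gain update
    return k

# a = b = q = r = 1: the Riccati solution is p = 1 + sqrt(2), so k = p.
k = newton_kleinman_gain(a=1.0, b=1.0, q=1.0, r=1.0, k0=2.0)
```

Each iterate stays stabilizing and the convergence is quadratic, which is what makes the iteration attractive as a building block for large-scale gain computations.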

11. International Symposium on Computational Electronics—Physical Modeling, Mathematical Theory, and Numerical Algorithm

Li, Yiming

2007-12-01

This symposium is an open forum for discussion of current trends and future directions in physical modeling, mathematical theory, and numerical algorithms in electrical and electronic engineering. The goal is for computational scientists and engineers, computer scientists, applied mathematicians, physicists, and researchers to present their recent advances and exchange experience. We welcome contributions from researchers in academia and industry. All papers to be presented in this symposium have been carefully reviewed and selected. They cover semiconductor devices, circuit theory, statistical signal processing, design optimization, network design, intelligent transportation systems, and wireless communication. Welcome to this interdisciplinary symposium of the International Conference of Computational Methods in Sciences and Engineering (ICCMSE 2007). We look forward to seeing you in Corfu, Greece!

12. A fast algorithm for Direct Numerical Simulation of natural convection flows in arbitrarily-shaped periodic domains

Angeli, D.; Stalio, E.; Corticelli, M. A.; Barozzi, G. S.

2015-11-01

A parallel algorithm is presented for the Direct Numerical Simulation of buoyancy-induced flows in open or partially confined periodic domains containing immersed cylindrical bodies of arbitrary cross-section. The governing equations are discretized by means of the Finite Volume method on Cartesian grids. A semi-implicit scheme is employed for the diffusive terms, which are treated implicitly on the periodic plane and explicitly along the homogeneous direction, while all convective terms are explicit, via the second-order Adams-Bashforth scheme. The contemporary solution of velocity and pressure fields is achieved by means of a projection method. The numerical resolution of the set of linear equations resulting from discretization is carried out by means of efficient and highly parallel direct solvers. Verification and validation of the numerical procedure are reported in the paper for the case of flow around an array of heated cylindrical rods arranged in a square lattice. Grid independence is assessed in laminar flow conditions, and DNS results in turbulent conditions are presented for two different grids and compared to available literature data, thus confirming the favorable qualities of the method.
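
For reference, the explicit second-order Adams-Bashforth update used above for the convective terms is y_{n+1} = y_n + h·(3f_n − f_{n−1})/2. A minimal scalar sketch, bootstrapped with one forward-Euler step:

```python
def ab2_integrate(f, y0, t0, h, steps):
    """Explicit second-order Adams-Bashforth time stepping:
    y_{n+1} = y_n + h*(3*f_n - f_{n-1})/2, bootstrapped with one
    forward-Euler step since AB2 needs two previous f-values."""
    t, y = t0, y0
    f_prev = f(t, y)
    y += h * f_prev                        # Euler bootstrap step
    t += h
    for _ in range(steps - 1):
        f_now = f(t, y)
        y += h * (1.5 * f_now - 0.5 * f_prev)
        f_prev = f_now
        t += h
    return y

# y' = y, y(0) = 1 integrated to t = 1 should approach e.
y1 = ab2_integrate(lambda t, y: y, 1.0, 0.0, 0.001, 1000)
```

Because AB2 reuses the previous right-hand-side evaluation, it needs only one new evaluation per step, which is why it is a common choice for the explicit part of semi-implicit Navier-Stokes solvers.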

13. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC, Version 2.0: User's Manual

NASA Technical Reports Server (NTRS)

Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

1987-01-01

The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and the NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through October 16, 1987. The technical manual describes the NASARC concept and the algorithms which are used to implement it. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions have been incorporated in the Version 2.0 software over prior versions. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit into the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time reducing computer run time.

14. Numerical arc segmentation algorithm for a radio conference-NASARC (version 2.0) technical manual

NASA Technical Reports Server (NTRS)

Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

1987-01-01

The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of NASARC software development through October 16, 1987. The Technical Manual describes the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operating instructions. Significant revisions have been incorporated in the Version 2.0 software. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit within the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time effecting an overall reduction in computer run time.

15. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC (version 4.0) technical manual

NASA Technical Reports Server (NTRS)

Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

1988-01-01

The information contained in the NASARC (Version 4.0) Technical Manual and NASARC (Version 4.0) User's Manual relates to the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbits. Array dimensions within the software were structured to fit within the currently available 12 megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

16. Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC), version 4.0: User's manual

NASA Technical Reports Server (NTRS)

Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

1988-01-01

The information in the NASARC (Version 4.0) Technical Manual (NASA-TM-101453) and NASARC (Version 4.0) User's Manual (NASA-TM-101454) relates to the state of Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbit. Array dimensions within the software were structured to fit within the currently available 12-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

17. An Implicit Algorithm for the Numerical Simulation of Shape-Memory Alloys

SciTech Connect

Becker, R; Stolken, J; Jannetti, C; Bassani, J

2003-10-16

Shape-memory alloys (SMA) have the potential to be used in a variety of interesting applications due to their unique properties of pseudoelasticity and the shape-memory effect. However, in order to design SMA devices efficiently, a physics-based constitutive model is required to accurately simulate the behavior of shape-memory alloys. The scope of this work is to extend the numerical capabilities of the SMA constitutive model developed by Jannetti et al. (2003) to handle large-scale polycrystalline simulations. The constitutive model is implemented within the finite-element software ABAQUS/Standard using a user-defined material subroutine, or UMAT. To improve the efficiency of the numerical simulations, so that polycrystalline specimens of shape-memory alloys can be modeled, a fully implicit algorithm has been implemented to integrate the constitutive equations. Using an implicit integration scheme increases the efficiency of the UMAT over the previously implemented explicit integration method by a factor of more than 100 for single crystal simulations.

18. Artificial algae algorithm with multi-light source for numerical optimization and applications.

PubMed

Uymaz, Sait Ali; Tezel, Gulay; Yel, Esra

2015-12-01

The artificial algae algorithm (AAA), one of the recently developed bio-inspired optimization algorithms, was introduced by inspiration from the living behaviors of microalgae. In AAA, the modification of the algal colonies, i.e. exploration and exploitation, is provided by a helical movement. In this study, AAA was modified by implementing multi-light-source movement, establishing the artificial algae algorithm with multi-light source (AAAML). In this new version, we propose the selection of a different light source for each dimension modified by the helical movement, for a stronger balance between exploration and exploitation. These light sources are selected by the tournament method and are different from one another, which yields different solutions in the search space. The best of these three light sources provides orientation to the better region of the search space; diversity in the search space is obtained from the worst light source; and the remaining light source improves the balance. To indicate the performance of AAA with the new proposed operators (AAAML), experiments were performed on two different sets. Firstly, the performance of AAA and AAAML was evaluated on the IEEE-CEC'13 benchmark set. The second set consisted of real-world optimization problems used in the IEEE-CEC'11. To verify the effectiveness and efficiency of the proposed algorithm, the results were compared with other state-of-the-art hybrid and modified algorithms. Experimental results showed that the multi-light-source movement (MLS) increases the success of the AAA.
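
The abstract's two key ingredients, tournament selection of a light source and a helical movement toward it applied dimension by dimension, can be sketched roughly as follows; the update formula here is a simplified stand-in for AAA's actual helical movement:

```python
import math
import random

def tournament_select(fitness, k):
    """Index of the best (lowest-fitness) of k randomly drawn candidates."""
    contenders = random.sample(range(len(fitness)), k)
    return min(contenders, key=lambda i: fitness[i])

def helical_move(colony, light, dim, step):
    """Move one dimension of an algal colony toward a light source with a
    random cosine factor - a simplified stand-in for the helical movement."""
    alpha = random.uniform(0.0, 2.0 * math.pi)
    moved = list(colony)
    moved[dim] += (light[dim] - colony[dim]) * step * math.cos(alpha)
    return moved

fitness = [5.0, 1.0, 3.0, 4.0]
best = tournament_select(fitness, k=4)   # k = population size -> global best
moved = helical_move([0.0, 0.0], [1.0, 1.0], dim=0, step=0.5)
```

Selecting an independent light source per dimension, as AAAML proposes, simply means calling the tournament once for each coordinate before moving it.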

19. Bayesian reconstruction of the cosmological large-scale structure: methodology, inverse algorithms and numerical optimization

Kitaura, F. S.; Enßlin, T. A.

2008-09-01

We address the inverse problem of cosmic large-scale structure reconstruction from a Bayesian perspective. For a linear data model, a number of known and novel reconstruction schemes, which differ in terms of the underlying signal prior, data likelihood and numerical inverse extra-regularization schemes, are derived and classified. The Bayesian methodology presented in this paper tries to unify and extend the following methods: Wiener filtering, Tikhonov regularization, ridge regression, maximum entropy and inverse regularization techniques. The inverse techniques considered here are the asymptotic regularization, the Jacobi, Steepest Descent, Newton-Raphson, Landweber-Fridman and both linear and non-linear Krylov methods based on Fletcher-Reeves, Polak-Ribière and Hestenes-Stiefel conjugate gradients. The structures of the up-to-date highest performing algorithms are presented, based on an operator scheme, which permits one to exploit the power of fast Fourier transforms. Using such an implementation of the generalized Wiener filter in the novel ARGO software package, the different numerical schemes are benchmarked with one-, two- and three-dimensional problems including structured white and Poissonian noise, data windowing and blurring effects. A novel numerical Krylov scheme is shown to be superior in terms of performance and fidelity. These fast inverse methods ultimately will enable the application of sampling techniques to explore complex joint posterior distributions. We outline how the space of the dark matter density field, the peculiar velocity field and the power spectrum can jointly be investigated by a Gibbs-sampling process. Such a method can be applied for the redshift-distortion correction of the observed galaxies and for time-reversal reconstructions of the initial density field.
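
Among the inverse techniques listed, linear conjugate gradients is the simplest to illustrate. A self-contained sketch for a small symmetric positive-definite system, using the Fletcher-Reeves-style ratio for the search-direction update:

```python
def conjugate_gradient(A, b, x0, iters=100, tol=1e-12):
    """Linear conjugate gradients for A x = b, with A symmetric positive
    definite; beta = rs_new / rs_old is the Fletcher-Reeves-style ratio
    used to update the search direction."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n))
                        for i in range(n)]
    x = list(x0)
    Ax = matvec(x)
    r = [b[i] - Ax[i] for i in range(n)]          # residual
    d = list(r)                                   # search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(iters):
        if rs_old < tol:
            break
        Ad = matvec(d)
        alpha = rs_old / sum(d[i] * Ad[i] for i in range(n))
        x = [x[i] + alpha * d[i] for i in range(n)]
        r = [r[i] - alpha * Ad[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        d = [r[i] + (rs_new / rs_old) * d[i] for i in range(n)]
        rs_old = rs_new
    return x

x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0])
```

In the paper's setting the matrix-vector product is never formed explicitly: it is applied as an operator built from fast Fourier transforms, which is what makes Krylov schemes attractive at cosmological problem sizes.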

20. Numerical study of a finite volume scheme for incompressible Navier-Stokes equations based on SIMPLE-family algorithms

Alahyane, M.; Hakim, A.; Raghay, S.

2017-01-01

In this work, we present a numerical study of a finite volume scheme based on the SIMPLE algorithm for the incompressible Navier-Stokes problem. This algorithm is still not applicable to a large category of problems, as can be seen from its stability and convergence behavior, which depend strongly on the relaxation parameter; in some cases the algorithm can exhibit unexpected behavior. In our work we therefore focus on this particular point, to overcome the delicate choice of relaxation parameter and to find a sufficient condition for the convergence of the algorithm in general cases. This is followed by numerical applications in image processing and in a variety of fluid flow problems described by the incompressible Navier-Stokes equations.
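
The sensitivity to the relaxation parameter can be seen already in a scalar under-relaxed fixed-point iteration, the mechanism underlying SIMPLE's relaxed pressure and velocity updates; this is a toy model, not the finite volume scheme itself:

```python
def relaxed_iteration(g, x0, alpha, iters=200):
    """Under-relaxed fixed-point iteration x <- (1 - alpha)*x + alpha*g(x).
    Convergence requires |(1 - alpha) + alpha*g'(x)| < 1 near the fixed
    point, so the admissible alpha range depends on the problem."""
    x = x0
    for _ in range(iters):
        x = (1.0 - alpha) * x + alpha * g(x)
    return x

g = lambda x: -2.0 * x + 3.0        # fixed point x* = 1, but |g'| = 2 > 1
converged = relaxed_iteration(g, 0.0, alpha=0.5)            # contracts
diverged = relaxed_iteration(g, 0.0, alpha=1.0, iters=30)   # blows up
```

With alpha = 1 (no relaxation) the iteration diverges, while alpha = 0.5 converges to the fixed point, mirroring the abstract's observation that stability depends strongly on the relaxation parameter.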

1. An efficient numerical algorithm for computing densely distributed positive interior transmission eigenvalues

Li, Tiexiang; Huang, Tsung-Ming; Lin, Wen-Wei; Wang, Jenn-Nan

2017-03-01

We propose an efficient eigensolver for computing densely distributed spectra of the two-dimensional transmission eigenvalue problem (TEP), which is derived from Maxwell's equations with Tellegen media and the transverse magnetic mode. The governing equations, when discretized by the standard piecewise linear finite element method, give rise to a large-scale quadratic eigenvalue problem (QEP). Our numerical simulation shows that half of the positive eigenvalues of the QEP are densely distributed in some interval near the origin. The quadratic Jacobi-Davidson method with a so-called non-equivalence deflation technique is proposed to compute the dense spectrum of the QEP. Extensive numerical simulations show that our proposed method converges efficiently, even when it needs to compute more than 5000 desired eigenpairs. Numerical results also illustrate that the computed eigenvalue curves can be approximated by nonlinear functions, which can be applied to estimate the denseness of the eigenvalues for the TEP.

2. Numerical algorithms based on Galerkin methods for the modeling of reactive interfaces in photoelectrochemical (PEC) solar cells

Harmon, Michael; Gamba, Irene M.; Ren, Kui

2016-12-01

This work concerns the numerical solution of a coupled system of self-consistent reaction-drift-diffusion-Poisson equations that describes the macroscopic dynamics of charge transport in photoelectrochemical (PEC) solar cells with reactive semiconductor and electrolyte interfaces. We present three numerical algorithms, mainly based on a mixed finite element and a local discontinuous Galerkin method for spatial discretization, with carefully chosen numerical fluxes, and implicit-explicit time stepping techniques, for solving the time-dependent nonlinear systems of partial differential equations. We perform computational simulations under various model parameters to demonstrate the performance of the proposed numerical algorithms as well as the impact of these parameters on the solution to the model.
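
The implicit-explicit (IMEX) time stepping mentioned above treats the stiff part of the system implicitly and the remainder explicitly. A first-order scalar sketch for u' = −a·u + f(u), with the stiff linear decay implicit and the reaction term explicit:

```python
def imex_euler(a, f, u0, h, steps):
    """First-order IMEX Euler for u' = -a*u + f(u):
    (u_{n+1} - u_n)/h = -a*u_{n+1} + f(u_n),
    so each step is the explicit update divided by (1 + h*a)."""
    u = u0
    for _ in range(steps):
        u = (u + h * f(u)) / (1.0 + h * a)
    return u

# Stiff decay with a constant source: steady state u* = f/a = 0.001.
# Fully explicit Euler would be unstable here since h*a = 10 > 2.
u = imex_euler(a=1000.0, f=lambda u: 1.0, u0=1.0, h=0.01, steps=200)
```

The implicit treatment of the stiff linear operator removes its step-size restriction while keeping the nonlinear reaction term cheap to evaluate, the same trade-off exploited by the paper's time stepping for the reaction-drift-diffusion-Poisson system.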

3. Scanning of wind turbine upwind conditions: numerical algorithm and first applications

Calaf, Marc; Cortina, Gerard; Sharma, Varun; Parlange, Marc B.

2014-11-01

Wind turbines still obtain in-situ meteorological information by means of traditional wind vane and cup anemometers installed at the turbine's nacelle, right behind the blades. This has two important drawbacks: (1) turbine misalignment with the mean wind direction is common, and energy losses are experienced; (2) the near-blade monitoring does not provide any time to readjust the profile of the wind turbine to incoming turbulence gusts. A solution is to install wind Lidar devices on the turbine's nacelle. This technique is currently under development as an alternative to traditional in-situ wind anemometry because it can measure the wind vector at substantial distances upwind. However, at what upwind distance should it interrogate the atmosphere? A new flexible wind turbine algorithm for large eddy simulations of wind farms that allows answering this question will be presented. The new wind turbine algorithm promptly corrects the turbines' yaw misalignment with the changing wind. The upwind scanning flexibility of the algorithm also allows tracking of the wind vector and turbulent kinetic energy as they approach the wind turbine's rotor blades. Results will illustrate the spatiotemporal evolution of the wind vector and the turbulent kinetic energy as the incoming flow approaches the wind turbine under different atmospheric stability conditions. Results will also show that the available atmospheric wind power is larger during daytime periods, at the cost of an increased variance.

4. Numerical arc segmentation algorithm for a radio conference: A software tool for communication satellite systems planning

NASA Technical Reports Server (NTRS)

Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

1988-01-01

The Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC) on the Use of the Geostationary Satellite Orbit and the Planning of Space Services Utilizing It. Through careful selection of the predetermined arc (PDA) for each administration, flexibility can be increased in terms of choice of system technical characteristics and specific orbit location while reducing the need for coordination among administrations. The NASARC software determines pairwise compatibility between all possible service areas at discrete arc locations. NASARC then exhaustively enumerates groups of administrations whose satellites can be closely located in orbit, and finds the arc segment over which each such compatible group exists. From the set of all possible compatible groupings, groups and their associated arc segments are selected using a heuristic procedure such that a PDA is identified for each administration. Various aspects of the NASARC concept and how the software accomplishes specific features of allotment planning are discussed.
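
The grouping step described above, from pairwise compatibilities to groups of mutually compatible administrations, can be sketched with a simple greedy heuristic; this is a hypothetical stand-in for NASARC's actual procedure, which additionally tracks the arc segment over which each group exists:

```python
def greedy_compatible_groups(compat):
    """Greedily grow groups in which every pair of administrations is
    pairwise compatible; `compat` maps name pairs to booleans and is
    treated as symmetric."""
    names = sorted({name for pair in compat for name in pair})
    ok = lambda a, b: compat.get((a, b), compat.get((b, a), False))
    groups = []
    for name in names:
        for group in groups:
            if all(ok(name, member) for member in group):
                group.append(name)
                break
        else:                      # compatible with no existing group
            groups.append([name])
    return groups

groups = greedy_compatible_groups(
    {("A", "B"): True, ("A", "C"): False, ("B", "C"): True})
```

Here C is compatible with B but not with A, so it cannot join the group {A, B} and starts a group of its own; NASARC instead enumerates all compatible groupings exhaustively before its heuristic selection.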

5. Numerical arc segmentation algorithm for a radio conference: A software tool for communication satellite systems planning

Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

The Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC) on the Use of the Geostationary Satellite Orbit and the Planning of Space Services Utilizing It. Through careful selection of the predetermined arc (PDA) for each administration, flexibility can be increased in terms of choice of system technical characteristics and specific orbit location while reducing the need for coordination among administrations. The NASARC software determines pairwise compatibility between all possible service areas at discrete arc locations. NASARC then exhaustively enumerates groups of administrations whose satellites can be closely located in orbit, and finds the arc segment over which each such compatible group exists. From the set of all possible compatible groupings, groups and their associated arc segments are selected using a heuristic procedure such that a PDA is identified for each administration. Various aspects of the NASARC concept and how the software accomplishes specific features of allotment planning are discussed.

6. A parallel hybrid numerical algorithm for simulating gas flow and gas discharge of an atmospheric-pressure plasma jet

Lin, K.-M.; Hu, M.-H.; Hung, C.-T.; Wu, J.-S.; Hwang, F.-N.; Chen, Y.-S.; Cheng, G.

2012-12-01

The development of a hybrid numerical algorithm that weakly couples the gas flow model (GFM) and the plasma fluid model (PFM) for simulating an atmospheric-pressure plasma jet (APPJ), together with two approaches for accelerating it, is presented. The weak coupling between gas flow and discharge is introduced by transferring results between the steady-state solution of the GFM and the cycle-averaged solution of the PFM. The approaches for reducing the overall runtime are parallel computing of the GFM and PFM solvers, and a temporal multi-scale method (TMSM) for the PFM. Parallel computing of both solvers is realized using the domain decomposition method with the message passing interface (MPI) on distributed-memory machines. The TMSM considers only chemical reactions, ignoring the transport terms, when temporally integrating the continuity equations of heavy species at each time step; the transport terms are restored only at an interval of time-marching steps. The total reduction of runtime is 47% when the TMSM is applied to the APPJ example presented in this study. Application of the proposed hybrid algorithm is demonstrated by simulating a parallel-plate helium APPJ impinging onto a substrate, for which the cycle-averaged properties of the 200th cycle are presented. The distribution patterns of the species densities are strongly correlated with the background gas flow pattern, which shows that consideration of gas flow in APPJ simulations is critical.
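The temporal multi-scale idea described above can be illustrated with a minimal sketch (a hypothetical 0-D toy model with made-up rate constants, not the authors' solver): the stiff chemistry source term is integrated every step, while the transport contribution is restored only every K steps, scaled by the elapsed interval K*dt.

```python
def step_chemistry(n, dt, k_loss=50.0):
    # toy stiff reaction: dn/dt = -k_loss * n (explicit Euler step)
    return n + dt * (-k_loss * n)

def step_transport(n, dt, flux=1.0):
    # toy transport source: dn/dt = +flux
    return n + dt * flux

def tmsm_march(n0, dt, nsteps, K):
    n = n0
    for i in range(1, nsteps + 1):
        n = step_chemistry(n, dt)
        if i % K == 0:              # restore transport only every K steps
            n = step_transport(n, K * dt)
    return n

full = tmsm_march(1.0, 1e-3, 1000, K=1)    # transport restored every step
tmsm = tmsm_march(1.0, 1e-3, 1000, K=10)   # transport restored every 10 steps
```

Both marches relax to a nearby quasi-steady density while the K=10 variant performs a tenth of the transport updates, which is the source of the runtime savings in the actual 2-D solver.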

7. Using Linear Algebra to Introduce Computer Algebra, Numerical Analysis, Data Structures and Algorithms (and To Teach Linear Algebra, Too).

ERIC Educational Resources Information Center

Gonzalez-Vega, Laureano

1999-01-01

Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM)

8. Numerical experience with a class of algorithms for nonlinear optimization using inexact function and gradient information

NASA Technical Reports Server (NTRS)

Carter, Richard G.

1989-01-01

For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low-accuracy function and gradient values are frequently much less expensive to obtain than high-accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high-accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm is convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.

9. Some Numerical Results for Multipoint Boundary Value Problems Arising in Environmental Protection

Pop, Daniel N.

2016-12-01

In this paper, we investigate two problems arising in pollutant transport in rivers, and we give some numerical results approximating their solutions. We determine the approximate solutions using two numerical methods: (1) B-splines combined with Runge-Kutta methods; (2) the BVP4C solver of MATLAB. We then compare the run-times.

10. An Adaptive Numeric Predictor-corrector Guidance Algorithm for Atmospheric Entry Vehicles. M.S. Thesis - MIT, Cambridge

NASA Technical Reports Server (NTRS)

Spratlin, Kenneth Milton

1987-01-01

An adaptive numeric predictor-corrector guidance algorithm is developed for atmospheric entry vehicles which utilize lift to achieve maximum footprint capability. Applicability of the guidance design to vehicles with a wide range of performance capabilities is desired so as to reduce the need for algorithm redesign with each new vehicle. Adaptability is desired to minimize mission-specific analysis and planning. The guidance algorithm motivation and design are presented. Performance is assessed for application of the algorithm to the NASA Entry Research Vehicle (ERV). The dispersions the guidance must be designed to handle are presented. The achievable operational footprint for expected worst-case dispersions is presented. The algorithm performs excellently for the expected dispersions and captures most of the achievable footprint.

11. LMS learning algorithms: misconceptions and new results on convergence.

PubMed

Wang, Z Q; Manry, M T; Schiano, J L

2000-01-01

The Widrow-Hoff delta rule is one of the most popular rules used in training neural networks. It was originally proposed for the ADALINE, but has been successfully applied to a few nonlinear neural networks as well. Despite its popularity, there exist a few misconceptions about its convergence properties. In this paper we consider repetitive learning (i.e., a fixed set of samples is used for training) and provide an in-depth analysis in the least mean square (LMS) framework. Our main result is that, contrary to common belief, the nonbatch Widrow-Hoff rule does not converge in general; it converges only to a limit cycle.
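The non-convergence claim can be seen in a tiny example (an assumed setup, not taken from the paper): with a fixed, inconsistent training set and cyclic non-batch updates, the scalar Widrow-Hoff rule settles into a period-two limit cycle rather than a fixed point.

```python
def lms_step(w, x, d, mu=0.5):
    # non-batch Widrow-Hoff (delta) rule for a scalar weight
    return w + mu * (d - w * x) * x

# Two inconsistent samples: no single weight fits both targets exactly.
samples = [(1.0, 1.0), (2.0, 0.0)]
w = 0.0
trace = []
for epoch in range(200):
    for x, d in samples:
        w = lms_step(w, x, d)
        trace.append(w)

# After transients die out, the weight alternates between +1/3 and -1/3
# within each epoch: a limit cycle, not convergence to a point.
a, b = trace[-2], trace[-1]
```

Note that neither cycle point equals the least-squares optimum of this toy problem (w = 1/5), which is the kind of discrepancy the paper's analysis makes precise.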

12. Numerical results using the conforming VEM for the convection-diffusion-reaction equation with variable coefficients.

SciTech Connect

Manzini, Gianmarco; Cangiani, Andrea; Sutton, Oliver

2014-10-02

This document presents the results of a set of preliminary numerical experiments using several possible conforming virtual element approximations of the convection-reaction-diffusion equation with variable coefficients.

13. An efficient algorithm for numerical computations of continuous densities of states

Langfeld, K.; Lucini, B.; Pellegrini, R.; Rago, A.

2016-06-01

In Wang-Landau type algorithms, Monte-Carlo updates are performed with respect to the density of states, which is iteratively refined during simulations. The partition function and thermodynamic observables are then obtained by standard integration. In this work, our recently introduced method in this class (the LLR approach) is analysed and further developed. Our approach is a histogram-free method particularly suited for systems with continuous degrees of freedom giving rise to a continuum density of states, as is commonly found in lattice gauge theories and in some statistical mechanics systems. We show that the method possesses an exponential error suppression that allows us to estimate the density of states over several orders of magnitude with nearly constant relative precision. We explain how ergodicity issues can be avoided and how expectation values of arbitrary observables can be obtained within this framework. We then demonstrate the method using compact U(1) lattice gauge theory as a showcase. A thorough study of the dependence of the results on the algorithm parameters is performed and compared with the analytically expected behaviour. We obtain high-precision values for the critical coupling of the phase transition and for the peak value of the specific heat for lattice sizes ranging from 8^4 to 20^4. Our results agree perfectly with the reference values reported in the literature, which cover lattice sizes up to 18^4. Robust results for the 20^4 volume are obtained for the first time. This latter investigation, so far out of reach even on supercomputers with importance sampling approaches due to the strong metastabilities that develop at the pseudo-critical coupling of the system, has been performed to high accuracy with modest computational resources. This shows the potential of the method for studies of first-order phase transitions. Other situations where the method is expected to be superior to importance sampling techniques are pointed out.
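As context for this class of algorithms, a plain Wang-Landau iteration (the baseline the LLR approach improves on, not the LLR method itself) can be sketched on a toy discrete system whose density of states is known exactly: the "energy" is the number of set bits in an N-bit string, so g(E) = C(N, E).

```python
import math
import random

random.seed(1)
N = 8
lng = [0.0] * (N + 1)     # running estimate of log g(E)
hist = [0] * (N + 1)      # visit histogram for the flatness check
state = [0] * N
E = 0
lnf = 1.0                 # modification factor, reduced toward 0

while lnf > 1e-4:
    for _ in range(10000):
        i = random.randrange(N)
        Enew = E + (1 - 2 * state[i])          # a flip changes popcount by +/-1
        # accept with min(1, g(E)/g(Enew)), i.e. sample ~ 1/g(E)
        if random.random() < math.exp(lng[E] - lng[Enew]):
            state[i] ^= 1
            E = Enew
        lng[E] += lnf
        hist[E] += 1
    if min(hist) > 0.8 * sum(hist) / len(hist):   # histogram flat enough?
        hist = [0] * (N + 1)
        lnf /= 2.0

# compare estimated ratios g(E)/g(0) with the exact binomial coefficients
est = [math.exp(lng[e] - lng[0]) for e in range(N + 1)]
exact = [math.comb(N, e) for e in range(N + 1)]
```

The histogram-based flatness check is exactly the ingredient that becomes awkward for continuous densities of states, which motivates the histogram-free LLR formulation in the paper.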

14. The EM/MPM algorithm for segmentation of textured images: analysis and further experimental results.

PubMed

Comer, M L; Delp, E J

2000-01-01

In this paper we present new results relative to the "expectation-maximization/maximization of the posterior marginals" (EM/MPM) algorithm for simultaneous parameter estimation and segmentation of textured images. The EM/MPM algorithm uses a Markov random field model for the pixel class labels and alternately approximates the MPM estimate of the pixel class labels and estimates parameters of the observed image model. The goal of the EM/MPM algorithm is to minimize the expected value of the number of misclassified pixels. We present new theoretical results in this paper which show that the algorithm can be expected to achieve this goal, to the extent that the EM estimates of the model parameters are close to the true values of the model parameters. We also present new experimental results demonstrating the performance of the EM/MPM algorithm.

15. A numerical algorithm to propagate navigation error covariance matrices associated with generalized strapdown inertial measurement units

NASA Technical Reports Server (NTRS)

Weir, Kent A.; Wells, Eugene M.

1990-01-01

The design and operation of a Strapdown Navigation Analysis Program (SNAP) developed to perform covariance analysis on spacecraft inertial-measurement-unit (IMU) navigation errors are described and demonstrated. Consideration is given to the IMU modeling subroutine (with user-specified sensor characteristics), the data input procedures, state updates and the simulation of instrument failures, the determination of the nominal trajectory, the mapping-matrix and Monte Carlo covariance-matrix propagation methods, and aided-navigation simulation. Numerical results are presented in tables for sample applications involving (1) the Galileo/IUS spacecraft from its deployment from the Space Shuttle to a point 10^8 ft from the center of the earth and (2) the TDRS-C/IUS spacecraft from Space Shuttle liftoff to a point about 2 h before IUS deployment. SNAP is shown to give reliable results for both cases, with good general agreement between the mapping-matrix and Monte Carlo predictions.
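The mapping-matrix propagation at the core of such covariance analyses can be sketched in minimal form. This is a hypothetical 1-D position/velocity model with made-up noise values, not the SNAP error models: the covariance advances as P' = Phi P Phi^T + Q.

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def propagate(P, Phi, Q):
    """One covariance propagation step: P' = Phi P Phi^T + Q."""
    out = matmul(matmul(Phi, P), transpose(Phi))
    return [[out[i][j] + Q[i][j] for j in range(len(out))] for i in range(len(out))]

dt = 1.0
Phi = [[1.0, dt], [0.0, 1.0]]   # constant-velocity transition matrix
Q = [[0.0, 0.0], [0.0, 0.01]]   # process noise driving the velocity state
P = [[1.0, 0.0], [0.0, 1.0]]    # initial covariance
for _ in range(10):
    P = propagate(P, Phi, Q)
# velocity error integrates into position, so P[0][0] grows with each step
```

The Monte Carlo alternative mentioned in the abstract would instead draw many error samples, propagate each through the dynamics, and form the sample covariance for comparison.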

16. Comparison of results of experimental research with numerical calculations of a model one-sided seal

Joachimiak, Damian; Krzyślak, Piotr

2015-06-01

This paper presents the results of experimental and numerical research on a model segment of a labyrinth seal at different wear levels. The analysis covers the extent of leakage and the distribution of static pressure in the seal chambers and in the planes upstream and downstream of the segment. The measurement data have been compared with the results of numerical calculations obtained using commercial software. Based on the flow conditions occurring in the area subjected to calculation, the size of the mesh defined by the parameter y+ has been analyzed and the selection of the turbulence model has been described. The numerical calculations were based on the measurable thermodynamic parameters in the seal segments of steam turbines. The work contains a comparison of the mass flow and the distribution of static pressure in the seal chambers obtained during the measurements and calculated numerically for a model seal segment at different levels of wear.

17. Exact and numerical results for a dimerized coupled spin- 1/2 chain

PubMed

Martins; Nienhuis

2000-12-04

We establish exact results for coupled spin-1/2 chains for special values of the four-spin interaction V and dimerization parameter delta. The first exact result is at delta = 1/2 and V = -2. Because we find a very small but finite gap in this dimerized chain, this can serve as a very strong test case for numerical and approximate analytical techniques. The second result is for the homogeneous chain with V = -4 and gives evidence that the system has a spontaneously dimerized ground state. Numerical diagonalization and bosonization techniques indicate that the interplay between dimerization and interaction could result in gapless phases in the regime 0

18. Role of numerical scheme choice on the results of mathematical modeling of combustion and detonation

Yakovenko, I. S.; Kiverin, A. D.; Pinevich, S. G.; Ivanov, M. F.

2016-11-01

The present study discusses the capabilities of the dissipation-free CABARET numerical method applied to modeling unsteady reactive gasdynamic flows. In the framework of the present research, the method was adapted for reactive flows governed by a real-gas equation of state and applied to several typical problems of unsteady gas dynamics and combustion modeling, such as ignition and detonation initiation by localized energy sources. Solutions were thoroughly analyzed and compared with those derived using the modified Euler-Lagrange method of “coarse” particles. The obtained results allowed us to distinguish the range of phenomena where artificial effects of the numerical approach may be mistaken for physical ones, and to develop guidelines for selecting a numerical approach appropriate for modeling unsteady reactive gasdynamic flows.

19. Comparison of experimental results with numerical simulations for pulsed thermographic NDE

Sripragash, Letchuman; Sundaresan, Mannur

2017-02-01

This paper examines pulsed thermographic nondestructive evaluation of flat-bottom holes in isotropic materials. Different combinations of defect diameters and depths are considered. The thermographic signal reconstruction (TSR) method is used to analyze the results. In addition, a new normalization procedure is used to remove the dependence of the thermographic results on material properties and instrumentation settings, so that the normalized results depend only on the geometry of the specimen and the defects. These thermographic NDE procedures were also simulated using the finite element technique for a variety of defect configurations. The data obtained from the numerical simulations were likewise processed using the normalization scheme. Excellent agreement was seen between the results obtained from experiments and numerical simulations. The scheme is therefore extended to a correlation technique in which numerical simulations are used to quantify the defect parameters.

20. Near infrared optical tomography using NIRFAST: Algorithm for numerical model and image reconstruction

PubMed Central

Dehghani, Hamid; Eames, Matthew E.; Yalavarthy, Phaneendra K.; Davis, Scott C.; Srinivasan, Subhadra; Carpenter, Colin M.; Pogue, Brian W.; Paulsen, Keith D.

2009-01-01

Diffuse optical tomography, also known as near infrared tomography, has been under investigation for non-invasive functional imaging of tissue, specifically for the detection and characterization of breast cancer and other soft tissue lesions. Much work has been carried out on accurate modeling and image reconstruction from clinical data. NIRFAST, a modeling and image reconstruction package, has been developed that is capable of single-wavelength and multi-wavelength optical or functional imaging from measured data. The theory behind the modeling techniques as well as the image reconstruction algorithms is presented here, and 2D and 3D examples are presented to demonstrate its capabilities. The results show that 3D modeling can be combined with measured data from multiple wavelengths to reconstruct chromophore concentrations within the tissue. Additionally, it is possible to recover scattering spectra resulting from the dominant Mie-type scatter present in tissue. Overall, this paper gives a comprehensive overview of the modeling techniques used in diffuse optical tomographic imaging, in the context of the NIRFAST software package. PMID:20182646

1. The 3D Kasteleyn transition in dipolar spin ice: a numerical study with the conserved monopoles algorithm.

PubMed

Baez, M L; Borzi, R A

2017-02-08

We study the three-dimensional Kasteleyn transition in both nearest-neighbour and dipolar spin ice models using an algorithm that conserves the number of excitations. We first limit the interaction range to nearest neighbours to test the method in the presence of a field applied along [100], and then focus on the dipolar spin ice model. The effect of dipolar interactions, which is known to be greatly self screened at zero field, is particularly strong near full polarization. It shifts the Kasteleyn transition to lower temperatures, lowering it by ≈0.4 K for the parameters corresponding to the best known spin ice materials, Dy2Ti2O7 and Ho2Ti2O7. This shift implies effective dipolar fields as big as 0.05 T opposing the applied field, and thus favouring the creation of 'strings' of reversed spins. We compare the reduction in the transition temperature with results from previous experiments, and study the phenomenon quantitatively using a simple molecular field approach. Finally, we relate the presence of the effective residual field to the appearance of string-ordered phases at low fields and temperatures, and we check numerically that for fields applied along [100] there are only three different stable phases at zero temperature.

2. The 3D Kasteleyn transition in dipolar spin ice: a numerical study with the conserved monopoles algorithm

Baez, M. L.; Borzi, R. A.

2017-02-01

We study the three-dimensional Kasteleyn transition in both nearest-neighbour and dipolar spin ice models using an algorithm that conserves the number of excitations. We first limit the interaction range to nearest neighbours to test the method in the presence of a field applied along [100], and then focus on the dipolar spin ice model. The effect of dipolar interactions, which is known to be greatly self screened at zero field, is particularly strong near full polarization. It shifts the Kasteleyn transition to lower temperatures, lowering it by ≈0.4 K for the parameters corresponding to the best known spin ice materials, Dy2Ti2O7 and Ho2Ti2O7. This shift implies effective dipolar fields as big as 0.05 T opposing the applied field, and thus favouring the creation of ‘strings’ of reversed spins. We compare the reduction in the transition temperature with results from previous experiments, and study the phenomenon quantitatively using a simple molecular field approach. Finally, we relate the presence of the effective residual field to the appearance of string-ordered phases at low fields and temperatures, and we check numerically that for fields applied along [100] there are only three different stable phases at zero temperature.

3. Numerical simulation study of the dynamical behavior of the Niedermayer algorithm

Girardi, D.; Branco, N. S.

2010-04-01

We calculate the dynamic critical exponent for the Niedermayer algorithm applied to the two-dimensional Ising and XY models, for various values of the free parameter E0. For E0 = -1 we regain the Metropolis algorithm, and for E0 = 1 we regain the Wolff algorithm. For -1 < E0 < 1, we show that the mean size of the clusters of (possibly) turned spins initially grows with the linear size of the lattice, L, but eventually saturates at a given lattice size L̃, which depends on E0. For L > L̃, the Niedermayer algorithm is equivalent to the Metropolis one, i.e., they have the same dynamic exponent. For E0 > 1, the autocorrelation time is always greater than for E0 = 1 (Wolff) and, more importantly, it also grows faster than a power of L. Therefore, we show that the best choice of cluster algorithm is the Wolff one, when compared against the Niedermayer generalization. We also obtain the dynamic behavior of the Wolff algorithm: although not conclusively, we propose a scaling law for the dependence of the autocorrelation time on L.
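The role of the free parameter E0 can be sketched through the cluster-growth step alone. This is a toy 1-D illustration with an assumed form of the bond-activation probability, p = 1 - exp(-K*(E0 + s_i*s_j)) clipped to [0, 1], chosen so that E0 = -1 never activates a bond (single-site, Metropolis-like clusters) while E0 = 1 reproduces the Wolff probability 1 - exp(-2K) for satisfied bonds; the flip-acceptance step of the full algorithm is omitted.

```python
import math
import random

def bond_prob(si, sj, K, E0):
    # assumed Niedermayer-style activation probability, clipped to [0, 1]
    return max(0.0, 1.0 - math.exp(-K * (E0 + si * sj)))

def grow_cluster(spins, seed_site, K, E0, rng):
    """Grow one cluster on a 1-D ring from seed_site using bond_prob."""
    n = len(spins)
    cluster = {seed_site}
    frontier = [seed_site]
    while frontier:
        i = frontier.pop()
        for j in ((i - 1) % n, (i + 1) % n):
            if j not in cluster and rng.random() < bond_prob(spins[i], spins[j], K, E0):
                cluster.add(j)
                frontier.append(j)
    return cluster

rng = random.Random(0)
spins = [1] * 32                                   # fully ordered ring
metro = grow_cluster(spins, 0, 1.0, -1.0, rng)     # E0 = -1: single-site cluster
wolff = grow_cluster(spins, 0, 1.0, 1.0, rng)      # E0 = 1: Wolff-like growth
```

Intermediate E0 values interpolate between these two extremes, which is the regime where the abstract reports cluster sizes saturating at a lattice size depending on E0.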

4. Image restoration by the method of convex projections: part 2 applications and numerical results.

PubMed

Sezan, M I; Stark, H

1982-01-01

The image restoration theory discussed in a previous paper by Youla and Webb [1] is applied to a simulated image, and the results are compared with the well-known Gerchberg-Papoulis algorithm. The results show that the method of image restoration by projection onto convex sets, by providing a convenient technique for utilizing a priori information, performs significantly better than the Gerchberg-Papoulis method.
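The projection idea can be illustrated in miniature (a toy pair of convex sets, not the image-restoration constraints used in the paper): alternating the two projections converges to a point in the intersection of the sets.

```python
def proj_value(p):
    # C1: points whose second coordinate is known to equal 1
    return (p[0], 1.0)

def proj_plane(p):
    # C2: points whose coordinates sum to 3 (orthogonal projection)
    t = (p[0] + p[1] - 3.0) / 2.0
    return (p[0] - t, p[1] - t)

p = (0.0, 0.0)
for _ in range(100):
    p = proj_plane(proj_value(p))
# the iterates approach the unique intersection point (2, 1)
```

In the paper the sets instead encode a priori knowledge about the image (e.g. known support or bounded values), but the alternating-projection mechanics are the same.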

5. Nonlinear instability and chaos in plasma wave-wave interactions. II. Numerical methods and results

SciTech Connect

Kueny, C.S.; Morrison, P.J.

1995-05-01

In Part I of this work [Physics of Plasmas, June 1995], the behavior of linearly stable, integrable systems of waves in a simple plasma model was described using a Hamiltonian formulation. It was shown that explosive instability arises from nonlinear coupling between modes of positive and negative energy, with well-defined threshold amplitudes depending on the physical parameters. In this concluding paper, the nonintegrable case is treated numerically. Several sets of waves are considered, comprising systems of two and three degrees of freedom. The time evolution is modelled with an explicit symplectic integration algorithm derived using Lie algebraic methods. When initial wave amplitudes are large enough to support two-wave decay interactions, strongly chaotic motion destroys the separatrix bounding the stable region for explosive triplets. Phase space orbits then experience diffusive growth to amplitudes that are sufficient for explosive instability, thus effectively reducing the threshold amplitude. For initial amplitudes too small to drive decay instability, small perturbations might still grow to arbitrary size via Arnold diffusion. Numerical experiments do not show diffusion in this case, although the actual diffusion rate is probably underestimated due to the simplicity of the model.

6. Differential evolution algorithm based photonic structure design: numerical and experimental verification of subwavelength λ/5 focusing of light.

PubMed

Bor, E; Turduev, M; Kurt, H

2016-08-01

Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction.
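A generic differential evolution loop of the kind referred to above (the common DE/rand/1/bin variant) can be sketched as follows, minimizing a stand-in objective; the photonic cost function and its FDTD coupling are beyond a short example, so a simple sphere function is used instead.

```python
import random

def de_minimize(f, bounds, pop_size=20, F=0.7, CR=0.9, gens=200, seed=3):
    """DE/rand/1/bin: mutate with a scaled difference vector, crossover, select."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)            # guarantee one mutated gene
            trial = []
            for j in range(dim):
                if j == jrand or rng.random() < CR:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))   # clamp to bounds
                else:
                    trial.append(pop[i][j])
            fc = f(trial)
            if fc <= cost[i]:                     # greedy selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

sphere = lambda x: sum(v * v for v in x)
x, fx = de_minimize(sphere, [(-5.0, 5.0)] * 3)
```

In the paper's setting, the decision variables would instead be the scatterer locations and the objective a multiobjective focusing cost evaluated by FDTD simulation.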

7. Differential evolution algorithm based photonic structure design: numerical and experimental verification of subwavelength λ/5 focusing of light

PubMed Central

Bor, E.; Turduev, M.; Kurt, H.

2016-01-01

Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction. PMID:27477060

8. Differential evolution algorithm based photonic structure design: numerical and experimental verification of subwavelength λ/5 focusing of light

Bor, E.; Turduev, M.; Kurt, H.

2016-08-01

Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction.

9. Numerical modeling of on-orbit propellant motion resulting from an impulsive acceleration

NASA Technical Reports Server (NTRS)

Aydelott, John C.; Mjolsness, Raymond C.; Torrey, Martin D.; Hochstein, John I.

1987-01-01

In-space docking and separation maneuvers of spacecraft that have large fluid mass fractions may cause undesirable spacecraft motion in response to the impulsive-acceleration-induced fluid motion. An example of this potential low gravity fluid management problem arose during the development of the shuttle/Centaur vehicle. Experimentally verified numerical modeling techniques were developed to establish the propellant dynamics, and subsequent vehicle motion, associated with the separation of the Centaur vehicle from the shuttle orbiter cargo bay. Although the shuttle/Centaur development activity was suspended, the numerical modeling techniques are available to predict on-orbit liquid motion resulting from impulsive accelerations for other missions and spacecraft.

10. Numerical modeling of on-orbit propellant motion resulting from an impulsive acceleration

NASA Technical Reports Server (NTRS)

Aydelott, John C.; Mjolsness, Raymond C.; Torrey, Martin D.; Hochstein, John I.

1986-01-01

In-space docking and separation maneuvers of spacecraft that have large fluid mass fractions may cause undesirable spacecraft motion in response to the impulsive-acceleration-induced fluid motion. An example of this potential low gravity fluid management problem arose during the development of the shuttle/Centaur vehicle. Experimentally verified numerical modeling techniques were developed to establish the propellant dynamics, and subsequent vehicle motion, associated with the separation of the Centaur vehicle from the shuttle orbiter cargo bay. Although the shuttle/Centaur development activity was suspended, the numerical modeling techniques are available to predict on-orbit liquid motion resulting from impulsive accelerations for other missions and spacecraft.

11. Real-space, mean-field algorithm to numerically calculate long-range interactions

2016-02-01

Long-range interactions are known to be difficult to treat in statistical mechanics models. Some approaches introduce a cutoff in the interactions or make use of reaction field approaches. However, those treatments are of limited use, in particular close to phase transitions. The use of open boundary conditions allows the long-range interactions to be summed over the entire system; however, this approach demands a sum over all degrees of freedom in the system, which makes a numerical treatment prohibitive. Techniques like the Ewald summation or the fast multipole expansion account for the exact interactions but are still limited to a few thousand particles. In this paper we introduce a novel mean-field approach to treat long-range interactions. The method is based on dividing the system into cells. In the inner cell, which contains the particle in sight, the 'local' interactions are computed exactly; the 'far' contributions are then computed, for each of the remaining cells, as the average interaction of the particles inside that cell with the particle in sight. Using this approach, the large- and small-cell limits are exact. At a fixed cell size, the method also becomes exact in the limit of large lattices. We have applied the procedure to the two-dimensional anisotropic dipolar Heisenberg model. A detailed comparison between our method, the exact calculation, and the cutoff radius approximation was carried out. Our results show that the cutoff-cell approach outperforms any cutoff radius approach, as it maintains the long-range memory present in these interactions, contrary to the cutoff radius approximation. Besides that, we calculated the critical temperature and the critical behavior of the specific heat of the anisotropic Heisenberg model using our method. The results are in excellent agreement with extensive Monte Carlo simulations using Ewald summation.
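A one-dimensional toy version of the cell construction shows the idea (assumed 1/r^3 couplings on a hypothetical chain of sites; the paper treats the 2-D dipolar Heisenberg model): interactions within the local cell are summed exactly, while each far cell contributes through its centroid.

```python
def exact_field(sites, i):
    """Exact 1/r^3 sum over all other sites."""
    return sum(1.0 / abs(sites[i] - s) ** 3 for s in sites if s != sites[i])

def cell_field(sites, i, cell=4):
    """Exact near part plus cell-averaged far part."""
    x = sites[i]
    total = 0.0
    for s in sites:                         # exact sum within the local cell
        if s != x and abs(s - x) <= cell:
            total += 1.0 / abs(x - s) ** 3
    far = [s for s in sites if abs(s - x) > cell]
    for k in range(0, len(far), cell):      # far cells: centroid approximation
        block = far[k:k + cell]
        centroid = sum(block) / len(block)
        total += len(block) / abs(x - centroid) ** 3
    return total

sites = list(range(100))
e = exact_field(sites, 0)
a = cell_field(sites, 0)
# the cell approximation stays close to the exact long-range sum
```

Unlike a hard cutoff, the far cells still contribute, so the long-range tail of the interaction is retained at a cost that grows with the number of cells rather than the number of particles.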

12. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms

PubMed Central

Chen, Deng-kai; Gu, Rong; Gu, Yu-feng; Yu, Sui-huai

2016-01-01

Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design. PMID:27630709

13. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms.

PubMed

Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai

2016-01-01

Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.
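The clustering step described in entries 12-13 can be illustrated with a toy sketch. The weight matrix, the centering on the scale midpoint, and the GA parameters below are demonstration assumptions, not the study's NDSM: a small genetic algorithm searches for the partition of eight adjectives that maximizes within-cluster link weight, with scores centered so that weakly linked adjectives repel each other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy NDSM for 8 Kansei adjectives on a four-point scale (0..3), with two
# planted groups: items 0-3 and items 4-7 are strongly linked internally.
W = np.zeros((8, 8))
W[:4, :4] = 3.0
W[4:, 4:] = 3.0
np.fill_diagonal(W, 0.0)

A = W - 1.5                  # centre on the scale midpoint: weak links repel
np.fill_diagonal(A, 0.0)

def fitness(assign):
    """Total centred link weight between adjectives in the same cluster."""
    same = assign[:, None] == assign[None, :]
    return A[same].sum()

K, POP, GENS = 2, 40, 200    # clusters, population size, generations
pop = rng.integers(0, K, size=(POP, 8))
for _ in range(GENS):
    fit = np.array([fitness(c) for c in pop])
    pop = pop[np.argsort(fit)[::-1]]        # rank by fitness, best first
    elite = pop[: POP // 2]                 # selection: keep the top half
    children = elite[rng.integers(0, len(elite), POP - len(elite))].copy()
    mates = elite[rng.integers(0, len(elite), len(children))]
    cut = rng.integers(1, 8, len(children))
    for j, c in enumerate(cut):             # one-point crossover
        children[j, c:] = mates[j, c:]
    mut = rng.random(children.shape) < 0.05 # per-gene mutation
    children[mut] = rng.integers(0, K, mut.sum())
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(c) for c in pop])]
```

With the planted block structure, the best chromosome recovers the two adjective groups (up to a swap of cluster labels).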

14. Exploring vortex dynamics in the presence of dissipation: Analytical and numerical results

Yan, D.; Carretero-González, R.; Frantzeskakis, D. J.; Kevrekidis, P. G.; Proukakis, N. P.; Spirn, D.

2014-04-01

In this paper, we examine the dynamical properties of vortices in atomic Bose-Einstein condensates in the presence of phenomenological dissipation, used as a basic model for the effect of finite temperatures. In the context of this so-called dissipative Gross-Pitaevskii model, we derive analytical results for the motion of single vortices and, importantly, for vortex dipoles, which have become very relevant experimentally. Our analytical results are shown to compare favorably to the full numerical solution of the dissipative Gross-Pitaevskii equation where appropriate. We also present results on the stability of vortices and vortex dipoles, revealing good agreement between numerical and analytical results for the internal excitation eigenfrequencies, which extends even beyond the regime of validity of this equation for cold atoms.

15. Improving the trust in results of numerical simulations and scientific data analytics

SciTech Connect

Cappello, Franck; Constantinescu, Emil; Hovland, Paul; Peterka, Tom; Phillips, Carolyn; Snir, Marc; Wild, Stefan

2015-04-30

This white paper investigates several key aspects of the trust that a user can give to the results of numerical simulations and scientific data analytics. In this document, the notion of trust is related to the integrity of numerical simulations and data analytics applications. This white paper complements the DOE ASCR report on Cybersecurity for Scientific Computing Integrity by (1) exploring the sources of trust loss; (2) reviewing the definitions of trust in several areas; (3) providing numerous cases of result alteration, some of them leading to catastrophic failures; (4) examining the current notion of trust in numerical simulation and scientific data analytics; (5) providing a gap analysis; and (6) suggesting two important research directions and their respective research topics. To simplify the presentation without loss of generality, we consider that trust in results can be lost (or the results' integrity impaired) because of any form of corruption happening during the execution of the numerical simulation or the data analytics application. In general, the sources of such corruption are threefold: errors, bugs, and attacks. Current applications are already using techniques to deal with different types of corruption. However, not all potential corruptions are covered by these techniques. We firmly believe that the current level of trust that a user has in the results is at least partially founded on ignorance of this issue or the hope that no undetected corruptions will occur during the execution. This white paper explores the notion of trust and suggests recommendations for developing a more scientifically grounded notion of trust in numerical simulation and scientific data analytics. We first formulate the problem and show that it goes beyond previous questions regarding the quality of results such as V&V, uncertainty quantification, and data assimilation. We then explore the complexity of this difficult problem, and we sketch complementary general

16. Research on Numerical Algorithms for the Three Dimensional Navier-Stokes Equations. I. Accuracy, Convergence & Efficiency.

DTIC Science & Technology

1979-09-01

ithm for Computational Fluid Dynamics," Ph.D. Dissertation, Univ. of Tennessee, Report ESM 78-1, 1978. 18. Thames, F. C., Thompson, J. F., and Mastin, C. W., "Numerical Solution of the Navier-Stokes Equations for Arbitrary Two-Dimensional Airfoils," NASA SP-347, 1975. 19. Thompson, J. F., Thames...Number of Arbitrary Two-Dimensional Bodies," NASA CR-2729, 1976. 20. Thames, F. C., Thompson, J. F., Mastin, C. W., and Walker, R. L., "Numerical

17. Flight test results of failure detection and isolation algorithms for a redundant strapdown inertial measurement unit

NASA Technical Reports Server (NTRS)

Morrell, F. R.; Motyka, P. R.; Bailey, M. L.

1990-01-01

Flight test results for two sensor fault-tolerant algorithms developed for a redundant strapdown inertial measurement unit are presented. The inertial measurement unit (IMU) consists of four two-degrees-of-freedom gyros and accelerometers mounted on the faces of a semi-octahedron. Fault tolerance is provided by edge vector test and generalized likelihood test algorithms, each of which can provide dual fail-operational capability for the IMU. To detect the wide range of failure magnitudes in inertial sensors, which provide flight-crucial information for flight control and navigation, failure detection and isolation are developed in terms of a multilevel structure. Threshold compensation techniques, developed to enhance the sensitivity of the failure detection process to navigation-level failures, are presented. Four flight tests were conducted in a commercial transport-type environment to compare and determine the performance of the failure detection and isolation methods. Dual flight processors enabled concurrent tests for the algorithms. Failure signals such as hard-over, null, or bias shift were added to the sensor outputs as simple or multiple failures during the flights. Both algorithms provided timely detection and isolation of flight control level failures. The generalized likelihood test algorithm provided more timely detection of low-level sensor failures, but it produced one false isolation. Both algorithms demonstrated the capability to provide dual fail-operational performance for the skewed array of inertial sensors.

18. Repair, Evaluation, Maintenance, and Rehabilitation Research Program: Explicit Numerical Algorithm for Modeling Incompressible Approach Flow

DTIC Science & Technology

1989-03-01

by Colorado State University, Fort Collins, CO, for US Army Engineer Waterways Experiment Station, Vicksburg, MS. Thompson, J. F. 1983 (Mar). "A...Waterways Experiment Station, Vicksburg, MS. Thompson, J. F., and Bernard, R. S. 1985 (Aug). "WESSEL: Code for Numerical Simulation of Two-Dimensional Time

19. AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)

EPA Science Inventory

A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...

20. Parametric Evaluation of Absorption Losses and Comparison of Numerical Results to Boeing 707 Aircraft Experimental HIRF Results

Kitaygorsky, J.; Amburgey, C.; Elliott, J. R.; Fisher, R.; Perala, R. A.

A broadband (100 MHz-1.2 GHz) plane wave electric field source was used to evaluate electric field penetration inside a simplified Boeing 707 aircraft model with a finite-difference time-domain (FDTD) method using EMA3D. The role of absorption losses inside the simplified aircraft was investigated. It was found that, in this frequency range, none of the cavities inside the Boeing 707 model are truly reverberant when frequency stirring is applied, and a purely statistical electromagnetics approach cannot be used to predict or analyze the field penetration or shielding effectiveness (SE). Thus it was our goal to attempt to understand the nature of losses in such a quasi-statistical environment by adding various numbers of absorbing objects inside the simplified aircraft and evaluating the SE, decay-time constant τ, and quality factor Q. We then compare our numerical results with experimental results obtained by D. Mark Johnson et al. on a decommissioned Boeing 707 aircraft.
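The abstract above relates the shielding effectiveness to a decay-time constant τ and quality factor Q. As a hedged illustration of how these quantities are connected, the snippet below fits τ from a synthetic cavity power decay and converts it to Q = 2πfτ; the frequency and decay time are assumed values, not the paper's measurements.

```python
import numpy as np

# Hedged sketch: recovering a decay-time constant tau from a synthetic
# power-decay record and converting it to a quality factor via Q = 2*pi*f*tau.
# The values below are illustrative, not data from the Boeing 707 study.

f = 1.0e9                 # 1 GHz, inside the 100 MHz - 1.2 GHz band studied
tau_true = 100e-9         # assumed 100 ns decay-time constant

t = np.linspace(0, 500e-9, 200)
power = np.exp(-t / tau_true)          # normalized cavity power decay

# Fit the slope of log(power) to estimate tau
slope, _ = np.polyfit(t, np.log(power), 1)
tau_est = -1.0 / slope
Q = 2 * np.pi * f * tau_est            # quality factor, ~628 here
```

In practice the decay record would come from the FDTD simulation or the measured HIRF response rather than a clean exponential, and the fit window must exclude the driven portion of the signal.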

1. Impulse propagation over a complex site: a comparison of experimental results and numerical predictions.

PubMed

Dragna, Didier; Blanc-Benon, Philippe; Poisson, Franck

2014-03-01

Results from outdoor acoustic measurements performed at a railway site near Reims in France in May 2010 are compared to those obtained from a finite-difference time-domain solver of the linearized Euler equations. During the experiments, the ground profile and the different ground surface impedances were determined. Meteorological measurements were also performed to deduce mean vertical profiles of wind and temperature. An alarm pistol was used as a source of impulse signals, and three microphones were located along a propagation path. The various measured parameters are introduced as input data into the numerical solver. In the frequency domain, the numerical results are in good accordance with the measurements up to a frequency of 2 kHz. In the time domain, except for a time shift, the predicted waveforms match the measured waveforms with close agreement.

2. Forecasting Energy Market Contracts by Ambit Processes: Empirical Study and Numerical Results

PubMed Central

Di Persio, Luca; Marchesan, Michele

2014-01-01

In the present paper we exploit the theory of ambit processes to develop a model which is able to effectively forecast prices of forward contracts written on the Italian energy market. Both short-term and medium-term scenarios are considered and proper calibration procedures as well as related numerical results are provided showing a high grade of accuracy in the obtained approximations when compared with empirical time series of interest. PMID:27437500

3. Landau-Zener transitions in a dissipative environment: numerically exact results.

PubMed

Nalbach, P; Thorwart, M

2009-11-27

We study Landau-Zener transitions in a dissipative environment by means of the numerically exact quasiadiabatic propagator path integral. It allows us to cover the full range of the involved parameters. We discover a nonmonotonic dependence of the transition probability on the sweep velocity, which is explained in terms of a simple phenomenological model. This feature, not captured by perturbative approaches, results from a nontrivial competition between relaxation and the external sweep.
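As context for the sweep-velocity dependence, the coherent (zero-dissipation) baseline can be sketched numerically; the paper's QUAPI machinery is not reproduced here. Assuming the standard Landau-Zener Hamiltonian H(t) = (vt/2)σ_z + (Δ/2)σ_x with ħ = 1, an RK4 integration of the Schrödinger equation recovers the textbook survival probability exp(−πΔ²/2v), which depends monotonically on the sweep velocity v — the nonmonotonicity reported above is an effect of the environment.

```python
import numpy as np

# Coherent baseline for the Landau-Zener sweep: integrate the closed-system
# Schroedinger equation with RK4 and compare the survival probability of the
# initial diabatic state with the closed-form exp(-pi*Delta^2 / (2*v)).
# Parameters are illustrative assumptions (hbar = 1).

v, Delta = 1.0, 0.5
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def H(t):
    return 0.5 * v * t * sz + 0.5 * Delta * sx

def rhs(t, psi):
    return -1j * (H(t) @ psi)          # i d(psi)/dt = H psi

t, T, dt = -20.0, 20.0, 0.002
psi = np.array([1.0, 0.0], dtype=complex)   # diabatic state at t -> -inf
while t < T - 1e-12:                        # classic RK4 stepping
    k1 = rhs(t, psi)
    k2 = rhs(t + dt / 2, psi + dt / 2 * k1)
    k3 = rhs(t + dt / 2, psi + dt / 2 * k2)
    k4 = rhs(t + dt, psi + dt * k3)
    psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

p_stay = abs(psi[0]) ** 2                   # survival probability
p_lz = np.exp(-np.pi * Delta**2 / (2 * v))  # Landau-Zener formula, ~0.675
```

The finite integration window leaves small oscillatory corrections around the asymptotic value, so the agreement is good but not exact.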

4. Macroscopic laws for immiscible two-phase flow in porous media: Results from numerical experiments

Rothman, Daniel H.

1990-06-01

Flow through porous media may be described at either of two length scales. At the scale of a single pore, fluids flow according to the Navier-Stokes equations and the appropriate boundary conditions. At a larger, volume-averaged scale, the flow is usually thought to obey a linear Darcy law relating flow rates to pressure gradients and body forces via phenomenological permeability coefficients. Aside from the value of the permeability coefficient, the slow flow of a single fluid in a porous medium is well-understood within this framework. The situation is considerably different, however, for the simultaneous flow of two or more fluids: not only are the phenomenological coefficients poorly understood, but the form of the macroscopic laws themselves is subject to question. I describe a numerical study of immiscible two-phase flow in an idealized two-dimensional porous medium constructed at the pore scale. Results show that the macroscopic flow is a nonlinear function of the applied forces for sufficiently low levels of forcing, but linear thereafter. The crossover, which is not predicted by conventional models, occurs when viscous forces begin to dominate capillary forces; i.e., at a sufficiently high capillary number. In the linear regime, the flow may be described by the linear phenomenological law u_i = Σ_j L_ij f_j, where the flow rate u_i of the ith fluid is related to the force f_j applied to the jth fluid by the matrix of phenomenological coefficients L_ij, which depends on the relative concentrations of the two fluids. The diagonal terms are proportional to quantities commonly referred to as "relative permeabilities." The cross terms represent viscous coupling between the two fluids; they are conventionally assumed to be negligible and require special experimental procedures to observe in a laboratory. In contrast, in this numerical study the cross terms are straightforward to measure and are found to be of significant size. The cross terms are additionally observed to
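The linear law u_i = Σ_j L_ij f_j lends itself to a short illustration. The sketch below, using an arbitrary assumed matrix rather than the study's data, shows how L_ij (cross terms included) can be recovered from force/flow-rate samples by least squares:

```python
import numpy as np

# Illustrative sketch: recovering the phenomenological matrix L_ij,
# including the viscous cross-coupling terms, from simulated
# force/flow-rate pairs. The matrix below is an arbitrary assumption.

L_true = np.array([[2.0, 0.4],
                   [0.4, 1.5]])            # symmetric, sizable cross terms

rng = np.random.default_rng(1)
F = rng.uniform(-1, 1, size=(50, 2))       # applied body forces f_j
U = F @ L_true.T                           # flow rates u_i = sum_j L_ij f_j

# Each row satisfies u = L f, so L^T is the least-squares solution of F X = U
L_est = np.linalg.lstsq(F, U, rcond=None)[0].T
```

With noiseless samples the fit is exact; in the actual simulations, measuring the small cross terms is what requires the averaging over many forcing configurations.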

5. Improved FFT-based numerical inversion of Laplace transforms via fast Hartley transform algorithm

NASA Technical Reports Server (NTRS)

Hwang, Chyi; Lu, Ming-Jeng; Shieh, Leang S.

1991-01-01

The disadvantages of numerical inversion of the Laplace transform via the conventional fast Fourier transform (FFT) are identified and an improved method is presented to remedy them. The improved method is based on introducing a new integration step length Δω = π/(mT) for the trapezoidal-rule approximation of the Bromwich integral, in which a new parameter, m, is introduced for controlling the accuracy of the numerical integration. Naturally, this method leads to multiple sets of complex FFT computations. A new inversion formula is derived such that N equally spaced samples of the inverse Laplace transform function can be obtained by (m/2) + 1 sets of N-point complex FFT computations or by m sets of real fast Hartley transform (FHT) computations.
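Without the FFT/FHT acceleration that the paper develops, the underlying quadrature can be sketched directly. The snippet below applies the trapezoidal rule to the Bromwich integral and checks it against a transform with a known inverse; the contour abscissa, step, and truncation values are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the underlying quadrature: trapezoidal-rule approximation
# of the Bromwich integral for a real-valued original,
#   f(t) = (e^(sigma*t)/pi) * Integral_0^inf Re[F(sigma + i*w) e^(i*w*t)] dw,
# evaluated by direct summation. The paper's contribution - recasting this
# sum as (m/2)+1 complex FFTs or m real FHTs - is not reproduced here.

def bromwich_trapezoid(F, t, sigma=1.0, dw=0.05, wmax=200.0):
    """Invert the Laplace transform F(s) at the times in array t."""
    w = np.arange(0.0, wmax, dw)
    Fw = F(sigma + 1j * w)                       # samples along the contour
    kernel = np.real(Fw[None, :] * np.exp(1j * np.outer(t, w)))
    kernel[:, 0] *= 0.5                          # trapezoidal end weight
    return np.exp(sigma * t) / np.pi * dw * kernel.sum(axis=1)

# Check against a transform with a known inverse: F(s) = 1/(s+1) <-> e^(-t)
t = np.array([0.5, 1.0, 2.0])
f_num = bromwich_trapezoid(lambda s: 1.0 / (s + 1.0), t)
```

The e^(σt) factor amplifies quadrature error at large t, which is exactly the accuracy issue the paper's step-length parameter m is designed to control.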

6. Respiratory rate detection algorithm based on RGB-D camera: theoretical background and experimental results.

PubMed

Benetazzo, Flavia; Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro

2014-09-01

Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning if it is due to the respiratory act or due to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, the respiratory rate measurements coming from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, with five different persons sat in front of the camera. The first test aimed to choose the suitable sampling frequency. The second test was conducted to compare the performances of the proposed system with respect to the gold standard in ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm performances under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and it is a viable alternative to monitor the respiratory activity of a person without using invasive sensors.

7. Respiratory rate detection algorithm based on RGB-D camera: theoretical background and experimental results

PubMed Central

Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro

2014-01-01

Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning if it is due to the respiratory act or due to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, the respiratory rate measurements coming from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, with five different persons sat in front of the camera. The first test aimed to choose the suitable sampling frequency. The second test was conducted to compare the performances of the proposed system with respect to the gold standard in ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm performances under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and it is a viable alternative to monitor the respiratory activity of a person without using invasive sensors. PMID:26609383
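The core of the measurement chain in entries 6-7 — extracting a breathing rate from the chest-to-camera distance — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the frame rate, breathing amplitude, and noise level are assumed values.

```python
import numpy as np

# Illustrative sketch: estimating a respiratory rate from a chest-to-camera
# distance signal by locating the dominant spectral peak in the
# physiological band. All parameters below are assumptions.

fs = 30.0                                # assumed depth-camera frame rate, Hz
t = np.arange(0, 60, 1 / fs)             # one minute of monitoring
rng = np.random.default_rng(2)

# chest distance: 1 m baseline + 5 mm breathing motion at 0.25 Hz + sensor noise
dist = (1.0 + 0.005 * np.sin(2 * np.pi * 0.25 * t)
        + 0.001 * rng.standard_normal(t.size))

spec = np.abs(np.fft.rfft(dist - dist.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs >= 0.1) & (freqs <= 0.7)   # ~6 to 42 breaths per minute
f_breath = freqs[band][np.argmax(spec[band])]
rate_bpm = 60.0 * f_breath               # breaths per minute, ~15 here
```

The band-limiting step plays the role of the algorithm's check that a distance change is due to the respiratory act rather than a gross movement of the monitored person.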

8. Comparative Evaluation of Registration Algorithms in Different Brain Databases With Varying Difficulty: Results and Insights

PubMed Central

Akbari, Hamed; Bilello, Michel; Da, Xiao; Davatzikos, Christos

2015-01-01

Evaluating various algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies evaluated image registration algorithms in specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithms seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging-related studies create the need and raise the question whether some registration algorithms can 1) generally apply to various tasks/databases posing various challenges; 2) perform consistently well, and while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms, for their generality, accuracy and robustness. We fixed their parameters at values suggested by algorithm developers as reported in the literature. We tested them in 7 databases/tasks, which present one or more of 4 commonly-encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracies were measured by (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools, public databases (whenever possible), and we fully disclose the parameter settings. We show evaluation results, and discuss the performances in light of algorithms' similarity metrics, transformation models and optimization strategies. We also discuss future directions for the algorithm development and evaluations. PMID:24951685

9. Numerical results of the shape optimization problem for the insulation barrier

Salač, Petr

2016-12-01

The contribution deals with the numerical results for the shape optimization problem of the system of mould, glass piece, plunger, insulation barrier and plunger cavity used in the glass forming industry, which was formulated in detail at AMEE'15. We used the software FreeFem++ to compute the numerical example for a real vase made from lead crystal glassware, of height 267 mm and mass 1.55 kg. The plunger and the mould were made from steel; the insulation barrier was made from Murpec with thermal conductivity k = 2.5 W/(m·K), and the heat-transfer coefficient between the mould and the environment was chosen to be α = 14 W/(m²·K). The cooling was implemented by a flow V = 10 l/min of water with a temperature of 15°C at the entrance and 100°C at the exit. The results of the numerical optimization to the required target temperature of 800°C on the outward plunger surface, together with the distribution of temperatures on the interface between the plunger and the heat source before and after the optimization process, are presented.

10. Properties of the numerical algorithms for problems of quantum information technologies: Benefits of deep analysis

2016-10-01

In recent years, quantum information technologies (QIT) have developed rapidly, but their implementation faces serious difficulties, some of which are challenging computational tasks. This work is devoted to a deep and broad analysis of the parallel algorithmic properties of such tasks. As an example we take one- and two-qubit transformations of a many-qubit quantum state, which are the most critical kernels of many important QIT applications. The analysis of the algorithms uses the methodology of the AlgoWiki project (algowiki-project.org) and consists of two parts, theoretical and experimental. The theoretical part covers features such as sequential and parallel complexity, macro structure, and the visual information graph. The experimental part was carried out on the petascale Lomonosov supercomputer (Moscow State University, Russia) and includes the analysis of locality and memory access, scalability, and a set of more specific dynamic characteristics of the implementation. This approach allowed us to identify bottlenecks and generate ideas for efficiency improvement.
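The one-qubit kernel the analysis focuses on can be stated compactly. The sketch below is a generic textbook formulation, not the AlgoWiki reference implementation: the n-qubit state vector is viewed as a rank-n tensor of shape 2×2×…×2, and the 2×2 gate contracts with the target axis.

```python
import numpy as np

# Generic sketch of the one-qubit kernel: applying a 2x2 gate to one qubit
# of an n-qubit state vector via tensor contraction. Qubit 0 is taken as
# the most significant bit of the state index.

def apply_1q_gate(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` of an n-qubit state vector."""
    psi = state.reshape([2] * n)                       # rank-n view
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)                  # restore qubit order
    return psi.reshape(-1)

X = np.array([[0, 1], [1, 0]], dtype=complex)          # Pauli-X (NOT)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

n = 3
psi0 = np.zeros(2**n, dtype=complex)
psi0[0] = 1.0                                          # |000>
psi1 = apply_1q_gate(psi0, X, 0, n)                    # -> |100>
psi2 = apply_1q_gate(psi1, H, 2, n)                    # -> (|100>+|101>)/sqrt(2)
```

The memory-access pattern of this contraction — strided reads whose stride depends on the target qubit — is precisely what drives the locality and scalability behavior studied on the supercomputer.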

11. Temperature Fields in Soft Tissue during LPUS Treatment: Numerical Prediction and Experiment Results

SciTech Connect

Kujawska, Tamara; Wojcik, Janusz; Nowicki, Andrzej

2010-03-09

Recent research has shown that beneficial therapeutic effects in soft tissues can be induced by the low power ultrasound (LPUS). For example, increasing of cells immunity to stress (among others thermal stress) can be obtained through the enhanced heat shock proteins (Hsp) expression induced by the low intensity ultrasound. The possibility to control the Hsp expression enhancement in soft tissues in vivo stimulated by ultrasound can be the potential new therapeutic approach to the neurodegenerative diseases which utilizes the known feature of cells to increase their immunity to stresses through the Hsp expression enhancement. The controlling of the Hsp expression enhancement by adjusting of exposure level to ultrasound energy would allow to evaluate and optimize the ultrasound-mediated treatment efficiency. Ultrasonic regimes are controlled by adjusting the pulsed ultrasound waves intensity, frequency, duration, duty cycle and exposure time. Our objective was to develop the numerical model capable of predicting in space and time temperature fields induced by a circular focused transducer generating tone bursts in multilayer nonlinear attenuating media and to compare the numerically calculated results with the experimental data in vitro. The acoustic pressure field in multilayer biological media was calculated using our original numerical solver. For prediction of temperature fields the Pennes' bio-heat transfer equation was employed. Temperature field measurements in vitro were carried out in a fresh rat liver using the 15 mm diameter, 25 mm focal length and 2 MHz central frequency transducer generating tone bursts with the spatial peak temporal average acoustic intensity varied between 0.325 and 1.95 W/cm2, duration varied from 20 to 500 cycles at the same 20% duty cycle and the exposure time varied up to 20 minutes. The measurement data were compared with numerical simulation results obtained under experimental boundary conditions. Good agreement between

12. Temperature Fields in Soft Tissue during LPUS Treatment: Numerical Prediction and Experiment Results

Kujawska, Tamara; Wójcik, Janusz; Nowicki, Andrzej

2010-03-01

Recent research has shown that beneficial therapeutic effects in soft tissues can be induced by the low power ultrasound (LPUS). For example, increasing of cells immunity to stress (among others thermal stress) can be obtained through the enhanced heat shock proteins (Hsp) expression induced by the low intensity ultrasound. The possibility to control the Hsp expression enhancement in soft tissues in vivo stimulated by ultrasound can be the potential new therapeutic approach to the neurodegenerative diseases which utilizes the known feature of cells to increase their immunity to stresses through the Hsp expression enhancement. The controlling of the Hsp expression enhancement by adjusting of exposure level to ultrasound energy would allow to evaluate and optimize the ultrasound-mediated treatment efficiency. Ultrasonic regimes are controlled by adjusting the pulsed ultrasound waves intensity, frequency, duration, duty cycle and exposure time. Our objective was to develop the numerical model capable of predicting in space and time temperature fields induced by a circular focused transducer generating tone bursts in multilayer nonlinear attenuating media and to compare the numerically calculated results with the experimental data in vitro. The acoustic pressure field in multilayer biological media was calculated using our original numerical solver. For prediction of temperature fields the Pennes' bio-heat transfer equation was employed. Temperature field measurements in vitro were carried out in a fresh rat liver using the 15 mm diameter, 25 mm focal length and 2 MHz central frequency transducer generating tone bursts with the spatial peak temporal average acoustic intensity varied between 0.325 and 1.95 W/cm2, duration varied from 20 to 500 cycles at the same 20% duty cycle and the exposure time varied up to 20 minutes. The measurement data were compared with numerical simulation results obtained under experimental boundary conditions. Good agreement between the
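The temperature-prediction step in entries 11-12 rests on the Pennes bio-heat equation, ρc ∂T/∂t = k ∂²T/∂x² + w_b c_b (T_a − T) + Q(x). A hedged 1D explicit finite-difference sketch is given below; the tissue parameters and the Gaussian heat source are illustrative assumptions, not the paper's values or its full 3D acoustic-thermal model.

```python
import numpy as np

# Hedged 1D sketch of the Pennes bio-heat equation,
#   rho*c * dT/dt = k * d2T/dx2 + w_b*c_b * (T_a - T) + Q(x),
# advanced with an explicit finite-difference step. All parameter values
# are illustrative assumptions.

k = 0.5          # tissue thermal conductivity, W/(m K)
rho_c = 3.6e6    # volumetric heat capacity rho*c, J/(m^3 K)
wb_cb = 2000.0   # perfusion term w_b*c_b, W/(m^3 K)
Ta = 37.0        # arterial (body) temperature, deg C

nx, dx = 101, 1e-3                    # 101 points, 1 mm spacing: 0.1 m domain
x = np.arange(nx) * dx
Q = 5e5 * np.exp(-((x - x[nx // 2]) / 0.005) ** 2)   # focal heating, W/m^3

dt = 0.5                              # s; below the limit rho_c*dx^2 / (2k)
T = np.full(nx, Ta)
for _ in range(int(120 / dt)):        # two minutes of heating
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T = T + dt / rho_c * (k * lap + wb_cb * (Ta - T) + Q)
    T[0] = T[-1] = Ta                 # boundaries held at body temperature
```

In the paper the source term Q comes from the nonlinear acoustic pressure-field solver rather than an assumed Gaussian, and the problem is axisymmetric rather than 1D.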

13. Image Artifacts Resulting from Gamma-Ray Tracking Algorithms Used with Compton Imagers

SciTech Connect

Seifert, Carolyn E.; He, Zhong

2005-10-01

For Compton imaging it is necessary to determine the sequence of gamma-ray interactions in a single detector or array of detectors. This can be done by time-of-flight measurements if the interactions are sufficiently far apart. However, in small detectors the time between interactions can be too small to measure, and other means of gamma-ray sequencing must be used. In this work, several popular sequencing algorithms are reviewed for sequences with two observed events and three or more observed events in the detector. These algorithms can result in poor imaging resolution and introduce artifacts in the backprojection images. The effects of gamma-ray tracking algorithms on Compton imaging are explored in the context of the 4π Compton imager built by the University of Michigan.

14. The design and results of an algorithm for intelligent ground vehicles

Duncan, Matthew; Milam, Justin; Tote, Caleb; Riggins, Robert N.

2010-01-01

This paper addresses the design, design method, test platform, and test results of an algorithm used in autonomous navigation for intelligent vehicles. The Bluefield State College (BSC) team created this algorithm for its 2009 Intelligent Ground Vehicle Competition (IGVC) robot called Anassa V. The BSC robotics team is comprised of undergraduate computer science, engineering technology, marketing students, and one robotics faculty advisor. The team has participated in IGVC since the year 2000. A major part of the design process that the BSC team uses each year for IGVC is a fully documented "Post-IGVC Analysis." Over the nine years since 2000, the lessons the students learned from these analyses have resulted in an ever-improving, highly successful autonomous algorithm. The algorithm employed in Anassa V is a culmination of past successes and new ideas, resulting in Anassa V earning several excellent IGVC 2009 performance awards, including third place overall. The paper will discuss all aspects of the design of this autonomous robotic system, beginning with the design process and ending with test results for both simulation and real environments.

15. Electrostatic modes in dense dusty plasmas with high fugacity: Numerical results

Rao, N. N.

2000-08-01

The existence of ultra low-frequency wave modes in dusty plasmas has been investigated over a wide range of dust fugacity [defined by f ≡ 4π n_d0 λ_D² R, where n_d0 is the dust number density, λ_D is the plasma Debye length, and R is the grain size (radius)] and the grain charging frequency (ω_1) by numerically solving the dispersion relation obtained from the kinetic (Vlasov) theory. A detailed comparison between the numerical and the analytical results applicable for the tenuous (low fugacity, f ≪ 1), the dilute (medium fugacity, f ~ 1), and the dense (high fugacity, f ≫ 1) regimes has been carried out. In the long wavelength limit and for frequencies ω ≪ ω_1, the dispersion curves obtained from the numerical solutions of the real as well as the complex (kinetic) dispersion relations agree, both qualitatively and quantitatively, with the analytical expressions derived from the fluid and the kinetic theories, and are thus identified with the ultra low-frequency electrostatic dust modes, namely, the dust-acoustic wave (DAW), the dust charge-density wave (DCDW) and the dust-Coulomb wave (DCW) discussed earlier [N. N. Rao, Phys. Plasmas 6, 4414 (1999); 7, 795 (2000)]. In particular, the analytical scaling between the phase speeds of the DCWs and the DAWs predicted from theoretical considerations, namely, (ω/k)_DCW = (ω/k)_DAW / √(fδ) (where δ is the ratio of the charging frequencies), is in excellent agreement with the numerical results. A simple physical picture of the DCWs has been proposed by defining an effective pressure called "Coulomb pressure" as P_C ≡ n_d0 q_d0² / R, where q_d0 is the grain surface charge. Accordingly, the DCW dispersion relation is given, in the lowest order, by (ω/k)_DCW = √(P_C / (ρ_d δ)), where ρ_d ≡ n_d0 m_d is the dust mass density. Thus, the DCWs, which are driven by the Coulomb pressure, can be considered as the electrostatic analogue of the hydromagnetic (Alfvén or magnetoacoustic) waves, which are driven by the magnetic field pressure. For the frequency

16. Network model to study physiological processes of hypobaric decompression sickness: New numerical results

Zueco, Joaquín; López-González, Luis María

2016-04-01

We have studied decompression processes arising from the pressure changes that take place in blood and tissues, using a numerical technique based on an electrical analogy of the parameters involved in the problem. The particular problem analyzed is the dynamic behaviour of the extravascular bubbles formed in the intercellular cavities of a hypothetical tissue undergoing decompression. Numerical solutions are given for a system of equations simulating the gas exchange of bubbles after decompression, with particular attention paid to the effect of bubble size, nitrogen tension, nitrogen diffusivity in the intercellular fluid and in the tissue cell layer in the radial direction, nitrogen solubility, ambient pressure and specific blood flow through the tissue on the different molar diffusion fluxes of nitrogen per unit time (through the bubble surface, between the intercellular fluid layer and blood, and between the intercellular fluid layer and the tissue cell layer). The system of nonlinear equations is solved using the Network Simulation Method, in which the electrical analogy is applied to convert these equations into a network-electrical model solved by a computer code (the electric circuit simulator Pspice). In this paper, new numerical results are provided, together with a network model improved through interdisciplinary electrical analogies.
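
The electrical-analogy idea behind the Network Simulation Method can be sketched on a single gas-exchange balance: a tension relaxation C·dP/dt = (P_blood − P)/R maps onto an RC circuit (tension ~ voltage, gas flux ~ current, storage ~ capacitance, inverse diffusive conductance ~ resistance). The values below are illustrative assumptions, and the "circuit" is stepped in Python rather than Pspice:

```python
import math

# Hypothetical RC-analogy parameters (arbitrary units); illustrative only.
R = 2.0        # "resistance" to nitrogen transfer
C = 1.5        # "capacitance" (gas storage of the tissue compartment)
P_blood = 0.8  # nitrogen tension in blood after decompression
P = 3.0        # initial tissue nitrogen tension

dt, t_end = 0.01, 10.0
for _ in range(int(t_end / dt)):
    P += dt * (P_blood - P) / (R * C)   # forward-Euler "circuit" update

# The single-compartment circuit has a closed-form solution to compare against
exact = P_blood + (3.0 - P_blood) * math.exp(-t_end / (R * C))
print(f"P(10) numeric = {P:.4f}, analytic = {exact:.4f}")
```

The full model in the paper couples many such nodes (bubble surface, intercellular fluid, cell layer, blood) into one network; this sketch only shows the mapping for a single branch.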

17. Algorithm for direct numerical simulation of emulsion flow through a granular material

Zinchenko, Alexander Z.; Davis, Robert H.

2008-08-01

A multipole-accelerated 3D boundary-integral algorithm capable of modelling an emulsion flow through a granular material by direct multiparticle-multidrop simulations in a periodic box is developed and tested. The particles form a random arrangement at high volume fraction rigidly held in space (including the case of an equilibrium packing in mechanical contact). Deformable drops (with non-deformed diameter comparable with the particle size) squeeze between the particles under a specified average pressure gradient. The algorithm includes recent boundary-integral desingularization tools especially important for drop-solid and drop-drop interactions, the Hebeker representation for solid particle contributions, and unstructured surface triangulations with fixed topology. Multipole acceleration, with two levels of mesh node decomposition (entire drop/solid surfaces and "patches"), is a significant improvement over schemes used in previous, purely multidrop simulations; it remains efficient at very high resolutions (~10⁴-10⁵ triangular elements per surface) and has no lower limitation on the number of particles or drops. Such resolutions, as well as ~10⁴ time steps with an iterative solution at each step, are necessary to alleviate lubrication difficulties, especially for near-critical squeezing conditions, for both contrast and matching viscosities. Examples are shown for squeezing of 25-40 drops through an array of 9-14 solids, with a total volume fraction of 70% for particles and drops. The flow rates for the drop and continuous phases are calculated. Extensive convergence testing with respect to program parameters (triangulation, multipole truncation, etc.) is made.

18. Some analytical and numerical approaches to understanding trap counts resulting from pest insect immigration.

PubMed

Bearup, Daniel; Petrovskaya, Natalia; Petrovskii, Sergei

2015-05-01

Monitoring of pest insects is an important part of integrated pest management. It aims to provide information about pest insect abundance at a given location. This includes data collection, usually using traps, and its subsequent analysis and/or interpretation. However, interpretation of trap counts (the number of insects caught over a fixed time) remains a challenging problem. First, an increase in either the population density or insect activity can result in a similar increase in the number of insects trapped (the so-called "activity-density" problem). Second, a genuine increase of the local population density can be attributed to qualitatively different ecological mechanisms such as multiplication or immigration. Identification of the true factor causing an increase in trap counts is important, as different mechanisms require different control strategies. In this paper, we consider a mean-field mathematical model of insect trapping based on the diffusion equation. Although the diffusion equation is a well-studied model, its analytical solution in closed form is available only for a few special cases, whilst in the more general case the problem has to be solved numerically. We choose finite differences as the baseline numerical method and show that numerical solution of the problem, especially in the realistic 2D case, is not at all straightforward, as it requires a sufficiently accurate approximation of the diffusion fluxes. Once the numerical method is justified and tested, we apply it to the corresponding boundary problem, where different types of boundary forcing describe different scenarios of pest insect immigration, and reveal the corresponding patterns in the trap count growth.
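
A minimal sketch of the baseline approach (not the authors' scheme): explicit finite differences for the 1-D diffusion equation u_t = D u_xx on [0, L], with an absorbing boundary at x = 0 representing the trap and a reflecting boundary at x = L; the trap count is the mass removed through the absorbing end. All parameter values are illustrative assumptions.

```python
# 1-D FTCS diffusion with an absorbing "trap" boundary; illustrative values.
D, L, nx = 1.0, 10.0, 101
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D          # respects the FTCS stability limit D*dt/dx^2 <= 1/2
u = [1.0] * nx                # uniform initial population density
u[0] = 0.0                    # trap: density held at zero

mass0 = sum(u) * dx
steps = 2000
for _ in range(steps):
    new = u[:]
    for i in range(1, nx - 1):
        new[i] = u[i] + D * dt / dx**2 * (u[i+1] - 2.0*u[i] + u[i-1])
    new[nx - 1] = new[nx - 2]  # zero-flux (reflecting) outer boundary
    new[0] = 0.0               # absorbing trap boundary
    u = new

# Trap count = mass removed from the domain through the absorbing end
trap_count = mass0 - sum(u) * dx
print(f"trap count after {steps} steps: {trap_count:.4f}")
```

The 2D case discussed in the paper additionally demands an accurate approximation of the boundary diffusion fluxes; this 1-D sketch only illustrates the trap-count bookkeeping.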

19. Laboratory simulations of lidar returns from clouds - Experimental and numerical results

Zaccanti, Giovanni; Bruscaglioni, Piero; Gurioli, Massimo; Sansoni, Paola

1993-03-01

The experimental results of laboratory simulations of lidar returns from clouds are presented. Measurements were carried out on laboratory-scaled cloud models by using a picosecond laser and a streak-camera system. The turbid structures simulating clouds were suspensions of polystyrene spheres in water. The geometrical situation was similar to that of an actual lidar sounding a cloud 1000 m distant and with a thickness of 300 m. Measurements were repeated for different concentrations and different sizes of spheres. The results show how the effect of multiple scattering depends on the scattering coefficient and on the phase function of the diffusers. The depolarization introduced by multiple scattering was also investigated. The results were also compared with numerical results obtained by Monte Carlo simulations. Substantially good agreement between numerical and experimental results was found. The measurements showed the adequacy of modern electro-optical systems to study the features of multiple-scattering effects on lidar echoes from atmosphere or ocean by means of experiments on well-controlled laboratory-scaled models. This adequacy provides the possibility of studying the influence of different effects in the laboratory in well-controlled situations.

20. Laboratory simulations of lidar returns from clouds: experimental and numerical results.

PubMed

Zaccanti, G; Bruscaglioni, P; Gurioli, M; Sansoni, P

1993-03-20

The experimental results of laboratory simulations of lidar returns from clouds are presented. Measurements were carried out on laboratory-scaled cloud models by using a picosecond laser and a streak-camera system. The turbid structures simulating clouds were suspensions of polystyrene spheres in water. The geometrical situation was similar to that of an actual lidar sounding a cloud 1000 m distant and with a thickness of 300 m. Measurements were repeated for different concentrations and different sizes of spheres. The results show how the effect of multiple scattering depends on the scattering coefficient and on the phase function of the diffusers. The depolarization introduced by multiple scattering was also investigated. The results were also compared with numerical results obtained by Monte Carlo simulations. Substantially good agreement between numerical and experimental results was found. The measurements showed the adequacy of modern electro-optical systems to study the features of multiple-scattering effects on lidar echoes from atmosphere or ocean by means of experiments on well-controlled laboratory-scaled models. This adequacy provides the possibility of studying the influence of different effects in the laboratory in well-controlled situations.
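
A highly simplified Monte Carlo sketch of the multiple-scattering effect studied in both records above: photons random-walk through a purely scattering slab of optical thickness τ, and the backscattered fraction grows with τ. This uses an isotropic phase function instead of the Mie phase function relevant to polystyrene spheres, and all parameters are assumptions.

```python
import math
import random

random.seed(1)

def simulate(tau, n_photons=20000):
    """Fraction of photons leaving back through the entry face of the slab."""
    back = 0
    for _ in range(n_photons):
        z, mu = 0.0, 1.0                          # depth (optical units), direction cosine
        for _ in range(1000):                     # cap on scattering events
            z += mu * (-math.log(random.random()))  # exponential free path
            if z < 0.0:
                back += 1                          # backscattered out of the slab
                break
            if z > tau:
                break                              # transmitted out of the slab
            mu = 2.0 * random.random() - 1.0       # isotropic scattering
    return back / n_photons

print(f"backscattered fraction, tau=1: {simulate(1.0):.3f}")
print(f"backscattered fraction, tau=5: {simulate(5.0):.3f}")
```

A lidar-return simulation of the kind compared against in the paper would additionally track path length (time gating), receiver geometry and polarization; this sketch only shows the dependence of multiply scattered return on optical depth.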

1. Heat Transfer Enhancement for Finned-Tube Heat Exchangers with Vortex Generators: Experimental and Numerical Results

SciTech Connect

O'Brien, James Edward; Sohal, Manohar Singh; Huff, George Albert

2002-08-01

A combined experimental and numerical investigation is under way to study heat transfer enhancement techniques that may be applicable to large-scale air-cooled condensers such as those used in geothermal power applications. The research is focused on whether air-side heat transfer can be improved through the use of fin-surface vortex generators (winglets), while maintaining low heat exchanger pressure drop. A transient heat transfer visualization and measurement technique has been employed in order to obtain detailed distributions of local heat transfer coefficients on model fin surfaces. Pressure drop measurements have also been acquired in a separate multiple-tube row apparatus. In addition, numerical modeling techniques have been developed to allow prediction of local and average heat transfer for these low-Reynolds-number flows with and without winglets. Representative experimental and numerical results presented in this paper reveal quantitative details of local fin-surface heat transfer in the vicinity of a circular tube with a single delta winglet pair downstream of the cylinder. The winglets were triangular (delta) with a 1:2 height/length aspect ratio and a height equal to 90% of the channel height. Overall mean fin-surface Nusselt-number results indicate a significant level of heat transfer enhancement (average enhancement ratio 35%) associated with the deployment of the winglets with oval tubes. Pressure drop measurements have also been obtained for a variety of tube and winglet configurations using a single-channel flow apparatus that includes four tube rows in a staggered array. Comparisons of heat transfer and pressure drop results for the elliptical tube versus a circular tube with and without winglets are provided. Heat transfer and pressure-drop results have been obtained for flow Reynolds numbers based on channel height and mean flow velocity ranging from 700 to 6500.

2. Simulation of human atherosclerotic femoral plaque tissue: the influence of plaque material model on numerical results

PubMed Central

2015-01-01

Background Due to the limited number of experimental studies that mechanically characterise human atherosclerotic plaque tissue from the femoral arteries, a recent trend has emerged in the literature whereby one set of material data based on aortic plaque tissue is employed to numerically represent diseased femoral artery tissue. This study aims to generate novel vessel-appropriate material models for femoral plaque tissue and assess the influence of using material models based on experimental data generated from aortic plaque testing to represent diseased femoral arterial tissue. Methods Novel material models based on experimental data generated from testing of atherosclerotic femoral artery tissue are developed, and a computational analysis of the revascularisation of a quarter-model idealised diseased femoral artery from a 90% diameter stenosis to a 10% diameter stenosis is performed using these novel material models. The simulation is also performed using material models based on experimental data obtained from aortic plaque testing in order to examine the effect of employing vessel-appropriate material models versus those currently employed in the literature to represent femoral plaque tissue. Results Simulations that employ material models based on atherosclerotic aortic tissue exhibit much higher maximum principal stresses within the plaque than simulations that employ material models based on atherosclerotic femoral tissue. Specifically, employing a material model based on calcified aortic tissue, instead of one based on heavily calcified femoral tissue, to represent diseased femoral arterial vessels results in a 487-fold increase in maximum principal stress within the plaque at a depth of 0.8 mm from the lumen. Conclusions Large differences are induced on numerical results as a consequence of employing material models based on aortic plaque, in place of material models based on femoral plaque, to represent a diseased femoral vessel. Due to these large

3. Flow and transport in highly heterogeneous formations: 3. Numerical simulations and comparison with theoretical results

Janković, I.; Fiori, A.; Dagan, G.

2003-09-01

In parts 1 [, 2003] and 2 [, 2003] a multi-indicator model of heterogeneous formations is devised in order to solve flow and transport in highly heterogeneous formations. The isotropic medium is made up from circular (2-D) or spherical (3-D) inclusions of different conductivities K, submerged in a matrix of effective conductivity. This structure is different from the multi-Gaussian one, even for equal log conductivity distribution and integral scale. A snapshot of a two-dimensional plume in a highly heterogeneous medium of lognormal conductivity distribution shows that the model leads to a complex transport picture. The present study was limited, however, to investigating the statistical moments of ergodic plumes. Two approximate semianalytical solutions, based on a self-consistent model (SC) and on a first-order perturbation in the log conductivity variance (FO), are used in parts 1 and 2 in order to compute the statistical moments of flow and transport variables for a lognormal conductivity pdf. In this paper an efficient and accurate numerical procedure, based on the analytic-element method [, 1989], is used in order to validate the approximate results. The solution satisfies exactly the continuity equation and at high accuracy the continuity of heads at inclusion boundaries. The dimensionless dependent variables depend on two parameters: the volume fraction n of inclusions in the medium and the log conductivity variance σ_Y². For inclusions of uniform radius, the largest n was 0.9 (2-D) and 0.7 (3-D), whereas the largest σ_Y² was equal to 10. The SC approximation underestimates the longitudinal Eulerian velocity variance for increasing n and increasing σ_Y² in 2-D and, to a lesser extent, in 3-D, as compared to numerical results. The FO approximation overestimates these variances, and these effects are larger in the transverse direction. The longitudinal velocity pdf is highly skewed and negative velocities are present at high σ_Y², especially in 2-D. The main

4. Numerical computation of the effective-one-body potential q using self-force results

Akcay, Sarp; van de Meent, Maarten

2016-03-01

The effective-one-body theory (EOB) describes the conservative dynamics of compact binary systems in terms of an effective Hamiltonian approach. The Hamiltonian for moderately eccentric motion of two nonspinning compact objects in the extreme mass-ratio limit is given in terms of three potentials: a(v), d̄(v), q(v). By generalizing the first law of mechanics for (nonspinning) black hole binaries to eccentric orbits, [A. Le Tiec, Phys. Rev. D 92, 084021 (2015)] recently obtained new expressions for d̄(v) and q(v) in terms of quantities that can be readily computed using the gravitational self-force approach. Using these expressions we present a new computation of the EOB potential q(v) by combining results from two independent numerical self-force codes. We determine q(v) for inverse binary separations in the range 1/1200 ≤ v ≲ 1/6. Our computation thus provides the first-ever strong-field results for q(v). We also obtain d̄(v) in our entire domain to a fractional accuracy of ≳ 10⁻⁸. We find that our results are compatible with the known post-Newtonian expansions for d̄(v) and q(v) in the weak field, and agree with previous (less accurate) numerical results for d̄(v) in the strong field.

5. A Nested Genetic Algorithm for the Numerical Solution of Non-Linear Coupled Equations in Water Quality Modeling

García, Hermes A.; Guerrero-Bolaño, Francisco J.; Obregón-Neira, Nelson

2010-05-01

Due to both mathematical tractability and efficient use of computational resources, it is very common in the realm of numerical modeling in hydro-engineering to find that regular linearization techniques have been applied to the nonlinear partial differential equations obtained in environmental flow studies. Sometimes this simplification is also made along with omission of nonlinear terms involved in such equations, which in turn diminishes the performance of any implemented approach. This is the case, for example, for contaminant transport modeling in streams. Nowadays, a traditional and widely used water quality model such as QUAL2k preserves its original algorithm, which omits nonlinear terms through linearization techniques, in spite of continuous algorithmic development and computer power enhancement. For that reason, the main objective of this research was to generate a flexible tool for non-linear water quality modeling. The solution implemented here was based on two genetic algorithms, used in a nested way in order to find two different types of solution sets: the first set is composed of the concentrations of the physical-chemical variables used in the modeling approach (16 variables), which satisfies the non-linear equation system. The second set is the typical solution of the inverse problem: the parameter and constant values for the model when it is applied to a particular stream. Of a total of sixteen (16) variables, thirteen (13) were modeled using non-linear coupled equation systems and three (3) were modeled independently. The model used here required fifty (50) parameters. The nested genetic algorithm used for the numerical solution of the non-linear equation system proved to be a flexible tool for handling the intrinsic non-linearity that emerges from the interactions among the multiple variables involved in water quality studies. However, because there is a strong data limitation in
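
The inner task of such an approach, using a genetic algorithm to find variable values satisfying a non-linear coupled system, can be sketched on a toy problem. The two-equation "balance" system and the GA settings below are illustrative assumptions, not the paper's 16-variable water-quality model or its nested structure:

```python
import math
import random

random.seed(0)

def residual(v):
    """Sum of squared residuals of a hypothetical coupled non-linear system."""
    x, y = v
    r1 = x * y + math.exp(-x) - 2.0
    r2 = x**2 - y - 1.0
    return r1 * r1 + r2 * r2

# Simple real-coded GA: truncation selection, midpoint crossover, Gaussian mutation
pop = [[random.uniform(0.0, 3.0), random.uniform(0.0, 3.0)] for _ in range(60)]
for gen in range(200):
    pop.sort(key=residual)
    elite = pop[:20]
    children = []
    for _ in range(40):
        a, b = random.sample(elite, 2)
        children.append([(ai + bi) / 2.0 + random.gauss(0.0, 0.1)
                         for ai, bi in zip(a, b)])
    pop = elite + children

best = min(pop, key=residual)
print(f"best (x, y) = ({best[0]:.4f}, {best[1]:.4f}), residual = {residual(best):.2e}")
```

A nested version, as in the paper, would wrap a second GA around this one to search the parameter space of the model itself (the inverse problem), evaluating each candidate parameter set by how well the inner GA's solution fits observations.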

6. Propagation of CMEs in the interplanetary medium: Numerical and analytical results

González-Esparza, J. A.; Cantó, J.; González, R. F.; Lara, A.; Raga, A. C.

2003-08-01

We study the propagation of coronal mass ejections (CMEs) from near the Sun to 1 AU by comparing results from two different models: a 1-D, hydrodynamic, single-fluid numerical model (González-Esparza et al., 2003a) and an analytical model of the dynamical evolution of supersonic velocity fluctuations at the base of the solar wind, applied to the propagation of CMEs (Cantó et al., 2002). Both models predict that a fast CME moves initially in the inner heliosphere with a quasi-constant velocity (intermediate between the initial CME velocity and the velocity of the ambient solar wind ahead) until a 'critical distance' at which the CME begins to decelerate, approaching the ambient solar wind velocity. This critical distance depends on the characteristics of the CME (initial velocity, density and temperature) as well as on the ambient solar wind. Given typical parameters based on observations, this critical distance can vary from 0.3 AU to beyond 1 AU from the Sun. These results explain the radial evolution of the velocity of fast CMEs in the inner heliosphere inferred from interplanetary scintillation (IPS) observations (Manoharan et al., 2001, 2003; Tokumaru et al., 2003). On the other hand, the numerical results show that a fast CME and its associated interplanetary (IP) shock follow different heliocentric evolutions: the IP shock always propagates faster than its CME driver, and the latter begins to decelerate well before the shock.
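
The two-phase radial velocity profile described above can be sketched schematically: a quasi-constant intermediate speed out to a critical distance, then relaxation toward the ambient solar-wind speed. The speeds, critical distance, relaxation scale, and the choice of the arithmetic mean for the intermediate speed are all illustrative assumptions, not outputs of either model:

```python
import math

AU = 1.0
v_cme0, v_sw = 1000.0, 400.0      # initial CME and solar-wind speeds [km/s]
v_quasi = 0.5 * (v_cme0 + v_sw)   # assumed intermediate quasi-constant speed
r_c = 0.4 * AU                    # hypothetical critical distance
h = 0.3 * AU                      # hypothetical relaxation length scale

def v_cme(r):
    """Schematic CME speed [km/s] at heliocentric distance r [AU]."""
    if r <= r_c:
        return v_quasi                                   # quasi-constant phase
    return v_sw + (v_quasi - v_sw) * math.exp(-(r - r_c) / h)  # deceleration phase

for r in (0.1, 0.4, 0.7, 1.0):
    print(f"r = {r:.1f} AU: v = {v_cme(r):.0f} km/s")
```

This reproduces only the qualitative shape inferred from the IPS observations; the actual critical distance depends on the CME and ambient-wind parameters as stated above.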

7. Theoretical and numerical results on effects of attenuation on correlation functions of ambient seismic noise

Liu, Xin; Ben-Zion, Yehuda

2013-09-01

We study analytically and numerically the effects of attenuation on cross-correlation functions of ambient noise in a 2-D model with different attenuation constants between and outside a pair of stations. The attenuation is accounted for by a quality factor Q(ω) and complex phase velocity. The analytical results are derived for an isotropic far-field source distribution assuming the Fresnel approximation and mild attenuation. More general situations, including cases with non-isotropic source distributions, are examined with numerical simulations. The results show that homogeneous attenuation in the interstation region produces symmetric amplitude decay of the causal and anticausal parts of the noise cross-correlation function. The attenuation between the receivers and far-field sources generates symmetric exponential amplitude decay and may also cause asymmetric reduction of the causal/anticausal parts that increases with frequency. This frequency dependence can be used to distinguish asymmetric amplitudes due to attenuation from frequency-independent asymmetry in noise correlations generated by a non-isotropic source distribution. The attenuation both between and outside station pairs also produces phase shifts that could affect measurements of group and phase velocities. In terms of noise cross-spectra, the interstation attenuation is governed by Struve functions, while the attenuation between the far-field sources and receivers is associated with exponential decay and the imaginary part of the complex Bessel function. These results are fundamentally different from previous studies of attenuated coherency that append the Bessel function with an exponential decay that depends on the interstation distance.

8. Numerical Study of Equilibrium, Stability, and Advanced Resistive Wall Mode Feedback Algorithms on KSTAR

Katsuro-Hopkins, Oksana; Sabbagh, S. A.; Bialek, J. M.; Park, H. K.; Kim, J. Y.; You, K.-I.; Glasser, A. H.; Lao, L. L.

2007-11-01

Stability to ideal MHD kink/ballooning modes and the resistive wall mode (RWM) is investigated for the KSTAR tokamak. Free-boundary equilibria that comply with magnetic field coil current constraints are computed for monotonic and reversed shear safety factor profiles and H-mode tokamak pressure profiles. Advanced tokamak operation at moderate to low plasma internal inductance shows that a factor of two improvement in the plasma beta limit over the no-wall beta limit is possible for toroidal mode number of unity. The KSTAR conducting structure, passive stabilizers, and in-vessel control coils are modeled by the VALEN-3D code and the active RWM stabilization performance of the device is evaluated using both standard and advanced feedback algorithms. Steady-state power and voltage requirements for the system are estimated based on the expected noise on the RWM sensor signals. Using NSTX experimental RWM sensors noise data as input, a reduced VALEN state-space LQG controller is designed to realistically assess KSTAR stabilization system performance.

9. Swinging Atwood Machine: Experimental and numerical results, and a theoretical study

Pujol, O.; Pérez, J. P.; Ramis, J. P.; Simó, C.; Simon, S.; Weil, J. A.

2010-06-01

A Swinging Atwood Machine (SAM) is built and some experimental results concerning its dynamic behaviour are presented. Experiments clearly show that pulleys play a role in the motion of the pendulum, since they can rotate and have non-negligible radii and masses. Equations of motion must therefore take into account the moment of inertia of the pulleys, as well as the winding of the rope around them. Their influence is compared to previous studies. A preliminary discussion of the role of dissipation is included. The theoretical behaviour of the system with pulleys is illustrated numerically, and the relevance of different parameters is highlighted. Finally, the integrability of the dynamic system is studied, the main result being that the machine with pulleys is non-integrable. The status of the results on integrability of the pulley-less machine is also recalled.
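
A sketch of the idealized pulley-less, point-mass SAM (the paper's model additionally includes pulley inertia and rope winding), integrated with classical RK4; the masses, initial conditions, and step size are illustrative assumptions. Energy conservation serves as a sanity check on the integrator:

```python
import math

g = 9.81
M, m = 1.5, 1.0            # counterweight and pendulum masses (assumed ratio 1.5)

def deriv(s):
    """State s = [r, r', theta, theta'] for the pulley-less SAM."""
    r, rdot, th, thdot = s
    rddot = (m * r * thdot**2 - g * (M - m * math.cos(th))) / (M + m)
    thddot = -(2.0 * rdot * thdot + g * math.sin(th)) / r
    return [rdot, rddot, thdot, thddot]

def rk4_step(s, dt):
    k1 = deriv(s)
    k2 = deriv([si + 0.5 * dt * ki for si, ki in zip(s, k1)])
    k3 = deriv([si + 0.5 * dt * ki for si, ki in zip(s, k2)])
    k4 = deriv([si + dt * ki for si, ki in zip(s, k3)])
    return [si + dt / 6.0 * (a + 2*b + 2*c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

def energy(s):
    r, rdot, th, thdot = s
    return (0.5 * (M + m) * rdot**2 + 0.5 * m * r**2 * thdot**2
            + g * r * (M - m * math.cos(th)))

s = [1.0, 0.0, 0.5, 0.0]   # initial r, r', theta, theta' (released from rest)
E0 = energy(s)
for _ in range(500):
    s = rk4_step(s, 1e-3)
print(f"relative energy drift after 0.5 s: {abs(energy(s) - E0) / abs(E0):.2e}")
```

The equations follow from the Lagrangian L = ½(M+m)ṙ² + ½mr²θ̇² − gr(M − m cos θ); the short integration window keeps r safely away from the r = 0 singularity.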

10. Control of Boolean networks: hardness results and algorithms for tree structured networks.

PubMed

Akutsu, Tatsuya; Hayashida, Morihiro; Ching, Wai-Ki; Ng, Michael K

2007-02-21

Finding control strategies of cells is a challenging and important problem in the post-genomic era. This paper considers theoretical aspects of the control problem using the Boolean network (BN), which is a simplified model of genetic networks. It is shown that finding a control strategy leading to the desired global state is computationally intractable (NP-hard) in general. Furthermore, this hardness result is extended for BNs with considerably restricted network structures. These results justify existing exponential time algorithms for finding control strategies for probabilistic Boolean networks (PBNs). On the other hand, this paper shows that the control problem can be solved in polynomial time if the network has a tree structure. Then, this algorithm is extended for the case where the network has a few loops and the number of time steps is small. Though this paper focuses on theoretical aspects, biological implications of the theoretical results are also discussed.
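
The tree-structure result can be illustrated with a minimal one-step reachability sketch (not the authors' full time-stepped algorithm): leaves are externally controllable nodes, each internal node applies a Boolean function to its children, and a single bottom-up pass computes the set of values achievable at every node, so a target value at the root can be checked in polynomial time. The toy network below is an assumption:

```python
from itertools import product

def achievable(node):
    """Set of Boolean values the subtree rooted at `node` can be driven to."""
    if not node.get("children"):       # leaf = control node, freely settable
        return {False, True}
    child_sets = [achievable(c) for c in node["children"]]
    # Small fan-in assumed, so enumerating child-value combinations is cheap
    return {node["fn"](*combo) for combo in product(*child_sets)}

leaf = lambda: {"children": []}
# Hypothetical toy network: root = (x1 OR x2) AND (NOT x3)
net = {
    "fn": lambda a, b: a and b,
    "children": [
        {"fn": lambda a, b: a or b, "children": [leaf(), leaf()]},
        {"fn": lambda a: not a, "children": [leaf()]},
    ],
}

print("root can reach:", achievable(net))
```

The full control problem in the paper asks for a control sequence over multiple time steps; the bottom-up dynamic programming there follows the same pattern of propagating achievable-state sets up the tree.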

11. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

SciTech Connect

Mouton, S.; Ledoux, Y.; Teissandier, D.; Sebastian, P.

2010-06-15

A key challenge for the future is to reduce drastically the human impact on the environment. In the aeronautic field, this challenge means optimizing the design of the aircraft to decrease the global mass. This reduction leads to the optimization of every part constitutive of the plane. This operation is even more delicate when the material used is a composite material. In this case, it is necessary to find a compromise between the strength, the mass and the manufacturing cost of the component. Due to these different kinds of design constraints, it is necessary to assist engineers with a decision support system to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the different key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated in the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure risk calculation is based on finite element simulations (Pam RTM(Registered) and Samcef(Registered) software). The use of a genetic algorithm allows the impact of the design choices and their consequences on the failure risk of the component to be estimated. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure risk calculation is used for comparing possible industrialization alternatives. It is proposed to apply this method to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.

12. Parameter sampling capabilities of sequential and simultaneous data assimilation: II. Statistical analysis of numerical results

Fossum, Kristian; Mannseth, Trond

2014-11-01

We assess and compare the parameter sampling capabilities of one sequential and one simultaneous Bayesian, ensemble-based, joint state-parameter (JS) estimation method. In the companion paper, part I (Fossum and Mannseth 2014 Inverse Problems 30 114002), analytical investigations led us to propose three claims, essentially stating that the sequential method can be expected to outperform the simultaneous method for weakly nonlinear forward models. Here, we assess the reliability and robustness of these claims through statistical analysis of results from a range of numerical experiments. Samples generated by the two approximate JS methods are compared to samples from the posterior distribution generated by a Markov chain Monte Carlo method, using four approximate measures of distance between probability distributions. Forward-model nonlinearity is assessed from a stochastic nonlinearity measure allowing for sufficiently large model dimensions. Both toy models (with low computational complexity, and where the nonlinearity is fairly easy to control) and two-phase porous-media flow models (corresponding to down-scaled versions of problems to which the JS methods have been frequently applied recently) are considered in the numerical experiments. Results from the statistical analysis show strong support for all three claims stated in part I.

13. Noninvasive assessment of mitral inertness: clinical results with numerical model validation

NASA Technical Reports Server (NTRS)

Firstenberg, M. S.; Greenberg, N. L.; Smedira, N. G.; McCarthy, P. M.; Garcia, M. J.; Thomas, J. D.

2001-01-01

Inertial forces (M dv/dt) are a significant component of transmitral flow, but cannot be measured with Doppler echo. We validated a method of estimating M dv/dt. Ten patients had a dual-sensor transmitral (TM) catheter placed during cardiac surgery. Doppler and 2D echo were performed while acquiring LA and LV pressures. M dv/dt was determined from the Bernoulli equation using Doppler velocities and TM gradients. Results were compared with numerical modeling. TM gradients (range: 1.04-14.24 mmHg) consisted of 74.0 +/- 11.0% inertial forces (range: 0.6-12.9 mmHg). Multivariate analysis predicted Mdv/dt = -4.171(S/D (RATIO)) + 0.063(LAvolume-max) + 5. Using this equation, a strong relationship was obtained for the clinical dataset (y=0.98x - 0.045, r=0.90) and the results of numerical modeling (y=0.96x - 0.16, r=0.84). TM gradients are mainly inertial and, as validated by modeling, can be estimated with echocardiography.
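
The decomposition underlying this approach can be sketched with the unsteady Bernoulli relation in its simplified clinical form, dP = 4v² + M dv/dt (dP in mmHg, v in m/s): the inertial component is the measured gradient minus the convective term. The sample values below are assumptions, not the paper's patient data:

```python
# Hypothetical measurements; illustrative only.
dp_total = 5.0      # measured transmitral gradient [mmHg]
v = 0.6             # Doppler transmitral velocity [m/s]

convective = 4.0 * v**2           # simplified Bernoulli convective term [mmHg]
inertial = dp_total - convective  # M*dv/dt inferred as the remainder [mmHg]
frac = inertial / dp_total

print(f"convective = {convective:.2f} mmHg, inertial = {inertial:.2f} mmHg "
      f"({100 * frac:.0f}% of total)")
```

With these assumed numbers the inertial share comes out in the same ~70% range the study reports, but that is a property of the chosen inputs, not a reproduction of the clinical result.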

14. CoFlame: A refined and validated numerical algorithm for modeling sooting laminar coflow diffusion flames

Eaves, Nick A.; Zhang, Qingan; Liu, Fengshan; Guo, Hongsheng; Dworkin, Seth B.; Thomson, Murray J.

2016-10-01

Mitigation of soot emissions from combustion devices is a global concern. For example, recent EURO 6 regulations for vehicles have placed stringent limits on soot emissions. In order to allow design engineers to achieve the goal of reduced soot emissions, they must have the tools to do so. Due to the complex nature of soot formation, which includes growth and oxidation, detailed numerical models are required to gain fundamental insights into the mechanisms of soot formation. A detailed description of the CoFlame FORTRAN code, which models sooting laminar coflow diffusion flames, is given. The code solves axial and radial velocity, temperature, species conservation, and soot aggregate and primary particle number density equations. The sectional particle dynamics model includes nucleation, PAH condensation and HACA surface growth, surface oxidation, coagulation, fragmentation, particle diffusion, and thermophoresis. The code utilizes a distributed memory parallelization scheme with strip-domain decomposition. The public release of the CoFlame code, refined in terms of coding structure, to the research community accompanies this paper. CoFlame is validated against experimental data for reattachment length in an axi-symmetric pipe with a sudden expansion, and ethylene-air and methane-air diffusion flames for multiple soot morphological parameters and gas-phase species. Finally, the parallel performance and computational costs of the code are investigated.

15. Optimization of SiO2-TiNxOy-Cu interference absorbers: numerical and experimental results

Lazarov, Michel P.; Sizmann, R.; Frei, Ulrich

1993-10-01

SiO2-TiNxOy-Cu absorbers were prepared with activated reactive evaporation (ARE). The deposition parameters for the ARE process were adjusted according to the results of numerical optimizations by a genetic algorithm. We present spectral reflectance, calorimetric and grazing incidence X-ray reflection (GXR) measurements. The best coatings for application as a selective absorber in the range T = 100…200 °C exhibit a solar absorptance of 0.94 and a near-normal emittance of 0.044 at 100 °C. This emittance is correlated with the hemispherical emittance of 0.061 obtained from calorimetric measurements at 200 °C. First results of lifetime studies show that the coatings are thermally stable under vacuum up to 400 °C. The SiO2 film passivates the absorber; a substantial slowdown of degradation in dry air is observed. Our tests demonstrate that the coating will withstand breakdown of cooling fluid and vacuum if mounted in an evacuated collector.

16. Re-Computation of Numerical Results Contained in NACA Report No. 496

NASA Technical Reports Server (NTRS)

Perry, Boyd, III

2015-01-01

An extensive examination of NACA Report No. 496 (NACA 496), "General Theory of Aerodynamic Instability and the Mechanism of Flutter," by Theodore Theodorsen, is described. The examination included checking equations and solution methods and re-computing interim quantities and all numerical examples in NACA 496. The checks revealed that NACA 496 contains computational shortcuts (time- and effort-saving devices for engineers of the time) and clever artifices (employed in its solution methods), but, unfortunately, also contains numerous tripping points (aspects of NACA 496 that have the potential to cause confusion) and some errors. The re-computations were performed employing the methods and procedures described in NACA 496, but using modern computational tools. With some exceptions, the magnitudes and trends of the original results were in fair-to-very-good agreement with the re-computed results. The exceptions included what are speculated to be computational errors in the original in some instances and transcription errors in the original in others. Independent flutter calculations were performed and, in all cases, including those where the original and re-computed results differed significantly, were in excellent agreement with the re-computed results. Appendix A contains NACA 496; Appendix B contains a Matlab(Registered) program that performs the re-computation of results; Appendix C presents three alternate solution methods, with examples, for the two-degree-of-freedom solution method of NACA 496; Appendix D contains the three-degree-of-freedom solution method (outlined in NACA 496 but never implemented), with examples.

17. Carbon fiber composites inspection and defect characterization using active infrared thermography: numerical simulations and experimental results.

PubMed

Fernandes, Henrique; Zhang, Hai; Figueiredo, Alisson; Ibarra-Castanedo, Clemente; Guimarães, Gilmar; Maldague, Xavier

2016-12-01

Composite materials are widely used in the aeronautic industry. One reason is that they have strength and stiffness comparable to metals, with the added advantage of significant weight reduction. Infrared thermography (IT) is a safe nondestructive testing technique that has a fast inspection rate. In active IT, an external heat source is used to stimulate the material being inspected in order to generate a thermal contrast between the feature of interest and the background. In this paper, carbon-fiber-reinforced polymers are inspected using IT. More specifically, carbon/PEEK (polyether ether ketone) laminates with square Kapton inserts of different sizes and at different depths are tested with three different IT techniques: pulsed thermography, vibrothermography, and line scan thermography. The finite element method is used to simulate the pulsed thermography experiment. Numerical results displayed very good agreement with experimental results.

18. Asymptotic expansion for stellarator equilibria with a non-planar magnetic axis: Numerical results

Freidberg, Jeffrey; Cerfon, Antoine; Parra, Felix

2012-10-01

We have recently presented a new asymptotic expansion for stellarator equilibria that generalizes the classic Greene-Johnson expansion [1] to allow for 3D equilibria with a non-planar magnetic axis [2]. Our expansion achieves the two goals of reducing the complexity of the three-dimensional MHD equilibrium equations and of describing equilibria in modern stellarator experiments. The end result of our analysis is a set of two coupled partial differential equations for the plasma pressure and the toroidal vector potential which fully determine the stellarator equilibrium. Both equations are advection equations in which the toroidal angle plays the role of time. We show that the method of characteristics, following magnetic field lines, is a convenient way of solving these equations, avoiding the difficulties associated with the periodicity of the solution in the toroidal angle. By combining the method of characteristics with Green's function integrals for the evaluation of the magnetic field due to the plasma current, we obtain an efficient numerical solver for our expansion. Numerical equilibria thus calculated will be given.[4pt] [1] J.M. Greene and J.L. Johnson, Phys. Fluids 4, 875 (1961)[0pt] [2] A.J. Cerfon, J.P. Freidberg, and F.I. Parra, Bull. Am. Phys. Soc. 56, 16 GP9.00081 (2011)
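The method of characteristics mentioned above is easy to illustrate on a scalar advection equation, where the solution is constant along characteristics (a generic sketch, not the stellarator solver itself):

```python
import math

def solve_advection_characteristics(u0, c, t, xs):
    """Method of characteristics for u_t + c u_x = 0:
    the solution is constant along lines x - c t = const, so
    u(x, t) = u0(x - c t)."""
    return [u0(x - c * t) for x in xs]

# Periodic initial condition, mimicking periodicity in an angle coordinate.
u0 = lambda x: math.sin(x)
xs = [0.0, math.pi / 2, math.pi]
u = solve_advection_characteristics(u0, c=1.0, t=math.pi / 2, xs=xs)
```

In the equilibrium solver described above, the "time" variable is the toroidal angle and the characteristics are the magnetic field lines, which is why periodicity in the angle is handled naturally.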

19. Verification of Numerical Weather Prediction Model Results for Energy Applications in Latvia

Sīle, Tija; Cepite-Frisfelde, Daiga; Sennikovs, Juris; Bethers, Uldis

2014-05-01

EU governments have resolved to increase the production and consumption of renewable energy. Most of the renewable energy in Latvia is produced by Hydroelectric Power Plants (HPP), followed by bio-gas, wind power and bio-mass energy production. Wind and HPP power production is sensitive to meteorological conditions. Currently the basis of weather forecasting is Numerical Weather Prediction (NWP) models. There are numerous methodologies for evaluating the quality of NWP results (Wilks 2011), and their application can be conditional on the forecast end user. The goal of this study is to evaluate the performance of a Weather Research and Forecasting model (Skamarock 2008) implementation over the territory of Latvia, focusing on forecasts of wind speed and quantitative precipitation. The target spatial resolution is 3 km. Observational data from the Latvian Environment, Geology and Meteorology Centre are used. A number of standard verification metrics are calculated. The sensitivity to the model output interpretation (output spatial interpolation versus nearest gridpoint) is investigated. For the precipitation verification the dichotomous verification metrics are used. Sensitivity to different precipitation accumulation intervals is examined. Skamarock, William C. and Klemp, Joseph B. A time-split nonhydrostatic atmospheric model for weather research and forecasting applications. Journal of Computational Physics. 227, 2008, pp. 3465-3485. Wilks, Daniel S. Statistical Methods in the Atmospheric Sciences. Third Edition. Academic Press, 2011.
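The dichotomous (yes/no) verification metrics referred to above are standard 2x2 contingency-table scores; a small sketch (generic, with made-up counts):

```python
def dichotomous_scores(hits, false_alarms, misses, correct_negatives):
    """Standard 2x2 contingency-table scores for yes/no forecasts,
    e.g. "did precipitation exceed a threshold?" (see Wilks 2011)."""
    total = hits + false_alarms + misses + correct_negatives
    pod = hits / (hits + misses)                      # probability of detection
    far = false_alarms / (hits + false_alarms)        # false alarm ratio
    csi = hits / (hits + misses + false_alarms)       # critical success index
    bias = (hits + false_alarms) / (hits + misses)    # frequency bias
    pc = (hits + correct_negatives) / total           # proportion correct
    return {"POD": pod, "FAR": far, "CSI": csi, "bias": bias, "PC": pc}

# Illustrative counts only, not results from this study.
scores = dichotomous_scores(hits=82, false_alarms=38, misses=23,
                            correct_negatives=222)
```

Sensitivity to the accumulation interval then amounts to recomputing the table, and hence these scores, for precipitation totals accumulated over different windows.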

20. The vertical age profile in sea ice: Theory and numerical results

Lietaer, Olivier; Deleersnijder, Eric; Fichefet, Thierry; Vancoppenolle, Martin; Comblen, Richard; Bouillon, Sylvain; Legat, Vincent

The sea ice age is an interesting diagnostic tool because it may provide a proxy for the sea ice thickness and is easier to infer from observations than the sea ice thickness. Remote sensing algorithms and modeling approaches proposed in the literature indicate significant methodological uncertainties, leading to different ice age values and physical interpretations. In this work, we focus on the vertical age distribution in sea ice. Based on the age theory developed for marine modeling, we propose a vertically-variable sea ice age definition which gives a measure of the time elapsed since the accretion of the ice particle under consideration. An analytical solution is derived from Stefan's law for a horizontally homogeneous ice layer with a periodic ice thickness seasonal cycle. Two numerical methods to solve the age equation are proposed. In the first one, the domain is discretized adaptively in space thanks to Lagrangian particles in order to capture the age profile and its discontinuities. The second one focuses on the mean age of the ice using as few degrees of freedom as possible and is based on an Arbitrary Lagrangian-Eulerian (ALE) spatial discretization and the finite element method. We observe an excellent agreement between the Lagrangian particles and the analytical solution. The mean value and the standard deviation of the finite element solution agree with the analytical solution and a linear approximation is found to represent the age profile the better, the older the ice gets. Both methods are finally applied to a stand-alone thermodynamic sea ice model of the Arctic. Computing the vertically-averaged ice age reduces by a factor of about 2 the simulated ice age compared to the oldest particle of the ice columns. A high correlation is found between the ice thickness and the age of the oldest particle. However, whether or not this will remain valid once ice dynamics is included should be investigated. In addition, the present study, based on

1. Accelerating dissipative particle dynamics simulations on GPUs: Algorithms, numerics and applications

2014-11-01

We present a scalable dissipative particle dynamics simulation code, fully implemented on the Graphics Processing Units (GPUs) using a hybrid CUDA/MPI programming model, which achieves 10-30 times speedup on a single GPU over 16 CPU cores and almost linear weak scaling across a thousand nodes. A unified framework is developed within which the efficient generation of the neighbor list and maintaining particle data locality are addressed. Our algorithm generates strictly ordered neighbor lists in parallel, while the construction is deterministic and makes no use of atomic operations or sorting. Such a neighbor list leads to optimal data loading efficiency when combined with a two-level particle reordering scheme. A faster in situ generation scheme for Gaussian random numbers is proposed using precomputed binary signatures. We designed custom transcendental functions that are fast and accurate for evaluating the pairwise interaction. The correctness and accuracy of the code are verified through a set of test cases simulating Poiseuille flow and spontaneous vesicle formation. Computer benchmarks demonstrate the speedup of our implementation over the CPU implementation as well as strong and weak scalability. A large-scale simulation of spontaneous vesicle formation consisting of 128 million particles was conducted to further illustrate the practicality of our code in real-world applications. Catalogue identifier: AETN_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AETN_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 1 602 716 No. of bytes in distributed program, including test data, etc.: 26 489 166 Distribution format: tar.gz Programming language: C/C++, CUDA C/C++, MPI. Computer: Any computers having nVidia GPGPUs with compute capability 3.0. Operating system: Linux. Has the code been

2. Modal characterization of the ASCIE segmented optics testbed: New algorithms and experimental results

NASA Technical Reports Server (NTRS)

Carrier, Alain C.; Aubrun, Jean-Noel

1993-01-01

New frequency response measurement procedures, on-line modal tuning techniques, and off-line modal identification algorithms are developed and applied to the modal identification of the Advanced Structures/Controls Integrated Experiment (ASCIE), a generic segmented optics telescope test-bed representative of future complex space structures. The frequency response measurement procedure uses all the actuators simultaneously to excite the structure and all the sensors to measure the structural response so that all the transfer functions are measured simultaneously. Structural responses to sinusoidal excitations are measured and analyzed to calculate spectral responses. The spectral responses in turn are analyzed as the spectral data become available and, which is new, the results are used to maintain high quality measurements. Data acquisition, processing, and checking procedures are fully automated. As the acquisition of the frequency response progresses, an on-line algorithm keeps track of the actuator force distribution that maximizes the structural response to automatically tune to a structural mode when approaching a resonant frequency. This tuning is insensitive to delays, ill-conditioning, and nonproportional damping. Experimental results show that it is useful for modal surveys even in high modal density regions. For thorough modeling, a constructive procedure is proposed to identify the dynamics of a complex system from its frequency response with the minimization of a least-squares cost function as a desirable objective. This procedure relies on off-line modal separation algorithms to extract modal information and on least-squares parameter subset optimization to combine the modal results and globally fit the modal parameters to the measured data. The modal separation algorithms resolved a modal density of 5 modes/Hz in the ASCIE experiment. They promise to be useful in many challenging applications.

3. Evaluation of observation-driven evaporation algorithms: results of the WACMOS-ET project

Miralles, Diego G.; Jimenez, Carlos; Ershadi, Ali; McCabe, Matthew F.; Michel, Dominik; Hirschi, Martin; Seneviratne, Sonia I.; Jung, Martin; Wood, Eric F.; (Bob) Su, Z.; Timmermans, Joris; Chen, Xuelong; Fisher, Joshua B.; Mu, Quiaozen; Fernandez, Diego

2015-04-01

Terrestrial evaporation (ET) links the continental water, energy and carbon cycles. Understanding the magnitude and variability of ET at the global scale is an essential step towards reducing uncertainties in our projections of climatic conditions and water availability for the future. However, the requirement of global observational data of ET can neither be satisfied with our sparse global in-situ networks, nor with the existing satellite sensors (which cannot measure evaporation directly from space). This situation has led to the recent rise of several algorithms dedicated to deriving ET fields from satellite data indirectly, based on the combination of ET-drivers that can be observed from space (e.g. radiation, temperature, phenological variability, water content, etc.). These algorithms can either be based on physics (e.g. Priestley and Taylor or Penman-Monteith approaches) or be purely statistical (e.g., machine learning). However, and despite the efforts from different initiatives like GEWEX LandFlux (Jimenez et al., 2011; Mueller et al., 2013), the uncertainties inherent in the resulting global ET datasets remain largely unexplored, partly due to a lack of inter-product consistency in forcing data. In response to this need, the ESA WACMOS-ET project started in 2012 with the main objectives of (a) developing a Reference Input Data Set to derive and validate ET estimates, and (b) performing a cross-comparison, error characterization and validation exercise of a group of selected ET algorithms driven by this Reference Input Data Set and by in-situ forcing data. The algorithms tested are SEBS (Su et al., 2002), the Penman- Monteith approach from MODIS (Mu et al., 2011), the Priestley and Taylor JPL model (Fisher et al., 2008), the MPI-MTE model (Jung et al., 2010) and GLEAM (Miralles et al., 2011). In this presentation we will show the first results from the ESA WACMOS-ET project. The performance of the different algorithms at multiple spatial and temporal
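Of the physically based approaches mentioned, the Priestley and Taylor form is the simplest to sketch (a generic implementation with textbook constants, not any project's actual code):

```python
import math

def priestley_taylor_et(rn, g, t_celsius, alpha=1.26):
    """Potential evaporation (same units as rn) from the Priestley & Taylor
    (1972) form:  LE = alpha * Delta / (Delta + gamma) * (Rn - G).

    Delta is the slope of the saturation vapour pressure curve (kPa/K),
    gamma the psychrometric constant (~0.066 kPa/K near sea level)."""
    es = 0.6108 * math.exp(17.27 * t_celsius / (t_celsius + 237.3))  # Tetens, kPa
    delta = 4098.0 * es / (t_celsius + 237.3) ** 2
    gamma = 0.066
    return alpha * delta / (delta + gamma) * (rn - g)
```

The algorithms compared in WACMOS-ET differ mainly in how they constrain such a potential rate with satellite-observable drivers (radiation, temperature, phenology, water content), so this sketch is only the common physical core, not any one product.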

4. Experimental and numerical results for CO2 concentration and temperature profiles in an occupied room

Cotel, Aline; Junghans, Lars; Wang, Xiaoxiang

2014-11-01

In recent years, a recognition of the scope of the negative environmental impact of existing buildings has spurred academic and industrial interest in transforming existing building design practices and disciplinary knowledge. For example, buildings alone consume 72% of the electricity produced annually in the United States; this share is expected to rise to 75% by 2025 (EPA, 2009). Significant reductions in overall building energy consumption can be achieved using green building methods such as natural ventilation. An office was instrumented on campus to acquire CO2 concentrations and temperature profiles at multiple locations while a single occupant was present. Using openFOAM, numerical calculations were performed to allow for comparisons of the CO2 concentration and temperature profiles for different ventilation strategies. Ultimately, these results will be the inputs into a real time feedback control system that can adjust actuators for indoor ventilation and utilize green design strategies. Funded by UM Office of Vice President for Research.

5. Lateral and axial resolutions of an angle-deviation microscope for different numerical apertures: experimental results

Chiu, Ming-Hung; Lai, Chin-Fa; Tan, Chen-Tai; Lin, Yi-Zhi

2011-03-01

This paper presents a study of the lateral and axial resolutions of a transmission laser-scanning angle-deviation microscope (TADM) with different numerical aperture (NA) values. The TADM is based on geometric optics and surface plasmon resonance principles. The surface height is proportional to the phase difference between two marginal rays of the test beam, which is passed through the test medium. We used common-path heterodyne interferometry to measure the phase difference in real time, and used a personal computer to calculate and plot the surface profile. The experimental results showed that the best lateral and axial resolutions for NA = 0.41 were 0.5 μm and 3 nm, respectively, and the lateral resolution breaks through the diffraction limits.

6. Numerical simulation and experimental results of filament wound CFRP tubes tested under biaxial load

Amaldi, A.; Giannuzzi, M.; Marchetti, M.; Miliozzi, A.

1992-10-01

The analysis of angle ply carbon/epoxy laminated composites when subjected to uniaxial and biaxial stresses is presented. Three classes of interwoven pattern filament wound cylindrical specimens are studied in order to compare the influence of angle on the mechanical behavior of the laminate. Three dimensional finite element and thin shell analyses were first applied to the problem in order to predict global elastic behavior of specimens subjected to uniaxial loads. Different failure criteria were then adopted to investigate specimens' failure and experimental tests were carried out for a comparison with numerical results. Biaxial stress conditions were produced by applying combinations of internal pressure and axial tensile and compressive loads to the specimens.

7. Dynamics of Tachyon Fields and Inflation - Comparison of Analytical and Numerical Results with Observation

Milošević, M.; Dimitrijević, D. D.; Djordjević, G. S.; Stojanović, M. D.

2016-06-01

The role tachyon fields may play in the evolution of the early universe is discussed in this paper. We consider the evolution of a flat and homogeneous universe governed by a tachyon scalar field with the DBI-type action and calculate the slow-roll parameters of inflation, scalar spectral index (n), and tensor-scalar ratio (r) for the given potentials. We pay special attention to the inverse power potential, first of all to V(x) ~ x^{-4}, and compare the available results obtained by analytical and numerical methods with those obtained by observation. It is shown that the computed values of the observational parameters and the observed ones are in good agreement for high values of the constant X_0. The possibility that the influence of the radion field can extend the range of acceptable values of the constant X_0 to the string-theory-motivated sector of its values is briefly considered.
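For orientation, the textbook canonical (non-tachyonic) slow-roll expressions for an inverse quartic potential reduce to closed form; a sketch (this is not the DBI calculation of the paper):

```python
def slow_roll_inverse_quartic(x):
    """Standard single-field slow-roll parameters for V(x) ~ x**-4 with the
    reduced Planck mass set to 1 (the canonical textbook expressions, NOT
    the tachyonic/DBI versions used in the paper):
        eps = 0.5 * (V'/V)**2 = 8 / x**2
        eta = V''/V          = 20 / x**2
    """
    eps = 8.0 / x**2
    eta = 20.0 / x**2
    n_s = 1.0 - 6.0 * eps + 2.0 * eta      # scalar spectral index
    r = 16.0 * eps                          # tensor-to-scalar ratio
    return n_s, r

n_s, r = slow_roll_inverse_quartic(x=16.0)  # larger x drives n_s toward 1
```

For this potential n_s = 1 - 8/x^2, so the spectrum approaches scale invariance only at large field values, which is the canonical analogue of the paper's observation that agreement requires high values of the constant X_0.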

8. Lagrangian methods for blood damage estimation in cardiovascular devices--How numerical implementation affects the results.

PubMed

Marom, Gil; Bluestein, Danny

2016-01-01

This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passages stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be given to the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses, similar to the ones presented here, should be employed.
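A power-law stress-accumulation model of the kind evaluated here can be sketched as follows (illustrative constants of Giersiepen type, not the paper's values; in this linearized form single-passage and repeated-passages accumulation differ only by the pass count):

```python
def stress_accumulation(pathline, a=2.416, C=3.62e-7):
    """Linearized power-law blood damage accumulation along one pathline:
    D = sum over segments of C * tau**a * dt, where each segment carries a
    scalar shear stress tau (Pa) and exposure time dt (s).  The exponent
    and coefficient are illustrative Giersiepen-type constants."""
    return sum(C * tau ** a * dt for tau, dt in pathline)

def repeated_passages(pathline, n_passes):
    """Repeated-passages option: the same pathline traversed n times."""
    return n_passes * stress_accumulation(pathline)
```

The paper's point is that choices upstream of this formula, where particles are seeded, whether a stochastic walk perturbs the trajectories, and how pathlines are simplified, change the (tau, dt) history and therefore the accumulated damage.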

10. Interacting steps with finite-range interactions: Analytical approximation and numerical results

Jaramillo, Diego Felipe; Téllez, Gabriel; González, Diego Luis; Einstein, T. L.

2013-05-01

We calculate an analytical expression for the terrace-width distribution P(s) for an interacting step system with nearest- and next-nearest-neighbor interactions. Our model is derived by mapping the step system onto a statistically equivalent one-dimensional system of classical particles. The validity of the model is tested with several numerical simulations and experimental results. We explore the effect of the range of interactions q on the functional form of the terrace-width distribution and pair correlation functions. For physically plausible interactions, we find modest changes when next-nearest neighbor interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.
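Terrace-width distributions of this kind are commonly fitted with the generalized Wigner surmise; a sketch (generic, not the authors' code) with the normalization and unit-mean constraints built in:

```python
import math

def wigner_P(s, rho=2.0):
    """Generalized Wigner surmise often used to fit terrace-width
    distributions:  P(s) = a * s**rho * exp(-b * s**2),
    with a and b fixed so that P integrates to 1 and the mean width is 1."""
    b = (math.gamma((rho + 2) / 2) / math.gamma((rho + 1) / 2)) ** 2
    a = 2 * b ** ((rho + 1) / 2) / math.gamma((rho + 1) / 2)
    return a * s ** rho * math.exp(-b * s * s)

# Crude checks of the two constraints by a Riemann sum.
ds = 0.001
norm = sum(wigner_P(i * ds) for i in range(1, 8000)) * ds
mean = sum(i * ds * wigner_P(i * ds) for i in range(1, 8000)) * ds
```

In practice the exponent rho is related to the strength of the step-step interaction, which is how a fitted P(s) is used to extract the interaction terms mentioned in the abstract.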

11. Ultimate tensile strength of embedded I-sections: a comparison of experimental and numerical results

Heristchian, Mahmoud; Pourakbar, Pouyan; Imeni, Saeed; Ramezani, M. Reza Adib

2014-12-01

Exposed baseplates together with anchor bolts are the customary method of connecting steel structures to concrete footings. Post-Kobe studies revealed that embedded column bases respond better to earthquake uplift forces. The embedded column bases also offer higher freedom in achieving the required strength, rigidity and ductility. The paper presents the results of the pullout failure of three embedded IPE140 sections, tested under different conditions. The numerical models are then generated in Abaqus 6.10-1 software. It is concluded that the steel profiles could be directly anchored in concrete without using anchor bolts as practiced in the conventional exposed column bases. Such embedded column bases can develop the required resistance against pullout forces at lower constructional costs.

12. Effects of boundary conditions and partial drainage on cyclic simple shear test results - a numerical study

Wang, Bin; Popescu, Radu; Prevost, Jean H.

2004-08-01

Owing to imperfect boundary conditions in laboratory soil tests and the possibility of water diffusion inside the soil specimen in undrained tests, the assumption of uniform stress/strain over the sample is not valid. This study presents a qualitative assessment of the effects of non-uniformities in stresses and strains, as well as effects of water diffusion within the soil sample on the global results of undrained cyclic simple shear tests. The possible implications of those phenomena on the results of liquefaction strength assessment are also discussed. A state-of-the-art finite element code for transient analysis of multi-phase systems is used to compare results of the so-called element tests (numerical constitutive experiments assuming uniform stress/strain/pore pressure distribution throughout the sample) with results of actual simulations of undrained cyclic simple shear tests using a finite element mesh and realistic boundary conditions. The finite element simulations are performed under various conditions, covering the entire range of practical situations: (1) perfectly drained soil specimen with constant volume, (2) perfectly undrained specimen, and (3) undrained test with possibility of water diffusion within the sample. The results presented here are restricted to strain-driven tests performed for a loose uniform fine sand with relative density Dr=40%. Effects of system compliance in undrained laboratory simple shear tests are not investigated here.

13. A treatment algorithm for patients with large skull bone defects and first results.

PubMed

Lethaus, Bernd; Ter Laak, Marielle Poort; Laeven, Paul; Beerens, Maikel; Koper, David; Poukens, Jules; Kessler, Peter

2011-09-01

Large skull bone defects resulting from craniotomies due to cerebral insults, trauma or tumours create functional and aesthetic disturbances to the patient. The reconstruction of large osseous defects is still challenging. A treatment algorithm is presented based on the close interaction of radiologists, computer engineers and cranio-maxillofacial surgeons. From 2004 until today twelve consecutive patients have been operated on successfully according to this treatment plan. Titanium and polyetheretherketone (PEEK) were used to manufacture the implants. The treatment algorithm is proved to be reliable. No corrections had to be performed either to the skull bone or to the implant. Short operations and hospitalization periods are essential prerequisites for treatment success and justify the high expenses.

14. Electron Beam Return-Current Losses in Solar Flares: Initial Comparison of Analytical and Numerical Results

NASA Technical Reports Server (NTRS)

Holman, Gordon

2010-01-01

Accelerated electrons play an important role in the energetics of solar flares. Understanding the process or processes that accelerate these electrons to high, nonthermal energies also depends on understanding the evolution of these electrons between the acceleration region and the region where they are observed through their hard X-ray or radio emission. Energy losses in the co-spatial electric field that drives the current-neutralizing return current can flatten the electron distribution toward low energies. This in turn flattens the corresponding bremsstrahlung hard X-ray spectrum toward low energies. The lost electron beam energy also enhances heating in the coronal part of the flare loop. Extending earlier work by Knight & Sturrock (1977), Emslie (1980), Diakonov & Somov (1988), and Litvinenko & Somov (1991), I have derived analytical and semi-analytical results for the nonthermal electron distribution function and the self-consistent electric field strength in the presence of a steady-state return-current. I review these results, presented previously at the 2009 SPD Meeting in Boulder, CO, and compare them and computed X-ray spectra with numerical results obtained by Zharkova & Gordovskii (2005, 2006). The physical significance of similarities and differences in the results will be emphasized. This work is supported by NASA's Heliophysics Guest Investigator Program and the RHESSI Project.

15. Efficient algorithms for mixed aleatory-epistemic uncertainty quantification with application to radiation-hardened electronics. Part I, algorithms and benchmark results.

SciTech Connect

Swiler, Laura Painton; Eldred, Michael Scott

2009-09-01

This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.
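The nested (second-order) structure described, aleatory statistics inside, epistemic bounds outside, can be sketched with a crude grid in place of interval optimization (a toy illustration, not the report's stochastic-expansion method):

```python
import random
import statistics

def epistemic_bounds(model, epistemic_interval, n_outer=21, n_inner=2000, seed=0):
    """Second-order UQ sketch: the epistemic parameter e is known only to
    lie in an interval, while the aleatory variable x is random.  For each
    candidate e we estimate the aleatory mean by Monte Carlo, then report
    the interval [min, max] of that statistic over e (a crude grid scan
    standing in for the interval optimization of the report)."""
    lo, hi = epistemic_interval
    rng = random.Random(seed)
    means = []
    for k in range(n_outer):
        e = lo + (hi - lo) * k / (n_outer - 1)
        samples = [model(e, rng.gauss(0.0, 1.0)) for _ in range(n_inner)]
        means.append(statistics.fmean(samples))
    return min(means), max(means)

# Toy model: response = e + 0.1*x**2, so the aleatory mean is e + 0.1
# and the epistemic bounds on the mean should be roughly [1.1, 2.1].
bounds = epistemic_bounds(lambda e, x: e + 0.1 * x * x, (1.0, 2.0))
```

The report's contribution is to make both levels cheap: stochastic expansions replace the inner Monte Carlo loop, and interval optimization replaces the outer grid scan.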

16. Orion Guidance and Control Ascent Abort Algorithm Design and Performance Results

NASA Technical Reports Server (NTRS)

Proud, Ryan W.; Bendle, John R.; Tedesco, Mark B.; Hart, Jeremy J.

2009-01-01

During the ascent flight phase of NASA's Constellation Program, the Ares launch vehicle propels the Orion crew vehicle to an agreed-upon insertion target. If a failure occurs at any point during ascent, a system must be in place to abort the mission and return the crew to a safe landing with a high probability of success. To achieve continuous abort coverage one of two sets of effectors is used. Either the Launch Abort System (LAS), consisting of the Attitude Control Motor (ACM) and the Abort Motor (AM), or the Service Module (SM), consisting of SM Orion Main Engine (OME), Auxiliary (Aux) Jets, and Reaction Control System (RCS) jets, is used. The LAS effectors are used for aborts from liftoff through the first 30 seconds of second stage flight. The SM effectors are used from that point through Main Engine Cutoff (MECO). There are two distinct sets of Guidance and Control (G&C) algorithms that are designed to maximize the performance of these abort effectors. This paper will outline the necessary inputs to the G&C subsystem, the preliminary design of the G&C algorithms, the ability of the algorithms to predict what abort modes are achievable, and the resulting success of the abort system. Abort success will be measured against the Preliminary Design Review (PDR) abort performance metrics and overall performance will be reported. Finally, potential improvements to the G&C design will be discussed.

17. Algorithm development

NASA Technical Reports Server (NTRS)

Barth, Timothy J.; Lomax, Harvard

1987-01-01

The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

18. Comparison Between Numerical and Experimental Results on Mechanical Stirrer and Bubbling in a Cylindrical Tank - 13047

SciTech Connect

Lima da Silva, M.; Sauvage, E.; Brun, P.; Gagnoud, A.; Fautrelle, Y.; Riva, R.

2013-07-01

The process of vitrification in a cold crucible heated by direct induction is used in the fusion of oxides. Its feature is the production of high-purity materials. The high level of purity of the melt is achieved because this melting technique excludes contamination of the charge by the crucible. The aim of the present paper is to analyze the hydrodynamics of the vitrification process by direct induction, with a focus on the effects associated with the interaction between the mechanical stirrer and bubbling. Considering the complexity of the analyzed system and the goal of the present work, we simplified the system by not taking into account the thermal and electromagnetic phenomena. Based on the concept of hydraulic similitude, we performed an experimental study and a numerical modeling of the simplified model. The results of these two studies were compared and showed good agreement. The results presented in this paper, in conjunction with previous work, contribute to a better understanding of the hydrodynamic effects resulting from the interaction between the mechanical stirrer and air bubbling in a cold crucible heated by direct induction. Further work will take into account thermal and electromagnetic phenomena in the presence of mechanical stirring and air bubbling. (authors)

19. A Comparison of Direction Finding Results From an FFT Peak Identification Technique With Those From the Music Algorithm

DTIC Science & Technology

1991-07-01

A Comparison of Direction Finding Results From an FFT Peak Identification Technique With Those From the Music Algorithm (U), by L.E. Montbriand. CRC Report No. 1438, Government of Canada, Ottawa, July 1991.

20. Newest Results from the Investigation of Polymer-Induced Drag Reduction through Direct Numerical Simulation

Dimitropoulos, Costas D.; Beris, Antony N.; Sureshkumar, R.; Handler, Robert A.

1998-11-01

This work continues our attempts to elucidate theoretically the mechanism of polymer-induced drag reduction through direct numerical simulations of turbulent channel flow, using an independently evaluated rheological model for the polymer stress. Using appropriate scaling to accommodate effects due to viscoelasticity reveals a great consistency in the results for different combinations of the polymer concentration and chain extension. This helps demonstrate that our observations are applicable to very dilute systems, which are currently not possible to simulate. It also reinforces the hypothesis that one of the prerequisites for the phenomenon of drag reduction is a sufficiently enhanced extensional viscosity, corresponding to the level of intensity and duration of extensional rates typically encountered during the turbulent flow. Moreover, these results motivate a study of the turbulence structure at larger Reynolds numbers and for different periodic computational cell sizes. In addition, the Reynolds stress budgets demonstrate that flow elasticity adversely affects the activities represented by the pressure-strain correlations, leading to a redistribution of turbulent kinetic energy amongst all directions. Finally, we discuss the influence of viscoelasticity in reducing the production of streamwise vorticity.

1. Experimental and numerical investigations of internal heat transfer in an innovative trailing edge blade cooling system: stationary and rotation effects, part 2: numerical results

Beniaiche, Ahmed; Ghenaiet, Adel; Carcasci, Carlo; Facchini, Bruno

2017-02-01

This paper presents a numerical validation of the aero-thermal study of a 30:1 scaled model reproducing an innovative trailing edge with one row of enlarged pedestals under stationary and rotating conditions. A CFD analysis was performed with the commercial code ANSYS Fluent, using the k-ω SST turbulence model and an isothermal air flow for both static and rotating conditions (Ro up to 0.23). The numerical model is validated first by comparing the numerical velocity profile distributions to those obtained experimentally by means of the PIV technique for Re = 20,000 and Ro = 0-0.23. The second validation is based on the comparison of the numerical 2D HTC maps over the heated plate to TLC experimental data for a smooth surface, for Reynolds numbers of 20,000 and 40,000 and Ro = 0-0.23. Two tip conditions were considered: open tip and closed tip. Results for the average Nusselt number inside the pedestal duct region are also presented. The obtained results help predict the flow field and evaluate the aero-thermal performance of the studied blade cooling system during the design stage.

2. Numerical Algorithms & Parallel Tasking.

DTIC Science & Technology

1985-09-12

…senior personnel have been supported under this contract: Virginia Klema, principal investigator (3.5 months), Elizabeth Ducot (2.25 months), and George… CONCURRENT ENVIRONMENT, Elizabeth R. Ducot: The purpose of this note is twofold. The first is to present the mechanisms by which a user activates and describes…

3. Static Analysis of Numerical Algorithms

DTIC Science & Technology

2016-04-01

…abstract domain provides (1) an abstract type to represent concrete program states, and (2) abstract functions to represent the effect of concrete state-changing actions. Rather than simulate the concrete program, abstract interpretation uses abstract domains to construct and simulate an… On the other hand, the abstraction does allow us to cheaply compute some kinds of information about the concrete program. In the example, we can…

4. Tsunami Hazards along the Eastern Australian Coast from Potential Earthquakes: Results from Numerical Simulations

Xing, H. L.; Ding, R. W.; Yuen, D. A.

2015-08-01

Australia is surrounded by the Pacific and Indian Oceans and thus may suffer from tsunamis due to its proximity to subduction earthquakes around the boundary of the Australian Plate. Potential tsunami risks along the eastern coast, where more and more people currently live, are numerically investigated through a scenario-based method to provide an estimation of the tsunami hazard in this region. We have chosen and calculated the tsunami waves generated at the New Hebrides Trench and the Puysegur Trench, and we further investigated the relevant tsunami hazards along the eastern coast and their sensitivities to various sea-floor frictions and earthquake parameters (i.e. the strike, dip and slip angles and the earthquake magnitude/rupture length). The results indicate that the Puysegur Trench poses a seismic threat capable of causing wave amplitudes over 1.5 m along the coasts of Tasmania, Victoria, and New South Wales, even reaching over 2.6 m near Sydney, Maria Island, and Gabo Island in a certain worst case, while the cities along the coast of Queensland are potentially less vulnerable than those on the southeastern Australian coast.

5. Analysis of formation pressure test results in the Mount Elbert methane hydrate reservoir through numerical simulation

USGS Publications Warehouse

Kurihara, M.; Sato, A.; Funatsu, K.; Ouchi, H.; Masuda, Y.; Narita, H.; Collett, T.S.

2011-01-01

Targeting the methane hydrate (MH) bearing units C and D at the Mount Elbert prospect on the Alaska North Slope, four MDT (Modular Dynamic Formation Tester) tests were conducted in February 2007. The C2 MDT test was selected for history matching simulation in the MH Simulator Code Comparison Study. Through history matching simulation, the physical and chemical properties of unit C were adjusted, which suggested the most likely reservoir properties of this unit. Based on the properties thus tuned, numerical models replicating a "Mount Elbert C2 zone like reservoir", a "PBU L-Pad like reservoir" and a "PBU L-Pad down dip like reservoir" were constructed. The long-term production performances of wells in these reservoirs were then forecast assuming MH dissociation and production by the methods of depressurization, combination of depressurization and wellbore heating, and hot water huff and puff. The predicted cumulative gas production ranges from 2.16×10⁶ m³/well to 8.22×10⁸ m³/well, depending mainly on the initial temperature of the reservoir and on the production method. This paper describes the details of the modeling and history matching simulation. It also presents the results of examinations of the effects of reservoir properties on MH dissociation and production performances under the application of the depressurization and thermal methods. © 2010 Elsevier Ltd.

6. Numerical Results for a Polytropic Cosmology Interpreted as a Dust Universe Producing Gravitational Waves

Klapp, J.; Cervantes-Cota, J.; Chauvet, P.

1990-11-01

7. Thermodiffusion in concentrated ferrofluids: Experimental and numerical results on magnetic thermodiffusion

SciTech Connect

Sprenger, Lisa; Lange, Adrian; Odenbach, Stefan

2014-02-15

Ferrofluids consist of magnetic nanoparticles dispersed in a carrier liquid. Their strong thermodiffusive behaviour, characterised by the Soret coefficient, coupled with the dependency of the fluid's parameters on magnetic fields, is dealt with in this work. It is known from former experimental investigations on the one hand that the Soret coefficient itself is magnetic field dependent and on the other hand that the accuracy of the coefficient's experimental determination highly depends on the volume concentration of the fluid. The thermally driven separation of particles and carrier liquid is carried out with a concentrated ferrofluid (φ = 0.087) in a horizontal thermodiffusion cell and is compared to equally detected former measurement data. The temperature gradient (1 K/mm) is applied perpendicular to the separation layer. The magnetic field is applied either parallel or perpendicular to the temperature difference. For three different magnetic field strengths (40 kA/m, 100 kA/m, 320 kA/m) the diffusive separation is detected. It reveals a sign change of the Soret coefficient with rising field strength for both field directions, which stands for a change in the direction of motion of the particles. This behaviour contradicts former experimental results with a dilute magnetic fluid, in which a change in the coefficient's sign could only be detected for the parallel setup. An anisotropic behaviour in the current data is measured, the separation being more intense in the perpendicular position of the magnetic field: S_T∥ = −0.152 K⁻¹ and S_T⊥ = −0.257 K⁻¹ at H = 320 kA/m. The ferrofluiddynamics theory (FFD theory) describes the thermodiffusive processes thermodynamically, and a numerical simulation of the fluid's separation depending on the two transport parameters ξ∥ and ξ⊥ used within the FFD theory can be implemented. In the case of a parallel aligned magnetic field, the parameter can

8. Parallel Newton-Krylov-Schwarz algorithms for the three-dimensional Poisson-Boltzmann equation in numerical simulation of colloidal particle interactions

Hwang, Feng-Nan; Cai, Shang-Rong; Shao, Yun-Long; Wu, Jong-Shinn

2010-09-01

We investigate fully parallel Newton-Krylov-Schwarz (NKS) algorithms for solving the large sparse nonlinear systems of equations arising from the finite element discretization of the three-dimensional Poisson-Boltzmann equation (PBE), which is often used to describe the colloidal phenomena of an electric double layer around charged objects in colloidal and interfacial science. The NKS algorithm employs an inexact Newton method with backtracking (INB) as the nonlinear solver, in conjunction with a Krylov subspace method as the linear solver for the corresponding Jacobian system. An overlapping Schwarz method is employed as a preconditioner to accelerate the convergence of the linear solver. Two test cases, two isolated charged particles and two colloidal particles in a cylindrical pore, are used as benchmark problems to validate the correctness of our parallel NKS-based PBE solver. In addition, a truly three-dimensional case, which models the interaction between two charged spherical particles within a rough charged micro-capillary, is simulated to demonstrate the applicability of our PBE solver to a problem with complex geometry. Finally, based on results obtained from a PC cluster, we show numerically that NKS is quite suitable for the numerical simulation of interaction between colloidal particles, since NKS is robust in the sense that INB is able to converge within a small number of iterations regardless of the geometry, the mesh size, and the number of processors. With the help of an additively preconditioned Krylov subspace method, NKS achieves a parallel efficiency of 71% or better on up to a hundred processors for a 3D problem with 5 million unknowns.
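The inexact Newton with backtracking (INB) iteration named in this record can be sketched in a few lines. The toy two-equation system, the residual-decrease acceptance rule, and the direct Jacobian solve below are illustrative assumptions; a full NKS solver replaces the direct solve with a Schwarz-preconditioned Krylov iteration on a large finite-element system.

```python
import numpy as np

def inexact_newton_backtracking(F, J, x0, tol=1e-10, max_iter=50):
    """Newton iteration with a simple backtracking line search (INB sketch).

    F: residual function, J: Jacobian function.  The Jacobian system is
    solved directly here for brevity; NKS would use a preconditioned
    Krylov method instead.
    """
    x = x0.astype(float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        step = np.linalg.solve(J(x), -r)
        lam = 1.0
        # Backtrack (halve the step) until the residual norm decreases.
        while np.linalg.norm(F(x + lam * step)) > (1 - 1e-4 * lam) * np.linalg.norm(r):
            lam *= 0.5
            if lam < 1e-8:
                break
        x = x + lam * step
    return x

# Toy nonlinear system: x0^2 + x1^2 = 4 and x0 * x1 = 1.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] * x[1] - 1.0])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [x[1], x[0]]])
sol = inexact_newton_backtracking(F, J, np.array([2.0, 0.2]))
```

The backtracking safeguard is what makes INB robust far from the solution: a full Newton step is tried first, and only shortened when it fails to reduce the residual.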

9. A Numerical Algorithm to Calculate the Pressure Distribution of the TPS Front End Due to Desorption Induced by Synchrotron Radiation

Sheng, I. C.; Kuan, C. K.; Chen, Y. T.; Yang, J. Y.; Hsiung, G. Y.; Chen, J. R.

2010-06-01

The pressure distribution is an important aspect of a UHV subsystem in either a storage ring or a front end. The design of the 3-GeV, 400-mA Taiwan Photon Source (TPS) anticipates photon-induced outgassing from bending-magnet and insertion-device radiation. An algorithm to calculate the photon-stimulated desorption (PSD) due to highly energetic radiation from a synchrotron source is presented. Several results using undulator sources such as IU20 are also presented, and the pressure distribution is illustrated.
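Pressure profiles of this kind are commonly computed from a one-dimensional balance between axial conductance, distributed pumping, and outgassing, c·d²P/dz² − s·P + q = 0. The finite-difference sketch below uses that textbook idealization; all numbers (length, conductance, pumping speed, outgassing rate, end pressures) are hypothetical placeholders, not TPS front-end parameters.

```python
import numpy as np

# Illustrative 1D pressure-profile model:  c * d2P/dz2 - s * P + q = 0,
# with c the axial specific conductance, s a distributed pumping speed
# and q the (photon-stimulated) outgassing rate per unit length.
n, L = 101, 10.0            # grid points, chamber length [m]
z = np.linspace(0.0, L, n)
h = z[1] - z[0]
c, s, q = 50.0, 0.5, 1e-7   # arbitrary but dimensionally consistent values

A = np.zeros((n, n))
b = np.full(n, -q)          # interior rows: c*P'' - s*P = -q
for i in range(1, n - 1):
    A[i, i - 1] = c / h**2
    A[i, i] = -2.0 * c / h**2 - s
    A[i, i + 1] = c / h**2
# Lumped pumps at both ends hold the pressure low (Dirichlet BC).
A[0, 0] = A[-1, -1] = 1.0
b[0] = b[-1] = 1e-9
P = np.linalg.solve(A, b)   # pressure profile, peaking mid-chamber
```

The solution is bounded above by the zero-conductance limit q/s, which is a quick sanity check on any such calculation.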

10. A Hydrodynamic Theory for Spatially Inhomogeneous Semiconductor Lasers. 2; Numerical Results

NASA Technical Reports Server (NTRS)

Li, Jianzhong; Ning, C. Z.; Biegel, Bryan A. (Technical Monitor)

2001-01-01

We present numerical results of the diffusion coefficients (DCs) in the coupled diffusion model derived in the preceding paper for a semiconductor quantum well. These include self and mutual DCs in the general two-component case, as well as density- and temperature-related DCs under the single-component approximation. The results are analyzed from the viewpoint of free Fermi gas theory with many-body effects incorporated. We discuss in detail the dependence of these DCs on densities and temperatures in order to identify the different roles played by the free carrier contributions, including carrier statistics and carrier-LO phonon scattering, and many-body corrections, including bandgap renormalization and electron-hole (e-h) scattering. In the general two-component case, it is found that the self- and mutual-diffusion coefficients are determined mainly by the free carrier contributions, but with significant many-body corrections near the critical density. Carrier-LO phonon scattering is dominant at low density, but e-h scattering becomes important in determining their density dependence above the critical electron density. In the single-component case, it is found that many-body effects suppress the density coefficients but enhance the temperature coefficients. The modification is of the order of 10% and reaches a maximum of over 20% for the density coefficients. Overall, temperature elevation enhances the diffusive capability or DCs of carriers linearly, and such an enhancement grows with density. Finally, the complete dataset of various DCs as functions of carrier densities and temperatures provides necessary ingredients for future applications of the model to various spatially inhomogeneous optoelectronic devices.

11. Numerical and experimental results on the spectral wave transfer in finite depth

Benassai, Guido

2016-04-01

Determination of the form of the one-dimensional surface gravity wave spectrum in water of finite depth is important for many scientific and engineering applications. Spectral parameters of deep-water and intermediate-depth waves serve as input data for the design of all coastal structures and for the description of many coastal processes. Moreover, wave spectra are given as an input for the response and seakeeping calculations of high-speed vessels in extreme sea conditions and for reliable calculations of the amount of energy to be extracted by wave energy converters (WECs). Available data on finite-depth spectral form are generally extrapolated from parametric forms applicable in deep water, e.g., JONSWAP (Hasselmann et al., 1973; Mitsuyasu et al., 1980; Kahma, 1981; Donelan et al., 1992; Zakharov, 2005). The present paper contributes to this field through the validation of the offshore energy spectrum transfer from given spectral forms against measured inshore wave heights and spectra. The wave spectra in deep water were recorded offshore Ponza by the Wave Measurement Network (Piscopia et al., 2002). Field regressions between the spectral parameters f_p and the nondimensional energy and the fetch length were evaluated for fetch-limited sea conditions. These regressions gave the values of the spectral parameters for the site of interest. The offshore wave spectra were transferred from the measurement station offshore Ponza to a site located offshore the Gulf of Salerno. The local offshore wave spectra so obtained were transferred to the coastline with the TMA model (Bouws et al., 1985). Finally, the numerical results, in terms of significant wave heights, were compared with the wave data recorded by a meteo-oceanographic station owned by the Naples Hydrographic Office on the coastline of Salerno in 9 m depth. Some considerations about the wave energy potentially extractable by wave energy converters were made and the results discussed.
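For reference, the JONSWAP parametric form mentioned in this record can be evaluated directly. The sketch below uses the classic default parameters (α = 0.0081, γ = 3.3) purely for illustration; in a fetch-limited application such as the one described, α, f_p and γ would instead come from the fitted regressions.

```python
import numpy as np

def jonswap(f, fp, alpha=0.0081, gamma=3.3, g=9.81):
    """JONSWAP frequency spectrum S(f) [m^2 s].

    fp: peak frequency [Hz]; alpha, gamma: Phillips and peak-enhancement
    parameters (classic default values, for illustration only).
    """
    f = np.asarray(f, dtype=float)
    sigma = np.where(f <= fp, 0.07, 0.09)         # standard spectral widths
    r = np.exp(-((f - fp) ** 2) / (2.0 * sigma**2 * fp**2))
    return (alpha * g**2 * (2.0 * np.pi) ** -4 * f ** -5.0
            * np.exp(-1.25 * (fp / f) ** 4) * gamma**r)

f = np.linspace(0.05, 0.5, 451)
S = jonswap(f, fp=0.1)
# Zeroth spectral moment by the trapezoid rule -> significant wave height.
m0 = (0.5 * (S[1:] + S[:-1]) * np.diff(f)).sum()
Hm0 = 4.0 * np.sqrt(m0)
```

A TMA-type transformation to finite depth would multiply S(f) by a depth-dependent factor; the deep-water form above is the starting point.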

12. Numerical results on noise-induced dynamics in the subthreshold regime for thermoacoustic systems

Gupta, Vikrant; Saurabh, Aditya; Paschereit, Christian Oliver; Kabiraj, Lipika

2017-03-01

Thermoacoustic instability is a serious issue in practical combustion systems. Such systems are inherently noisy, and hence the influence of noise on the dynamics of thermoacoustic instability is an aspect of practical importance. The present work is motivated by a recent report on the experimental observation of coherence resonance, or noise-induced coherence with a resonance-like dependence on the noise intensity as the system approaches the stability margin, for a prototypical premixed laminar flame combustor (Kabiraj et al., Phys. Rev. E, 4 (2015)). We numerically investigate representative thermoacoustic models for such noise-induced dynamics. Similar to the experiments, we study variation in system dynamics in response to variations in the noise intensity and in a critical control parameter as the systems approach their stability margins. The qualitative match identified between experimental results and observations in the representative models investigated here confirms that coherence resonance is a feature of thermoacoustic systems. We also extend the experimental results, which were limited to the case of subcritical Hopf bifurcation, to the case of supercritical Hopf bifurcation. We identify that the phenomenon has qualitative differences for the systems undergoing transition via subcritical and supercritical Hopf bifurcations. Two important practical implications are associated with the findings. Firstly, the increase in noise-induced coherence as the system approaches the onset of thermoacoustic instability can be considered as a precursor to the instability. Secondly, the dependence of noise-induced dynamics on the bifurcation type can be utilised to distinguish between subcritical and supercritical bifurcation prior to the onset of the instability.
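The qualitative setup of such subthreshold noise studies can be illustrated with a noisy Hopf normal form integrated by the Euler-Maruyama method. This generic oscillator is a stand-in chosen here for illustration, not one of the specific thermoacoustic models investigated in the paper, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_hopf(mu, sigma, omega=2 * np.pi, dt=1e-3, n=100_000):
    """Euler-Maruyama integration of the supercritical Hopf normal form
    with additive noise; mu < 0 places the system below its stability
    margin, where only noise-induced oscillations are observed."""
    x = y = 0.0
    out = np.empty(n)
    for i in range(n):
        r2 = x * x + y * y
        dx = (mu * x - omega * y - r2 * x) * dt
        dy = (mu * y + omega * x - r2 * y) * dt
        x += dx + sigma * np.sqrt(dt) * rng.standard_normal()
        y += dy + sigma * np.sqrt(dt) * rng.standard_normal()
        out[i] = x
    return out

quiet = noisy_hopf(mu=-1.0, sigma=0.02)   # far below the margin
near = noisy_hopf(mu=-0.05, sigma=0.02)   # close to the margin
# Noise-induced fluctuations grow as the stability margin is approached,
# the precursor behaviour discussed in the abstract.
```

Computing the autocorrelation (or spectrum) of such runs versus noise intensity is how coherence resonance is typically quantified.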

13. Results from CrIS/ATMS Obtained Using an AIRS "Version-6 like" Retrieval Algorithm

NASA Technical Reports Server (NTRS)

Susskind, Joel; Kouvaris, Louis; Iredell, Lena

2015-01-01

We tested and evaluated Version-6.22 AIRS and Version-6.22 CrIS products on a single day, December 4, 2013, and compared results to those derived using AIRS Version-6. AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. All AIRS and CrIS products agree reasonably well with each other. CrIS Version-6.22 T(p) and q(p) results are slightly poorer than AIRS over land, especially under very cloudy conditions. Both AIRS and CrIS Version-6.22 now run at JPL. Our short-term plans are to analyze many common months at JPL in the near future using Version-6.22 or a further improved algorithm to assess the compatibility of AIRS and CrIS monthly mean products and their interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. JPL plans, in collaboration with the Goddard DISC, to reprocess all AIRS data using a still-to-be-finalized Version-7 retrieval algorithm, and to reprocess all recalibrated CrIS/ATMS data using Version-7 as well.

14. Results from CrIS/ATMS Obtained Using an AIRS "Version-6 Like" Retrieval Algorithm

NASA Technical Reports Server (NTRS)

Susskind, Joel; Kouvaris, Louis; Iredell, Lena

2015-01-01

We have tested and evaluated Version-6.22 AIRS and Version-6.22 CrIS products on a single day, December 4, 2013, and compared results to those derived using AIRS Version-6. AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. All AIRS and CrIS products agree reasonably well with each other. CrIS Version-6.22 T(p) and q(p) results are slightly poorer than AIRS under very cloudy conditions. Both AIRS and CrIS Version-6.22 now run at JPL. Our short-term plans are to analyze many common months at JPL in the near future using Version-6.22 or a further improved algorithm to assess the compatibility of AIRS and CrIS monthly mean products and their interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. JPL plans, in collaboration with the Goddard DISC, to reprocess all AIRS data using a still-to-be-finalized Version-7 retrieval algorithm, and to reprocess all recalibrated CrIS/ATMS data using Version-7 as well.

15. Effects of heterogeneity in aquifer permeability and biomass on biodegradation rate calculations - Results from numerical simulations

USGS Publications Warehouse

Scholl, M.A.

2000-01-01

16. Recombination in liquid filled ionisation chambers with multiple charge carrier species: Theoretical and numerical results

Aguiar, P.; González-Castaño, D. M.; Gómez, F.; Pardo-Montero, J.

2014-10-01

Liquid-filled ionisation chambers (LICs) are used in radiotherapy for dosimetry and quality assurance. Volume recombination can be quite important in LICs at moderate dose rates, causing non-linearities in the dose-rate response of these detectors, and needs to be corrected for. This effect is usually described with the Greening and Boag models for continuous and pulsed radiation, respectively. Such models assume that the charge is carried by two different species, positive and negative ions, each with a given mobility. However, LICs operating in non-ultrapure mode can contain different types of electronegative impurities with different mobilities, thus increasing the number of different charge carriers. If this is the case, the Greening and Boag models may no longer be valid and need to be reformulated. In this work we present a theoretical and numerical study of volume recombination in parallel-plate LICs with multiple charge carrier species, extending the Boag and Greening models. Results from a recent publication that reported three different mobilities in an isooctane-filled LIC have been used to study the effect of extra carrier species on recombination. We have found that in pulsed beams the inclusion of extra mobilities does not affect volume recombination much, a behaviour that was expected because Boag's formula for charge collection efficiency does not depend on the mobilities of the charge carriers if the Debye relationship between mobilities and the recombination constant holds. This is not the case in continuous radiation, where the presence of extra charge carrier species significantly affects the amount of volume recombination.
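Boag's charge-collection efficiency for pulsed beams, referred to in this record, is f(u) = ln(1 + u)/u, which depends on the chamber and pulse only through the single parameter u; a small sketch:

```python
import numpy as np

def boag_efficiency(u):
    """Boag charge-collection efficiency for pulsed beams: f = ln(1+u)/u.

    u is proportional to the liberated charge density per pulse and the
    volume recombination constant.  Because u involves the recombination
    constant rather than the individual mobilities, f is insensitive to
    how the total mobility splits among carrier species when the Debye
    relation holds -- consistent with the pulsed-beam finding above.
    """
    u = np.asarray(u, dtype=float)
    return np.where(u > 1e-12, np.log1p(u) / np.maximum(u, 1e-12), 1.0)

u = np.array([0.01, 0.1, 1.0, 10.0])
f = boag_efficiency(u)   # efficiency falls monotonically with pulse charge
```

In the limit u → 0 the efficiency tends to 1 (full collection), which is a convenient check of any implementation.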

17. Mars Entry Atmospheric Data System Trajectory Reconstruction Algorithms and Flight Results

NASA Technical Reports Server (NTRS)

Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark; Shidner, Jeremy; Munk, Michelle

2013-01-01

The Mars Entry Atmospheric Data System is a part of the Mars Science Laboratory Entry, Descent, and Landing Instrumentation project. These sensors are a system of seven pressure transducers linked to ports on the entry vehicle forebody to record the pressure distribution during atmospheric entry. The measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions; specifically, angle of attack, angle of sideslip, dynamic pressure, Mach number, and freestream atmospheric properties are reconstructed from the measured pressures. These data allow the aerodynamics to be decoupled from the assumed atmospheric properties, enabling enhanced trajectory reconstruction and performance analysis as well as an aerodynamic reconstruction, which has not been possible in past Mars entry reconstructions. This paper provides details of the data processing algorithms that are utilized for this purpose. The algorithms include two approaches that have commonly been utilized in past planetary entry trajectory reconstruction, and a new approach for this application that makes use of the pressure measurements. The paper describes assessments of data quality and preprocessing, and results of the flight data reduction from atmospheric entry, which occurred on August 5th, 2012.
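The idea of fitting modeled surface pressures to port measurements can be illustrated with a deliberately simplified 2D sketch: a cos²-type (Newtonian-like) pressure law, five hypothetical port angles, and a brute-force search over angle of attack with a linear least-squares fit for the remaining parameters. None of this reflects the actual MEADS port layout or flight algorithms.

```python
import numpy as np

# Hypothetical 2D sketch of pressure-based state estimation: each port
# sees p_i = p_inf + q * cos^2(theta_i - alpha) under a simple Newtonian
# pressure law.  Port angles and numbers are illustrative assumptions.
theta = np.deg2rad([-40.0, -20.0, 0.0, 20.0, 40.0])  # port locations
alpha_true, q_true, p_inf = np.deg2rad(4.0), 6000.0, 40.0
p_meas = p_inf + q_true * np.cos(theta - alpha_true) ** 2

best = None
for a in np.deg2rad(np.linspace(-10, 10, 2001)):     # search over alpha
    # For a candidate alpha the model is linear in (p_inf, q).
    basis = np.column_stack([np.ones_like(theta), np.cos(theta - a) ** 2])
    coef = np.linalg.lstsq(basis, p_meas, rcond=None)[0]
    err = np.linalg.norm(basis @ coef - p_meas)
    if best is None or err < best[0]:
        best = (err, a, coef)
_, alpha_hat, (p_inf_hat, q_hat) = best
```

A flight implementation would replace the grid search with a nonlinear estimator and use calibrated pressure models, but the separation into a linear part (pressures) and a nonlinear part (flow angles) is the same.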

18. Passification based simple adaptive control of quadrotor attitude: Algorithms and testbed results

Tomashevich, Stanislav; Belyavskyi, Andrey; Andrievsky, Boris

2017-01-01

In this paper, the results of the Passification Method with the Implicit Reference Model (IRM) approach are applied to the design of a simple adaptive controller for quadrotor attitude. The IRM design technique makes it possible to relax the matching condition known for habitual MRAC systems and leads to simple adaptive controllers, ensuring fast tuning of the controller gains and high robustness with respect to nonlinearities in the control loop, external disturbances, and unmodeled plant dynamics. For experimental evaluation of the adaptive system's performance, a 2-DOF laboratory setup has been created. The testbed allows new control algorithms to be tested safely in a small laboratory space and changes to be made promptly in case of failure. The testing results for simple adaptive control of quadrotor attitude are presented, demonstrating the efficacy of the applied method. The experiments demonstrate good performance quality and a high adaptation rate of the simple adaptive control system.

19. Modeling the Fracturing of Rock by Fluid Injection - Comparison of Numerical and Experimental Results

Heinze, Thomas; Galvan, Boris; Miller, Stephen

2013-04-01

Fluid-rock interactions are mechanically fundamental to many earth processes, including fault zones and hydrothermal/volcanic systems, and to future green energy solutions such as enhanced geothermal systems and carbon capture and storage (CCS). Modeling these processes is challenging because of the strong coupling between rock fracture evolution and the consequent large changes in the hydraulic properties of the system. In this talk, we present results of a numerical model that includes poro-elastic plastic rheology (with hardening, softening, and damage), coupled to a non-linear diffusion model for fluid pressure propagation and two-phase fluid flow. Our plane strain model is based on the poro-elastic plastic behavior of porous rock and is advanced with hardening, softening and damage using the Mohr-Coulomb failure criterion. The effective stress model of Biot (1944) is used for coupling the pore pressure and the rock behavior. Frictional hardening and cohesion softening are introduced following Vermeer and de Borst (1984), with the angle of internal friction and the cohesion as functions of the principal strain rates. The scalar damage coefficient is assumed to be a linear function of the hardening parameter. Fluid injection is modeled as a two-phase mixture of water and air using the Richards equation. The theoretical model is solved using finite differences on a staggered grid. The model is benchmarked against laboratory-scale experiments in which fluid is injected from below into a critically stressed, dry sandstone (Stanchits et al. 2011). We simulate three experiments: a) the failure of a dry specimen due to biaxial compressive loading, b) the propagation of a low-pressure fluid front induced from the bottom in a critically stressed specimen, and c) the failure of a critically stressed specimen due to a high-pressure fluid intrusion. Comparison of model results with the fluid injection experiments shows that the model captures most of the experimental

20. Water-waves on linear shear currents. A comparison of experimental and numerical results.

Simon, Bruno; Seez, William; Touboul, Julien; Rey, Vincent; Abid, Malek; Kharif, Christian

2016-04-01

Propagation of water waves can be described under uniformly sheared current conditions. Indeed, some mathematical simplifications remain applicable to the study of waves whether there is no current or a linearly sheared current. However, the widespread use of mathematical wave theories including shear has rarely been backed by experimental studies of such flows. New experimental and numerical methods were both recently developed to study wave-current interactions for constant vorticity. On one hand, the numerical code can simulate, in two dimensions, arbitrary non-linear waves. On the other hand, the experimental methods can be used to generate waves with various shear conditions. Taking advantage of the simplicity of the experimental protocol and the versatility of the numerical code, comparisons between experimental and numerical data are discussed and compared with linear theory for validation of the methods. ACKNOWLEDGEMENTS: The DGA (Direction Générale de l'Armement, France) is acknowledged for its financial support through ANR grant No. ANR-13-ASTR-0007.

1. A Super-Resolution Algorithm for Enhancement of FLASH LIDAR Data: Flight Test Results

NASA Technical Reports Server (NTRS)

Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert

2014-01-01

This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent flash lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: the Moon, Mars, and asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high-resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m × 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed using independent measurements for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information; namely, the 6-degree-of-freedom state vector of the instrument as a function of time was restored from the super-resolution data. The results of the comparisons show that the super-resolution method can construct high-quality DEMs and allows for identifying hazards such as rocks and craters in accordance with ALHAT requirements.
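The core idea of super-resolution from multiple offset frames can be illustrated with a minimal 2D shift-and-add sketch: low-resolution frames taken at known sub-pixel offsets are registered and accumulated on a finer grid. This toy example (random scene, known integer fine-pixel shifts) only gestures at the flash-lidar technique, which additionally estimates the sensor's 6-DOF state from the data.

```python
import numpy as np

rng = np.random.default_rng(1)
up = 4                                    # upsampling factor
truth = rng.random((32, 32))              # "high-res" scene (arbitrary)

def lowres_frame(shift):
    """Shift the scene by `shift` fine pixels, then box-average up x up."""
    s = np.roll(truth, shift, axis=(0, 1))
    return s.reshape(8, up, 8, up).mean(axis=(1, 3))

# One frame per sub-pixel offset; register (un-shift) and accumulate.
shifts = [(i, j) for i in range(up) for j in range(up)]
acc = np.zeros_like(truth)
for dy, dx in shifts:
    frame = lowres_frame((dy, dx))
    upsampled = np.kron(frame, np.ones((up, up)))       # replicate blocks
    acc += np.roll(upsampled, (-dy, -dx), axis=(0, 1))  # undo the shift
recon = acc / len(shifts)
```

Unlike any single block-replicated frame, the accumulated result varies from fine pixel to fine pixel, which is the resolution gain; a full algorithm would follow this with deconvolution of the residual box blur.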

2. Chaotic structures of nonlinear magnetic fields. I - Theory. II - Numerical results

NASA Technical Reports Server (NTRS)

Lee, Nam C.; Parks, George K.

1992-01-01

A study of the evolutionary properties of nonlinear magnetic fields in flowing MHD plasmas is presented to illustrate that nonlinear magnetic fields may involve chaotic dynamics. It is shown how a suitable transformation of the coupled equations leads to Duffing's form, suggesting that the behavior of the general solution can also be chaotic. Numerical solutions of the nonlinear magnetic field equations that have been cast in the form of Duffing's equation are presented.
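The forced Duffing equation referred to in this record, x'' + δx' + αx + βx³ = γ cos(ωt), is straightforward to integrate numerically; a fourth-order Runge-Kutta sketch follows. The parameter values are a commonly used illustrative set for the double-well oscillator, not values derived from the MHD field equations in the abstract.

```python
import numpy as np

def duffing_rhs(t, s, delta=0.3, alpha=-1.0, beta=1.0, gamma=0.5, omega=1.2):
    """Right-hand side of the forced Duffing oscillator written as a
    first-order system in (x, v)."""
    x, v = s
    return np.array([v, -delta * v - alpha * x - beta * x**3
                     + gamma * np.cos(omega * t)])

def rk4(f, s, t, dt, n):
    """Classical fourth-order Runge-Kutta integration, n steps of size dt."""
    out = np.empty((n, 2))
    for i in range(n):
        k1 = f(t, s)
        k2 = f(t + dt / 2, s + dt / 2 * k1)
        k3 = f(t + dt / 2, s + dt / 2 * k2)
        k4 = f(t + dt, s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        out[i] = s
    return out

traj = rk4(duffing_rhs, np.array([0.1, 0.0]), 0.0, 0.01, 20000)
```

Diagnostics such as Poincaré sections or Lyapunov exponents computed from such trajectories are the usual way to exhibit the chaotic behaviour the abstract discusses.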

3. Results from CrIS/ATMS Obtained Using an "AIRS Version-6 Like" Retrieval Algorithm

Susskind, J.

2015-12-01

A main objective of AIRS/AMSU on EOS is to provide accurate sounding products that are used to generate climate data sets. Suomi NPP carries CrIS/ATMS that were designed as follow-ons to AIRS/AMSU. Our objective is to generate a long term climate data set of products derived from CrIS/ATMS to serve as a continuation of the AIRS/AMSU products. The Goddard DISC has generated AIRS/AMSU retrieval products, extending from September 2002 through real time, using the AIRS Science Team Version-6 retrieval algorithm. Level-3 gridded monthly mean values of these products, generated using AIRS Version-6, form a state-of-the-art multi-year set of Climate Data Records (CDRs), which is expected to continue through 2022 and possibly beyond, as the AIRS instrument is extremely stable. The goal of this research is to develop and implement a CrIS/ATMS retrieval system to generate CDRs that are compatible with, and are of comparable quality to, those generated operationally using AIRS/AMSU data. The AIRS Science Team has made considerable improvements in AIRS Science Team retrieval methodology and is working on the development of an improved AIRS Science Team Version-7 retrieval methodology to be used to reprocess all AIRS data in the relatively near future. Research is underway by Dr. Susskind and co-workers at the NASA GSFC Sounder Research Team (SRT) towards the finalization of the AIRS Version-7 retrieval algorithm, the current version of which is called SRT AIRS Version-6.22. Dr. Susskind and co-workers have developed analogous retrieval methodology for analysis of CrIS/ATMS data, called SRT CrIS Version-6.22. Results will be presented that show that AIRS and CrIS products derived using a common further improved retrieval algorithm agree closely with each other and are both superior to AIRS Version 6. The goal of the AIRS Science Team is to continue to improve both AIRS and CrIS retrieval products and then use the improved retrieval methodology for the processing of past and

4. Experimental and numerical results on a shear layer excited by a sound pulse

NASA Technical Reports Server (NTRS)

Maestrello, L.; Bayliss, A.; Turkel, E.

1979-01-01

The behavior of a sound pulse in a jet was investigated. It is verified that the far-field acoustic power increases with flow velocity in the low and medium frequency range, while an attenuation at higher frequencies is observed experimentally. Numerically, this increase is found to be due primarily to the interactions between the mean vorticity and the fluctuating velocities. Spectral decomposition of the real-time data indicates that the power increase occurs in the low and middle frequency range, where the local instability waves have the largest spatial growth rate. The connection between this amplification and the local instability waves is discussed.

5. Ponderomotive stabilization of flute modes in mirrors: Feedback control and numerical results

NASA Technical Reports Server (NTRS)

Similon, P. L.

1987-01-01

Ponderomotive stabilization of rigid plasma flute modes is numerically investigated by use of a variational principle, for a simple geometry, without eikonal approximation. While the near field of the studied antenna can be stabilizing, the far field makes only a small contribution because of large cancellations by quasi-mode-coupling terms. The field energy required for stabilization is evaluated and is a non-negligible fraction of the plasma thermal energy. A new antenna design is proposed, and feedback stabilization is investigated; together, these drastically reduce the power requirements.

6. White-light Interferometry using a Channeled Spectrum: II. Calibration Methods, Numerical and Experimental Results

NASA Technical Reports Server (NTRS)

Zhai, Chengxing; Milman, Mark H.; Regehr, Martin W.; Best, Paul K.

2007-01-01

In the companion paper, [Appl. Opt. 46, 5853 (2007)] a highly accurate white light interference model was developed from just a few key parameters characterized in terms of various moments of the source and instrument transmission function. We develop and implement the end-to-end process of calibrating these moment parameters together with the differential dispersion of the instrument and applying them to the algorithms developed in the companion paper. The calibration procedure developed herein is based on first obtaining the standard monochromatic parameters at the pixel level: wavenumber, phase, intensity, and visibility parameters via a nonlinear least-squares procedure that exploits the structure of the model. The pixel level parameters are then combined to obtain the required 'global' moment and dispersion parameters. The process is applied to both simulated scenarios of astrometric observations and to data from the microarcsecond metrology testbed (MAM), an interferometer testbed that has played a prominent role in the development of this technology.

7. Estimation of geopotential from satellite-to-satellite range rate data: Numerical results

NASA Technical Reports Server (NTRS)

Thobe, Glenn E.; Bose, Sam C.

1987-01-01

A technique for high-resolution geopotential field estimation by recovering the harmonic coefficients from satellite-to-satellite range rate data is presented and tested against both a controlled analytical simulation of a one-day satellite mission (maximum degree and order 8) and then against a Cowell method simulation of a 32-day mission (maximum degree and order 180). Innovations include: (1) a new frequency-domain observation equation based on kinetic energy perturbations which avoids much of the complication of the usual Keplerian element perturbation approaches; (2) a new method for computing the normalized inclination functions which unlike previous methods is both efficient and numerically stable even for large harmonic degrees and orders; (3) the application of a mass storage FFT to the entire mission range rate history; (4) the exploitation of newly discovered symmetries in the block diagonal observation matrix which reduce each block to the product of (a) a real diagonal matrix factor, (b) a real trapezoidal factor with half the number of rows as before, and (c) a complex diagonal factor; (5) a block-by-block least-squares solution of the observation equation by means of a custom-designed Givens orthogonal rotation method which is both numerically stable and tailored to the trapezoidal matrix structure for fast execution.
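Item (5) above refers to a block-by-block least-squares solution via Givens orthogonal rotations. A generic dense sketch of that idea follows; the paper's custom variant exploits the trapezoidal block structure for speed, which is not reproduced here.

```python
import numpy as np

def givens_qr_solve(A, b):
    """Least-squares solve of A x ~ b by zeroing subdiagonal entries with
    Givens rotations (generic dense version of the numerically stable
    orthogonal-rotation approach mentioned in the abstract)."""
    R = A.astype(float).copy()
    y = b.astype(float).copy()
    m, n = R.shape
    for j in range(n):
        for i in range(m - 1, j, -1):       # zero R[i, j] from the bottom up
            a, c2 = R[i-1, j], R[i, j]
            r = np.hypot(a, c2)
            if r == 0.0:
                continue
            c, s = a/r, c2/r
            G = np.array([[c, s], [-s, c]])  # 2x2 Givens rotation
            R[i-1:i+1, j:] = G @ R[i-1:i+1, j:]
            y[i-1:i+1] = G @ y[i-1:i+1]
    # Back-substitute on the upper-triangular leading block.
    return np.linalg.solve(R[:n, :n], y[:n])

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))
x_true = np.array([1.0, -2.0, 0.5])
x = givens_qr_solve(A, A @ x_true)
```

Because each rotation is orthogonal, the method avoids forming the normal equations and preserves the conditioning of the observation matrix, which is why it suits large block-diagonal systems like the one described.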

8. Elasticity of mechanical oscillators in nonequilibrium steady states: Experimental, numerical, and theoretical results

Conti, Livia; De Gregorio, Paolo; Bonaldi, Michele; Borrielli, Antonio; Crivellari, Michele; Karapetyan, Gagik; Poli, Charles; Serra, Enrico; Thakur, Ram-Krishna; Rondoni, Lamberto

2012-06-01

We study experimentally, numerically, and theoretically the elastic response of mechanical resonators along which the temperature is not uniform, as a consequence of the onset of steady-state thermal gradients. Two experimental setups and designs are employed, both using low-loss materials. In both cases, we monitor the resonance frequencies of specific modes of vibration, as they vary along with variations of temperatures and of temperature differences. In one case, we consider the first longitudinal mode of vibration of an aluminum alloy resonator; in the other case, we consider the antisymmetric torsion modes of a silicon resonator. By defining the average temperature as the volume-weighted mean of the temperatures of the respective elastic sections, we find that the elastic response of an object depends solely on this average temperature, regardless of whether a thermal gradient exists and, up to a 10% imbalance, regardless of its magnitude. The numerical model employs a chain of anharmonic oscillators with first- and second-neighbor interactions and temperature profiles satisfying Fourier's law to a good degree. Its analysis confirms, for the most part, the experimental findings, which are explained theoretically from a statistical mechanics perspective using a loose notion of local equilibrium.
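The numerical model described above is a chain of anharmonic oscillators with first- and second-neighbor interactions. A minimal sketch of such a chain's force law, assuming an FPU-beta-style quartic anharmonicity and purely illustrative coupling constants (the abstract does not specify the potential or the thermostats), might look like:

```python
import numpy as np

N = 32                           # number of oscillators
k1, k2, beta = 1.0, 0.25, 0.5    # NN coupling, NNN coupling, quartic coefficient (illustrative)

def forces(x):
    """Force on each oscillator from bond potential V(s) = k/2*s**2 + beta/4*s**4,
    summed over first- and second-neighbor bonds (free-end chain)."""
    f = np.zeros_like(x)
    for j, k in ((1, k1), (2, k2)):      # first and second neighbors
        stretch = x[j:] - x[:-j]
        pair = k*stretch + beta*stretch**3   # -dV/ds for each bond
        f[:-j] += pair                   # bond pulls left particle forward...
        f[j:] -= pair                    # ...and the right particle back (Newton's third law)
    return f

x0 = np.linspace(0.0, 1.0, N)**2         # a smooth nonuniform configuration
f0 = forces(x0)
```

Coupling the chain's two ends to heat baths at different temperatures would set up the steady-state gradient studied in the paper; only the conservative force evaluation is sketched here.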

9. Interaction of a mantle plume and a segmented mid-ocean ridge: Results from numerical modeling

Georgen, Jennifer E.

2014-04-01

Previous investigations have proposed that changes in lithospheric thickness across a transform fault, due to the juxtaposition of seafloor of different ages, can impede lateral dispersion of an on-ridge mantle plume. The application of this “transform damming” mechanism has been considered for several plume-ridge systems, including the Reunion hotspot and the Central Indian Ridge, the Amsterdam-St. Paul hotspot and the Southeast Indian Ridge, the Cobb hotspot and the Juan de Fuca Ridge, the Iceland hotspot and the Kolbeinsey Ridge, the Afar plume and the ridges of the Gulf of Aden, and the Marion/Crozet hotspot and the Southwest Indian Ridge. This study explores the geodynamics of the transform damming mechanism using a three-dimensional finite element numerical model. The model solves the coupled steady-state equations for conservation of mass, momentum, and energy, including thermal buoyancy and viscosity that is dependent on pressure and temperature. The plume is introduced as a circular thermal anomaly on the bottom boundary of the numerical domain. The center of the plume conduit is located directly beneath a spreading segment, at a distance of 200 km (measured in the along-axis direction) from a transform offset with length 100 km. Half-spreading rate is 0.5 cm/yr. In a series of numerical experiments, the buoyancy flux of the modeled plume is progressively increased to investigate the effects on the temperature and velocity structure of the upper mantle in the vicinity of the transform. Unlike earlier studies, which suggest that a transform always acts to decrease the along-axis extent of plume signature, these models imply that the effect of a transform on plume dispersion may be complex. Under certain ranges of plume flux modeled in this study, the region of the upper mantle undergoing along-axis flow directed away from the plume could be enhanced by the three-dimensional velocity and temperature structure associated with ridge

10. The evolution of misoscale circulations in a downburst-producing storm and comparison to numerical results

NASA Technical Reports Server (NTRS)

Kessinger, C. J.; Wilson, J. W.; Weisman, M.; Klemp, J.

1984-01-01

Data from three NCAR radars are used in both single and dual Doppler analyses to trace the evolution of a June 30, 1982 Colorado convective storm containing downburst-type winds and strong vortices 1-2 km in diameter. The analyses show that a series of small circulations formed along a persistent cyclonic shear boundary; at times as many as three misocyclones were present with vertical vorticity values as large as 0.1/s using a 0.25 km grid interval. The strength of the circulations suggests the possibility of accompanying tornadoes or funnels, although none were observed. Dual-Doppler analyses show that strong, small-scale downdrafts develop in close proximity to the misocyclones. A midlevel mesocyclone formed in the same general region of the storm where the misocyclones later developed. The observations are compared with numerical simulations from a three-dimensional cloud model initialized with sounding data from the same day.

11. The spectroscopic search for the trace aerosols in the planetary atmospheres - the results of numerical simulations

Blecka, Maria I.

2010-05-01

Passive remote spectrometric methods are important in examining the atmospheres of planets. Radiance spectra inform us about the thermodynamical parameters and composition of atmospheres and surfaces, and spectral techniques can be useful for detecting trace aerosols, such as biological substances (if present), in planetary environments. We discuss here some aspects of the spectroscopic search for aerosols and dust in planetary atmospheres, including the possibility of detecting and identifying biological aerosols with a passive infrared spectrometer in an open-air environment. We present numerically simulated spectroscopic observations of the Earth's atmosphere based on radiative transfer theory. Laboratory measurements of the transmittance of various kinds of aerosols, pollens, and bacteria were used in the modeling.

12. Three-Dimensional Numerical Simulations of Equatorial Spread F: Results and Observations in the Pacific Sector

NASA Technical Reports Server (NTRS)

Aveiro, H. C.; Hysell, D. L.; Caton, R. G.; Groves, K. M.; Klenzing, J.; Pfaff, R. F.; Stoneback, R.; Heelis, R. A.

2012-01-01

A three-dimensional numerical simulation of plasma density irregularities in the postsunset equatorial F region ionosphere leading to equatorial spread F (ESF) is described. The simulation evolves under realistic background conditions including bottomside plasma shear flow and vertical current. It also incorporates C/NOFS satellite data which partially specify the forcing. A combination of generalized Rayleigh-Taylor instability (GRT) and collisional shear instability (CSI) produces growing waveforms with key features that agree with C/NOFS satellite and ALTAIR radar observations in the Pacific sector, including features such as gross morphology and rates of development. The transient response of CSI is consistent with the observation of bottomside waves with wavelengths close to 30 km, whereas the steady state behavior of the combined instability can account for the 100+ km wavelength waves that predominate in the F region.

13. Estimation of catchment-scale evapotranspiration from baseflow recession data: Numerical model and practical application results

Szilagyi, Jozsef; Gribovszki, Zoltan; Kalicz, Peter

2007-03-01

By applying a nonlinear reservoir approach for groundwater drainage, catchment-scale evapotranspiration (ET) during flow recessions can be expressed with the help of the lumped version of the water balance equation for the catchment. The attractiveness of the approach is that ET, in theory, can be obtained solely from observed flow values, for which relatively abundant and long records are available. A 2D finite element numerical model of subsurface flow in the unsaturated and saturated zones, capable of simulating moisture removal by vegetation, was first successfully employed to verify the water balance approach under ideal conditions. Subsequent practical applications over four catchments with widely varying climatic conditions, however, showed large disparities in comparison with monthly ET estimates from Morton's WREVAP model.
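The water-balance idea above can be sketched as follows: for a nonlinear reservoir Q = a*S**b, recession flow obeys dS/dt = -Q - ET, so ET can be inferred from the observed flow series alone. The coefficients, the constant ET, and the synthetic check below are hypothetical illustrations, not values or methods from the paper.

```python
import numpy as np

# Nonlinear-reservoir drainage Q = a*S**b  =>  S = (Q/a)**(1/b).
# Lumped recession water balance dS/dt = -Q - ET  =>  ET = -(dS/dQ)*(dQ/dt) - Q.
a, b = 0.01, 1.5          # hypothetical reservoir coefficients

def et_from_recession(q, dt):
    """Estimate ET from an observed recession flow series q (storage units per step)."""
    ds_dq = q**(1.0/b - 1.0) / (b * a**(1.0/b))   # dS/dQ for S = (Q/a)**(1/b)
    return -ds_dq * np.gradient(q, dt) - q

# Synthetic check: generate a recession with known constant ET and recover it.
ET_true, dt = 0.2, 0.001
S = 100.0
q_series = []
for _ in range(2000):
    Q = a * S**b
    q_series.append(Q)
    S -= (Q + ET_true) * dt        # forward-Euler storage depletion
q_series = np.array(q_series)
est = et_from_recession(q_series, dt)
```

The interior of `est` recovers the imposed ET closely; in practice, as the abstract notes, noisy observed hydrographs and non-ideal reservoirs make the real-world estimates far less clean.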

14. Distribution of Steps with Finite-Range Interactions: Analytic Approximations and Numerical Results

GonzáLez, Diego Luis; Jaramillo, Diego Felipe; TéLlez, Gabriel; Einstein, T. L.

2013-03-01

While most Monte Carlo simulations assume that only nearest-neighbor steps interact elastically, most analytic frameworks (especially the generalized Wigner distribution) posit that each step elastically repels all others. In addition to the elastic repulsions, we allow for possible surface-state-mediated interactions. We investigate analytically and numerically how next-nearest-neighbor (NNN) interactions and, more generally, interactions out to the qth nearest neighbor alter the form of the terrace-width distribution and of the pair correlation functions (i.e., the sum over nth-neighbor distribution functions), which we investigated recently [2]. For physically plausible interactions, we find modest changes when NNN interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.

15. Preliminary Analysis of a Breadth-First Parsing Algorithm: Theoretical and Experimental Results.

DTIC Science & Technology

1981-06-01

Synthetic sequences were also constructed that generate products of two Catalan numbers and the Fibonacci [20] numbers. Key words: parsing, chart parsing, natural language processing, Earley's algorithm.

16. Sound absorption of porous substrates covered by foliage: experimental results and numerical predictions.

PubMed

Ding, Lei; Van Renterghem, Timothy; Botteldooren, Dick; Horoshenkov, Kirill; Khan, Amir

2013-12-01

The influence of loose plant leaves on the acoustic absorption of a porous substrate is experimentally and numerically studied. Such systems are typical of vegetative walls, where the substrate has strong acoustical absorbing properties. Both experiments in an impedance tube and theoretical predictions show that when a leaf is placed in front of such a porous substrate, its absorption characteristics markedly change (for normal incident sound). Typically, the low-frequency absorption coefficient (below 250 Hz) is unaffected, the middle-frequency absorption coefficient (500-2000 Hz) increases, and the absorption at higher frequencies decreases. The influence of leaves becomes most pronounced when the substrate has a low mass density. A combination of Biot's elastic frame porous model, viscous damping in the leaf boundary layers, and plate vibration theory is implemented via a finite-difference time-domain model, which is able to predict accurately the absorption spectrum of a leaf above a porous substrate system. The change in the absorption spectrum caused by the leaf vibration can be modeled reasonably well assuming the leaf and porous substrate properties are uniform.

17. Mode analysis for a microwave driven plasma discharge: A comparison between analytical and numerical results

Szeremley, Daniel; Mussenbrock, Thomas; Brinkmann, Ralf Peter; Zimmermanns, Marc; Rolfes, Ilona; Eremin, Denis; Ruhr-University Bochum, Theoretical Electrical Engineering Team; Ruhr-University Bochum, Institute of Microwave Systems Team

2015-09-01

Recent years have seen a growing market demand for bottles made of polyethylene terephthalate (PET). Fast and efficient sterilization processes as well as barrier coatings to decrease gas permeation are therefore required. A specialized microwave plasma source, referred to as the plasmaline, has been developed to allow thin films of, e.g., silicon oxide to be deposited on the inner surface of such PET bottles. The plasmaline is a coaxial waveguide combined with a gas inlet which is inserted into the empty bottle and initiates a reactive plasma. To optimize and control the different surface processes, it is essential to fully understand the microwave power coupling to the plasma, the related heating of electrons inside the bottle, and thus the electromagnetic wave propagation along the plasmaline. In this contribution, we present a detailed dispersion analysis based on a numerical approach. We study whether and how guided-wave modes propagate under different conditions. The authors gratefully acknowledge the financial support of the German Research Foundation (DFG) within the framework of the collaborative research centre TRR87.

18. Displacement-Based Seismic Design Procedure for Framed Buildings with Dissipative Braces Part II: Numerical Results

Mazza, Fabio; Vulcano, Alfonso

2008-07-01

For a widespread application of dissipative braces to protect framed buildings against seismic loads, practical and reliable design procedures are needed. In this paper a design procedure based on the Direct Displacement-Based Design approach is adopted, assuming the elastic lateral storey-stiffness of the damped braces proportional to that of the unbraced frame. To check the effectiveness of the design procedure, presented in an associated paper, a six-storey reinforced concrete plane frame, representative of a medium-rise symmetric framed building, is considered as the primary test structure; this structure, designed for a medium-risk region, is supposed to be retrofitted as for a high-risk region by insertion of diagonal braces equipped with hysteretic dampers. A numerical investigation is carried out to study the nonlinear static and dynamic responses of the primary and the damped braced test structures, using the step-by-step procedures described in the associated paper mentioned above; the behaviour of frame members and hysteretic dampers is idealized by bilinear models. Real and artificial accelerograms, matching the EC8 response spectrum for a medium soil class, are considered for the dynamic analyses.

19. Preliminary Results from Numerical Experiments on the Summer 1980 Heat Wave and Drought

NASA Technical Reports Server (NTRS)

Wolfson, N.; Atlas, R.; Sud, Y. C.

1985-01-01

During the summer of 1980, a prolonged heat wave and drought affected the United States. A preliminary set of experiments has been conducted to study the effect of varying boundary conditions on the GLA model simulation of the heat wave. Five 10-day numerical integrations with three different specifications of boundary conditions were carried out: a control experiment which utilized climatological boundary conditions; an SST experiment which utilized summer 1980 sea-surface temperatures in the North Pacific but climatological values elsewhere; and a Soil Moisture experiment which utilized the Mintz-Serafini values for summer 1980. The starting dates for the five forecasts were 11 June, 7 July, 21 July, 22 August, and 6 September of 1980. These dates were specifically chosen as days when a heat wave was already established, in order to investigate the effect of soil moisture or North Pacific sea-surface temperatures on the model's ability to maintain the heat wave pattern. The experiments were evaluated in terms of the heat wave index for the South Plains, North Plains, Great Plains, and the entire U.S. In addition, a subjective comparison of map patterns was performed.

20. Flight test results of a vector-based failure detection and isolation algorithm for a redundant strapdown inertial measurement unit

NASA Technical Reports Server (NTRS)

Morrell, F. R.; Bailey, M. L.; Motyka, P. R.

1988-01-01

Flight test results of a vector-based fault-tolerant algorithm for a redundant strapdown inertial measurement unit are presented. Because the inertial sensors provide flight-critical information for flight control and navigation, failure detection and isolation is developed in terms of a multi-level structure. Threshold compensation techniques for gyros and accelerometers, developed to enhance the sensitivity of the failure detection process to low-level failures, are presented. Four flight tests, conducted in a commercial-transport-type environment, were used to determine the ability of the failure detection and isolation algorithm to detect failure signals such as hard-over, null, or bias-shift failures. The algorithm provided timely detection and correct isolation of flight-control and low-level failures. The flight tests of the vector-based algorithm demonstrated its capability to provide false-alarm-free, dual fail-operational performance for the skewed array of inertial sensors.
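The abstract does not spell out the vector-based algorithm itself; a common textbook approach for skewed redundant sensor arrays is parity-space failure detection, sketched here with a hypothetical four-sensor geometry and threshold (not the flight algorithm's multi-level structure or compensation logic):

```python
import numpy as np

# H maps the 3-axis quantity being sensed onto n > 3 skewed sensor axes.
# This 4-sensor geometry and the detection threshold are purely illustrative.
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.577, 0.577, 0.577]])

# Rows of V span the left null space of H, so V @ H = 0: the parity vector
# V @ m is insensitive to the true state and responds only to sensor errors.
u, s, vt = np.linalg.svd(H)
V = u[:, 3:].T                     # (n-3) x n parity projection

def detect(meas, threshold=0.1):
    """Return (failure flag, parity vector) for one set of sensor outputs."""
    p = V @ meas
    return np.linalg.norm(p) > threshold, p

omega = np.array([0.1, -0.2, 0.05])                      # example true angular rate
clean = H @ omega                                        # fault-free measurements
failed, _ = detect(clean)
hard_over, _ = detect(clean + np.array([0, 0, 1.0, 0]))  # large bias on sensor 3
```

Isolation can then proceed by comparing the parity vector's direction against each sensor's characteristic fault signature (the columns of V), which is the usual next step in parity-space schemes.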

1. A comparison of two position estimate algorithms that use ILS localizer and DME information. Simulation and flight test results

NASA Technical Reports Server (NTRS)

Knox, C. E.; Vicroy, D. D.; Scanlon, C.

1984-01-01

Simulation and flight tests were conducted to compare the accuracy of two algorithms designed to compute a position estimate with an airborne navigation computer. Both algorithms used ILS localizer and DME radio signals to compute a position difference vector to be used as an input to the navigation computer's position estimate filter. The results of these tests show that position estimate accuracy and the response to artificially induced errors are improved when the position estimate is computed by an algorithm that geometrically combines the DME and ILS localizer information into a single component of error, rather than by an algorithm that produces two independent components of error, one from the DME input and the other from the ILS localizer input.
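As a purely illustrative sketch of the geometric idea (the paper's actual equations are not given in the abstract, and the 2-D frame, station locations, and all numbers below are hypothetical), intersecting the localizer radial with the DME range arc yields a single measured position, and hence one combined error vector for the filter:

```python
import numpy as np

# 2-D runway frame with the localizer antenna at the origin; the DME
# transponder location is a made-up example.
dme_pos = np.array([1000.0, 200.0])

def position_from_ils_dme(loc_bearing, dme_range):
    """Intersect the localizer radial with the DME range arc to obtain a single
    measured position (one combined error component against the filter estimate)."""
    u = np.array([np.cos(loc_bearing), np.sin(loc_bearing)])  # unit vector along the radial
    # Solve |t*u - dme_pos| = dme_range for the distance t along the radial.
    proj = u @ dme_pos
    disc = proj**2 - (dme_pos @ dme_pos - dme_range**2)
    t = proj + np.sqrt(disc)          # take the far intersection
    return t * u

p_true = np.array([5000.0, 100.0])                # example aircraft position
bearing = np.arctan2(p_true[1], p_true[0])        # ideal localizer measurement
rng_meas = np.linalg.norm(p_true - dme_pos)       # ideal DME measurement
p_meas = position_from_ils_dme(bearing, rng_meas)
error_vec = p_meas - np.array([4990.0, 120.0])    # single error vector vs. a filter estimate
```

The contrast in the abstract is with feeding the range residual and the localizer deviation to the filter as two separate, independently weighted error components.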

2. Reynolds number effects on shock-wave turbulent boundary-layer interactions - A comparison of numerical and experimental results

NASA Technical Reports Server (NTRS)

Horstman, C. C.; Settles, G. S.; Vas, I. E.; Bogdonoff, S. M.; Hung, C. M.

1977-01-01

An experiment is described that tests and guides computations of a shock-wave turbulent boundary-layer interaction flow over a 20-deg compression corner at Mach 2.85. Numerical solutions of the time-averaged Navier-Stokes equations for the entire flow field, employing various turbulence models, are compared with the data. Each model is critically evaluated by comparisons with the details of the experimental data. Experimental results for the extent of upstream pressure influence and separation location are compared with numerical predictions for a wide range of Reynolds numbers and shock-wave strengths.

3. Multi-Country Experience in Delivering a Joint Course on Software Engineering--Numerical Results

ERIC Educational Resources Information Center

Budimac, Zoran; Putnik, Zoran; Ivanovic, Mirjana; Bothe, Klaus; Zdravkova, Katerina; Jakimovski, Boro

2014-01-01

A joint course, created as a result of a project under the auspices of the "Stability Pact of South-Eastern Europe" and DAAD, has been conducted in several Balkan countries: in Novi Sad, Serbia, for the last six years in several different forms, in Skopje, FYR of Macedonia, for two years, for several types of students, and in Tirana,…

4. Full-dimensional quantum calculations of vibrational spectra of six-atom molecules. I. Theory and numerical results

Yu, Hua-Gen

2004-02-01

Two quantum mechanical Hamiltonians have been derived in orthogonal polyspherical coordinates, which can be formed by Jacobi and/or Radau vectors etc., for the study of the vibrational spectra of six-atom molecules. The Hamiltonians are expressed in an explicit Hermitian form in the spatial representation. Their matrix representations are described in both full discrete variable representation (DVR) and mixed DVR/nondirect product finite basis representation (FBR) bases. The two-layer Lanczos iteration algorithm [H.-G. Yu, J. Chem. Phys. 117, 8190 (2002)] is employed to solve the eigenvalue problem of the system. A strategy regarding how to carry out the Hamiltonian-vector products for a high-dimensional problem is discussed. By exploiting the inversion symmetry of molecules, a unitary sequential 1D matrix-vector multiplication algorithm is proposed to perform the action of the Hamiltonian on the wavefunction in a symmetrically adapted DVR or FBR basis in the azimuthal angular variables. An application to the vibrational energy levels of the molecular hydrogen trimer (H2)3 in full dimension (12D) is presented. Results show that the rigid-H2 approximation can underestimate the binding energy of the trimer by 27%. Finally, it is demonstrated that the two-layer Lanczos algorithm is also capable of computing the eigenvectors of the system with minor effort.
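The Lanczos idea of extracting eigenvalues purely from Hamiltonian-vector products can be sketched in a plain single-layer form; the paper's two-layer algorithm and DVR/FBR machinery are not reproduced, and the diagonal toy "Hamiltonian" below is purely illustrative.

```python
import numpy as np

def lanczos_lowest(matvec, n, m=60, seed=0):
    """Estimate the lowest eigenvalue of a symmetric operator given only its
    action matvec(v), using m Lanczos steps with full reorthogonalization."""
    rng = np.random.default_rng(seed)
    V = np.zeros((n, m))
    T = np.zeros((m, m))
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    for k in range(m):
        V[:, k] = v
        w = matvec(v)                            # the only access to the operator
        T[k, k] = v @ w
        w -= V[:, :k+1] @ (V[:, :k+1].T @ w)     # full reorthogonalization
        beta = np.linalg.norm(w)
        if beta < 1e-12 or k == m - 1:
            T = T[:k+1, :k+1]
            break
        T[k, k+1] = T[k+1, k] = beta             # tridiagonal projected matrix
        v = w / beta
    return np.linalg.eigvalsh(T)[0]              # lowest Ritz value

H = np.diag(np.arange(1.0, 101.0))               # toy "Hamiltonian" with known spectrum
e0 = lanczos_lowest(lambda v: H @ v, 100)
```

Because only `matvec` is needed, the operator never has to be stored densely; this is what makes the Hamiltonian-vector product strategy discussed in the abstract viable for high-dimensional vibrational problems.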

5. Results from a limited area mesoscale numerical simulation for 10 April 1979

NASA Technical Reports Server (NTRS)

Kalb, M. W.

1985-01-01

Results are presented from a nine-hour limited area fine mesh (35-km) mesoscale model simulation initialized with SESAME-AVE I radiosonde data for Apr. 10, 1979 at 2100 GMT. Emphasis is on the diagnosis of mesoscale structure in the mass and precipitation fields. Along the Texas/Oklahoma border, independent of the short wave, convective precipitation formed several hours into the simulation and was organized into a narrow band suggestive of the observed April 10 squall line.

6. Numerical simulations of soft and hard turbulence - Preliminary results for two-dimensional convection

NASA Technical Reports Server (NTRS)

Deluca, E. E.; Werne, J.; Rosner, R.; Cattaneo, F.

1990-01-01

Results on the transition from soft to hard turbulence in simulations of two-dimensional Boussinesq convection are reported. The computed probability densities for temperature fluctuations are exponential in form in both soft and hard turbulence, unlike what is observed in experiments. In contrast, a change is obtained in the Nusselt number scaling on Rayleigh number in good agreement with the three-dimensional experiments.

7. Increased heat transfer to elliptical leading edges due to spanwise variations in the freestream momentum: Numerical and experimental results

NASA Technical Reports Server (NTRS)

Rigby, D. L.; Vanfossen, G. J.

1992-01-01

A study of the effect of spanwise variation in momentum on leading edge heat transfer is discussed. Numerical and experimental results are presented for both a circular leading edge and a 3:1 elliptical leading edge. Reynolds numbers in the range of 10,000 to 240,000 based on leading edge diameter are investigated. The surface of the body is held at a constant uniform temperature. Numerical and experimental results with and without spanwise variations are presented. Direct comparison of the two-dimensional results, that is, with no spanwise variations, to the analytical results of Frossling is very good. The numerical calculation, which uses the PARC3D code, solves the three-dimensional Navier-Stokes equations, assuming steady laminar flow on the leading edge region. Experimentally, increases in the spanwise-averaged heat transfer coefficient as high as 50 percent above the two-dimensional value were observed. Numerically, the heat transfer coefficient was seen to increase by as much as 25 percent. In general, under the same flow conditions, the circular leading edge produced a higher heat transfer rate than the elliptical leading edge. As a percentage of the respective two-dimensional values, the circular and elliptical leading edges showed similar sensitivity to spanwise variations in momentum. By equating the root mean square of the amplitude of the spanwise variation in momentum to the turbulence intensity, a qualitative comparison between the present work and turbulent results was possible. It is shown that increases in leading edge heat transfer due to spanwise variations in freestream momentum are comparable to those due to freestream turbulence.

8. A Result on the Computational Complexity of Heuristic Estimates for the A Algorithm.

DTIC Science & Technology

1983-01-01

These algorithms are compared according to the criterion "number of node expansions," which is discussed and generally accepted in the published literature. Cited works include Hart, Nilsson, and Raphael (1968) and Kibler, "Natural Generation of Admissible Heuristics" (Technical Report TR-188).

9. CRSP, numerical results for an electrical resistivity array to detect underground cavities

Amini, Amin; Ramazi, Hamidreza

2017-01-01

This paper is devoted to the application of the Combined Resistivity Sounding and Profiling (CRSP) electrode configuration to detecting underground cavities. Electrical resistivity surveying is among the most popular geophysical methods in a wide range of geosciences owing to its nondestructive and economical character. Several types of electrode arrays are applied to detect different objectives. The electrode array plays an important role in determining the output resolution and depth of investigation of any resistivity survey, and each array has its own merits and demerits in terms of depth of investigation, signal strength, and sensitivity to resistivity variations. In this article, several synthetic models simulating different conditions of cavity occurrence were used to examine the responses of some conventional electrode arrays as well as the CRSP array. The results showed that the CRSP electrode configuration can detect the desired objectives with higher resolution than some other types of arrays. A field case study is also discussed in which the electrical resistivity approach was applied at the Abshenasan expressway (Tehran, Iran) U-turn bridge site to detect potential cavities and/or loose filling materials. The results led to the detection of an aqueduct tunnel passing beneath the study area.

10. Geometrical optics approach to the nematic liquid crystal grating: numerical results.

PubMed

Kosmopoulos, J A; Zenginoglou, H M

1987-05-01

The grating action of a periodically distorted nematic liquid crystal layer is considered in the geometrical-optics ray approximation, and a theory for the calculation of the fringe powers is proposed. A nonabsorbing nematic phase is assumed, and the direction of incidence is taken to be normal to the layer. The powers of the resulting diffraction fringes are related to the spatial and angular deviation of the rays propagating across the layer and to the perturbation of the phase of the wave associated with each ray. The theory is applied to the simple case of a harmonically distorted nematic layer. For a weakly distorted layer the results agree with the predictions of Carroll's model, in which only even-order fringes are important. As the distortion becomes larger, odd-order fringes (with the exception of the first order) become equally important; in particular, those at relatively large orders (e.g., seven and nine) exhibit maxima greater than those of their even-order neighbors. Finally, the dependence of the powers of the odd-order fringes on the distortion angle is quite different from that of the even-order fringes.

11. Numerical predictions and experimental results of a dry bay fire environment.

SciTech Connect

Suo-Anttila, Jill Marie; Gill, Walter; Black, Amalia Rebecca

2003-11-01

The primary objective of the Safety and Survivability of Aircraft Initiative is to improve the safety and survivability of systems by using validated computational models to predict the hazard posed by a fire. To meet this need, computational model predictions and experimental data have been obtained to provide insight into the thermal environment inside an aircraft dry bay. The calculations were performed using the Vulcan fire code, and the experiments were completed using a specially designed full-scale fixture. The focus of this report is to present comparisons of the Vulcan results with experimental data for a selected test scenario and to assess the capability of the Vulcan fire field model to accurately predict dry bay fire scenarios. Also included is an assessment of the sensitivity of the fire model predictions to boundary condition distribution and grid resolution. To facilitate the comparison with experimental results, a brief description of the dry bay fire test fixture and a detailed specification of the geometry and boundary conditions are included. Overall, the Vulcan fire field model has shown the capability to predict the thermal hazard posed by a sustained pool fire within a dry bay compartment of an aircraft, although more extensive experimental data and rigorous comparison are required for model validation.

12. Analytical and Numerical Results for an Adhesively Bonded Joint Subjected to Pure Bending

NASA Technical Reports Server (NTRS)

Smeltzer, Stanley S., III; Lundgren, Eric

2006-01-01

A one-dimensional, semi-analytical methodology that was previously developed for evaluating adhesively bonded joints composed of anisotropic adherends and adhesives that exhibit inelastic material behavior is further verified in the present paper. A summary of the first-order differential equations and applied joint loading used to determine the adhesive response from the methodology is also presented. The method was previously verified against a variety of single-lap joint configurations from the literature that subjected the joints to cases of axial tension and pure bending. Using the same joint configuration and applied bending load presented in a study by Yang, the finite element analysis software ABAQUS was used to further verify the semi-analytical method. Linear static ABAQUS results are presented for two models, one with a coarse and one with a fine mesh, that were used to verify convergence of the finite element analyses. Close agreement between the finite element results and the semi-analytical methodology was found for both the shear and normal stress responses of the adhesive bondline. Thus, the semi-analytical methodology was successfully verified using the ABAQUS finite element software and a single-lap joint configuration subjected to pure bending.

13. [Implementation results of emission standards of air pollutants for thermal power plants: a numerical simulation].

PubMed

Wang, Zhan-Shan; Pan, Li-Bo

2014-03-01

The emission inventory of air pollutants from the thermal power plants in the year 2010 was set up. Based on the inventory, the air quality of the prediction scenarios under implementation of both the 2003-version emission standard and the new emission standard was simulated using Models-3/CMAQ. The concentrations of NO2, SO2, and PM2.5, and the deposition of nitrogen and sulfur in the years 2015 and 2020 were predicted to investigate the regional air quality improvement by the new emission standard. The results showed that the new emission standard could effectively improve the air quality in China. Compared with the implementation results of the 2003-version emission standard, by 2015 and 2020, the area with NO2 concentration higher than the emission standard would be reduced by 53.9% and 55.2%, the area with SO2 concentration higher than the emission standard would be reduced by 40.0%, the area with nitrogen deposition higher than 1.0 t·km(-2) would be reduced by 75.4% and 77.9%, and the area with sulfur deposition higher than 1.6 t·km(-2) would be reduced by 37.1% and 34.3%, respectively.

14. Numerical Predictions and Experimental Results of Air Flow in a Smooth Quarter-Scale Nacelle

SciTech Connect

BLACK, AMALIA R.; SUO-ANTTILA, JILL M.; GRITZO, LOUIS A.; DISIMILE, PETER J.; TUCKER, JAMES R.

2002-06-01

Fires in aircraft engine nacelles must be rapidly suppressed to avoid loss of life and property. The design of new and retrofit suppression systems has become significantly more challenging due to the ban on production of Halon 1301 for environmental concerns. Since fire dynamics and the transport of suppressants within the nacelle are both largely determined by the available air flow, efforts to define systems using less effective suppressants greatly benefit from characterization of nacelle air flow fields. A combined experimental and computational study of nacelle air flow has therefore been initiated. Calculations have been performed using both CFD-ACE (a Computational Fluid Dynamics (CFD) model with a body-fitted coordinate grid) and WLCAN (a CFD-based fire field model with a Cartesian "brick"-shaped grid). The flow conditions examined in this study correspond to the same Reynolds number as test data from the full-scale nacelle simulator at the 46th Test Wing. Pre-test simulations of a quarter-scale test fixture were performed using CFD-ACE and WLCAN prior to fabrication. Based on these pre-test simulations, a quarter-scale test fixture was designed and fabricated for the purpose of obtaining spatially resolved measurements of velocity and turbulence intensity in a smooth nacelle. Post-test calculations have been performed for the conditions of the experiment and compared with experimental results obtained from the quarter-scale test fixture. In addition, several different simulations were performed to assess the sensitivity of the predictions to the grid size, to the turbulence models, and to the use of wall functions. In general, the velocity predictions show very good agreement with the data in the center of the channel but deviate near the walls. The turbulence intensity results tend to amplify the differences in velocity, although most of the trends are in agreement. In addition, there were some differences between WLCAN and CFD-ACE results in the angled

15. Experimental and numerical results for a generic axisymmetric single-engine afterbody with tails at transonic speeds

NASA Technical Reports Server (NTRS)

Burley, J. R., II; Carlson, J. R.; Henderson, W. P.

1986-01-01

Static pressure measurements were made on the afterbody, nozzle, and tails of a generic single-engine axisymmetric fighter configuration. Data were recorded at Mach numbers of 0.6, 0.9, and 1.2. Nozzle pressure ratio (NPR) was varied from 1.0 to 8.0, and angle of attack was varied from -3 deg to 9 deg. Experimental data were compared with numerical results from two state-of-the-art computer codes.

16. New numerical results and novel effective string predictions for Wilson loops

Billó, M.; Caselle, M.; Pellegrini, R.

2012-01-01

We compute the prediction of the Nambu-Goto effective string model for a rectangular Wilson loop up to three loops. This is done through the use of an operatorial, first-order formulation and of the open-string analogues of boundary states. This result is interesting since there are universality theorems stating that the predictions up to three loops are common to all effective string models. To test the effective string prediction, we use a Monte Carlo evaluation, in the 3D Ising gauge model, of an observable (the ratio of two Wilson loops with the same perimeter) for which boundary effects are relatively small. Our simulation attains a level of precision sufficient to test the two-loop correction. The three-loop correction seems to go in the right direction, but is still beyond the reach of our simulation, since its effect is comparable with the statistical errors of the latter.

17. Bathymetry Determination via X-Band Radar Data: A New Strategy and Numerical Results

PubMed Central

Serafino, Francesco; Lugni, Claudio; Borge, Jose Carlos Nieto; Zamparelli, Virginia; Soldovieri, Francesco

2010-01-01

This work deals with the question of sea state monitoring using marine X-band radar images and focuses its attention on the problem of sea depth estimation. We present and discuss a technique to estimate bathymetry by exploiting the dispersion relation for surface gravity waves. The estimation technique is based on the correlation between the measured and theoretical sea-wave spectra, and a simple analysis of the approach is performed through test cases with synthetic data. More specifically, the reliability of the estimation technique is verified with simulated data sets covering different values of bathymetry and surface current for two types of sea spectrum: JONSWAP and Pierson-Moskowitz. The results show that the estimated bathymetry is fairly accurate for low depth values, while the estimate becomes less accurate as the depth increases, because the bathymetry plays a less significant role in shaping the sea surface waves as the water depth increases. PMID:22163565
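
The technique rests on the linear dispersion relation for surface gravity waves, omega^2 = g*k*tanh(k*h), which is monotonically increasing in the depth h and can therefore be inverted numerically. A minimal sketch of that inversion step (not the authors' correlation-based estimator) might look like:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def dispersion_omega(k, h):
    """Angular frequency of a surface gravity wave of wavenumber k at depth h."""
    return math.sqrt(G * k * math.tanh(k * h))

def estimate_depth(omega, k, h_min=0.1, h_max=500.0, tol=1e-6):
    """Invert omega^2 = g*k*tanh(k*h) for the depth h by bisection.
    omega is monotonically increasing in h, so bisection is safe."""
    lo, hi = h_min, h_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dispersion_omega(k, mid) < omega:
            lo = mid   # observed frequency too high for this depth: go deeper
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Synthetic check: a wave of 50 m wavelength over 10 m of water
k = 2 * math.pi / 50.0
omega_true = dispersion_omega(k, 10.0)
print(round(estimate_depth(omega_true, k), 2))  # recovers ~10.0
```

Because tanh(k*h) saturates toward 1 in deep water, omega becomes insensitive to h as depth grows, which is consistent with the reduced accuracy the abstract reports at larger depths.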

18. Parallel algorithms for unconstrained optimizations by multisplitting

SciTech Connect

He, Qing

1994-12-31

In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments were performed on an Intel iPSC/860 hypercube with 64 nodes. Interestingly, the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.
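
As a rough illustration of the multisplitting idea only (the paper's actual scheme, splitting rules, and convergence theory are not reproduced here), the sketch below updates disjoint coordinate blocks independently, as separate processors would, then recombines the block solutions:

```python
def multisplit_minimize(grad, x0, blocks, iters=200, lr=0.1):
    """Toy multisplitting iteration: each coordinate block is improved
    independently (as if on its own processor) with the other coordinates
    frozen, then the block results are recombined into one iterate."""
    x = list(x0)
    for _ in range(iters):
        proposals = []
        for block in blocks:
            y = list(x)            # local copy; off-block coordinates frozen
            for _ in range(5):     # a few inner descent steps on this block
                g = grad(y)
                for i in block:
                    y[i] -= lr * g[i]
            proposals.append((block, y))
        for block, y in proposals: # recombine: keep each block's own coords
            for i in block:
                x[i] = y[i]
    return x

# Demo: minimize f(x) = sum_i (x_i - i)^2, whose gradient is 2*(x_i - i)
grad = lambda x: [2 * (xi - i) for i, xi in enumerate(x)]
x = multisplit_minimize(grad, [0.0, 0.0, 0.0, 0.0], blocks=[[0, 1], [2, 3]])
print([round(v, 3) for v in x])  # approaches [0.0, 1.0, 2.0, 3.0]
```

For this separable quadratic the blocks do not interact, so the split converges exactly as fast as the unsplit problem; the paper's observation is that a well-chosen splitting can do markedly better on coupled problems.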

19. Optimizing the distribution of resources between enzymes of carbon metabolism can dramatically increase photosynthetic rate: a numerical simulation using an evolutionary algorithm.

PubMed

Zhu, Xin-Guang; de Sturler, Eric; Long, Stephen P

2007-10-01

The distribution of resources between enzymes of photosynthetic carbon metabolism might be assumed to have been optimized by natural selection. However, natural selection for survival and fecundity does not necessarily select for maximal photosynthetic productivity. Further, the concentration of a key substrate, atmospheric CO(2), has changed more over the past 100 years than the past 25 million years, with the likelihood that natural selection has had inadequate time to reoptimize resource partitioning for this change. Could photosynthetic rate be increased by altered partitioning of resources among the enzymes of carbon metabolism? This question is addressed using an "evolutionary" algorithm to progressively search for multiple alterations in partitioning that increase photosynthetic rate. To do this, we extended existing metabolic models of C(3) photosynthesis by including the photorespiratory pathway (PCOP) and metabolism to starch and sucrose to develop a complete dynamic model of photosynthetic carbon metabolism. The model consists of linked differential equations, each representing the change of concentration of one metabolite. Initial concentrations of metabolites and maximal activities of enzymes were extracted from the literature. The dynamics of CO(2) fixation and metabolite concentrations were realistically simulated by numerical integration, such that the model could mimic well-established physiological phenomena. For example, a realistic steady-state rate of CO(2) uptake was attained and then reattained after perturbing O(2) concentration. Using an evolutionary algorithm, partitioning of a fixed total amount of protein-nitrogen between enzymes was allowed to vary. The individual with the higher light-saturated photosynthetic rate was selected and used to seed the next generation. After 1,500 generations, photosynthesis was increased substantially. This suggests that the "typical" partitioning in C(3) leaves might be suboptimal for maximizing the light
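
The selection loop described above can be sketched as a simple (1+1)-style evolutionary algorithm. The toy "rate" function below (a pathway limited by its slowest step) and its enzyme capacities are illustrative stand-ins, not the authors' kinetic model of C(3) carbon metabolism:

```python
import random

random.seed(42)  # reproducible toy run

def rate(partition, capacities):
    """Toy photosynthetic rate: the pathway runs at the speed of its slowest
    step, each step's speed = allocated protein fraction * specific activity."""
    return min(p * c for p, c in zip(partition, capacities))

def evolve(capacities, generations=2000, sigma=0.05):
    n = len(capacities)
    best = [1.0 / n] * n                 # start from equal partitioning
    best_rate = rate(best, capacities)
    for _ in range(generations):
        child = [max(1e-9, p + random.gauss(0, sigma)) for p in best]
        s = sum(child)
        child = [p / s for p in child]   # total protein-N budget is fixed
        r = rate(child, capacities)
        if r > best_rate:                # keep the fitter individual
            best, best_rate = child, r
    return best, best_rate

caps = [1.0, 4.0, 2.0]                   # hypothetical specific activities
part, r = evolve(caps)
print(r > rate([1 / 3, 1 / 3, 1 / 3], caps))  # True: beats equal partitioning
```

The optimum shifts protein toward the low-activity step, mirroring the paper's conclusion that equal or "typical" partitioning can be far from rate-maximizing.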

20. Tsunami hazard assessment in the Ionian Sea due to potential tsunamigenic sources - results from numerical simulations

Tselentis, G.-A.; Stavrakakis, G.; Sokos, E.; Gkika, F.; Serpetsidaki, A.

2010-05-01

Although the great majority of seismic tsunamis are generated in ocean domains, smaller basins like the Ionian Sea sometimes experience this phenomenon. In this investigation, we study the tsunami hazard associated with the Ionian Sea fault system. A scenario-based method is used to provide an estimate of the tsunami hazard in this region for the first time. Realistic faulting parameters for four probable seismic sources with tsunami potential are used to model the expected coseismic deformation, which is translated directly to the water surface and used as an initial condition for the tsunami propagation. We calculate tsunami propagation snapshots and mareograms for the four seismic sources in order to estimate the expected tsunami maximum amplitudes and arrival times at eleven tourist resorts along the Ionian shorelines. The results indicate that, of the four examined sources, only one poses a serious tsunami threat, causing wave amplitudes of up to 4 m at some tourist resorts along the Ionian shoreline.

1. Insight into collision zone dynamics from topography: numerical modelling results and observations

Bottrill, A. D.; van Hunen, J.; Allen, M. B.

2012-07-01

Dynamic models of subduction and continental collision are used to predict dynamic topography changes on the overriding plate. The modelling results show a distinct evolution of topography on the overriding plate, during subduction, continental collision and slab break-off. A prominent topographic feature is a temporary (few Myrs) deepening in the area of the back-arc basin after initial collision. This collisional mantle dynamic basin (CMDB) is caused by slab steepening drawing material away from the base of the overriding plate. Also during this initial collision phase, surface uplift is predicted on the overriding plate between the suture zone and the CMDB, due to the subduction of buoyant continental material and its isostatic compensation. After slab detachment, redistribution of stresses and underplating of the overriding plate causes the uplift to spread further into the overriding plate. This topographic evolution fits the stratigraphy found on the overriding plate of the Arabia-Eurasia collision zone in Iran and south east Turkey. The sedimentary record from the overriding plate contains Upper Oligocene-Lower Miocene marine carbonates deposited between terrestrial clastic sedimentary rocks, in units such as the Qom Formation and its lateral equivalents. This stratigraphy shows that during the Late Oligocene-Early Miocene the surface of the overriding plate sank below sea level before rising back above sea level, without major compressional deformation recorded in the same area. This uplift and subsidence pattern correlates well with our modelled topography changes.

2. Insight into collision zone dynamics from topography: numerical modelling results and observations

Bottrill, A. D.; van Hunen, J.; Allen, M. B.

2012-11-01

Dynamic models of subduction and continental collision are used to predict dynamic topography changes on the overriding plate. The modelling results show a distinct evolution of topography on the overriding plate, during subduction, continental collision and slab break-off. A prominent topographic feature is a temporary (few Myrs) basin on the overriding plate after initial collision. This "collisional mantle dynamic basin" (CMDB) is caused by slab steepening drawing material away from the base of the overriding plate. Also, during this initial collision phase, surface uplift is predicted on the overriding plate between the suture zone and the CMDB, due to the subduction of buoyant continental material and its isostatic compensation. After slab detachment, redistribution of stresses and underplating of the overriding plate cause the uplift to spread further into the overriding plate. This topographic evolution fits the stratigraphy found on the overriding plate of the Arabia-Eurasia collision zone in Iran and south east Turkey. The sedimentary record from the overriding plate contains Upper Oligocene-Lower Miocene marine carbonates deposited between terrestrial clastic sedimentary rocks, in units such as the Qom Formation and its lateral equivalents. This stratigraphy shows that during the Late Oligocene-Early Miocene the surface of the overriding plate sank below sea level before rising back above sea level, without major compressional deformation recorded in the same area. Our modelled topography changes fit well with this observed uplift and subsidence.

3. Linking stress field deflection to basement structures in southern Ontario: Results from numerical modelling

Baird, Alan F.; McKinnon, Stephen D.

2007-03-01

Analysis of stress measurement data from the near-surface to crustal depths in southern Ontario shows a misalignment between the direction of tectonic loading and the orientation of the major horizontal principal stress. The compressive stress field instead appears to be oriented sub-parallel to the major terrane boundaries such as the Grenville Front, the Central Metasedimentary Belt boundary zone and the Elzevir-Frontenac boundary zone. This suggests that the stress field has been modified by these deep crustal-scale deformation zones. In order to test this hypothesis, a geomechanical model was constructed using the three-dimensional discontinuum stress analysis code 3DEC. The model consists of a 45 km thick crust of southern Ontario in which the major crustal-scale deformation zones are represented as discrete faults. Lateral velocity boundary conditions were applied to the sides of the model in the direction of tectonic loading in order to generate the horizontal compressive stress field. Modelling results show that for low strength (low friction angle and cohesion), fault slip causes the stress field to rotate toward the strike of the faults, consistent with the observed direction of misalignment with the tectonic loading direction. Observed distortions to the regional stress field may be explained by this relatively simple mechanism of slip on deep first-order structures in response to the neotectonic driving forces.

4. Reinforcing mechanism of anchors in slopes: a numerical comparison of results of LEM and FEM

Cai, Fei; Ugai, Keizo

2003-06-01

This paper reports the limitation of the conventional Bishop's simplified method to calculate the safety factor of slopes stabilized with anchors, and proposes a new approach to considering the reinforcing effect of anchors on the safety factor. The reinforcing effect of anchors can be explained using an additional shearing resistance on the slip surface. A three-dimensional shear strength reduction finite element method (SSRFEM), where soil-anchor interactions were simulated by three-dimensional zero-thickness elasto-plastic interface elements, was used to calculate the safety factor of slopes stabilized with anchors to verify the reinforcing mechanism of anchors. The results of SSRFEM were compared with those of the conventional and proposed approaches for Bishop's simplified method for various orientations, positions, and spacings of anchors, and shear strengths of soil-grouted body interfaces. For the safety factor, the proposed approach compared better with SSRFEM than the conventional approach. The additional shearing resistance can explain the influence of the orientation, position, and spacing of anchors, and the shear strength of soil-grouted body interfaces on the safety factor of slopes stabilized with anchors.

5. Restricted diffusion in a model acinar labyrinth by NMR: Theoretical and numerical results

Grebenkov, D. S.; Guillot, G.; Sapoval, B.

2007-01-01

The branched geometrical structure of mammalian lungs is known to be crucial for rapid access of oxygen to the blood. But an important pulmonary disease like emphysema results in partial destruction of the alveolar tissue and enlargement of the distal airspaces, which may reduce the total oxygen transfer. This effect has been intensively studied during the last decade by MRI of hyperpolarized gases like helium-3. The relation between geometry and signal attenuation remained obscure due to the lack of a realistic geometrical model of the acinar morphology. In this paper, we use Monte Carlo simulations of restricted diffusion in a realistic model acinus to compute the signal attenuation in a diffusion-weighted NMR experiment. We demonstrate that this technique should be sensitive to destruction of the branched structure: partial removal of the interalveolar tissue creates loops in the tree-like acinar architecture that enhance diffusive motion and the consequent signal attenuation. The role of the local geometry and related practical applications are discussed.

6. The Formation of Asteroid Satellites in Catastrophic Impacts: Results from Numerical Simulations

NASA Technical Reports Server (NTRS)

Durda, D. D.; Bottke, W. F., Jr.; Enke, B. L.; Asphaug, E.; Richardson, D. C.; Leinhardt, Z. M.

2003-01-01

We have performed new simulations of the formation of asteroid satellites by collisions, using a combination of hydrodynamical and gravitational dynamical codes. This initial work shows that both small satellites and ejected, co-orbiting pairs are produced most favorably by moderate-energy collisions at more direct, rather than oblique, impact angles. Simulations so far seem to be able to produce systems qualitatively similar to known binaries. Asteroid satellites provide vital clues that can help us understand the physics of hypervelocity impacts, the dominant geologic process affecting large main belt asteroids. Moreover, models of satellite formation may provide constraints on the internal structures of asteroids beyond those possible from observations of satellite orbital properties alone. It is probable that most observed main-belt asteroid satellites are by-products of cratering and/or catastrophic disruption events. Several possible formation mechanisms related to collisions have been identified: (i) mutual capture following catastrophic disruption, (ii) rotational fission due to glancing impact and spin-up, and (iii) re-accretion in orbit of ejecta from large, non-catastrophic impacts. Here we present results from a systematic investigation directed toward mapping out the parameter space of the first and third of these three collisional mechanisms.

7. Multi-temperature representation of electron velocity distribution functions. I. Fits to numerical results

SciTech Connect

Haji Abolhassani, A. A.; Matte, J.-P.

2012-10-15

Electron energy distribution functions are expressed as a sum of 6-12 Maxwellians or a sum of 3, but each multiplied by a finite series of generalized Laguerre polynomials. We fitted several distribution functions obtained from the finite difference Fokker-Planck code FPI [Matte and Virmont, Phys. Rev. Lett. 49, 1936 (1982)] to these forms, by matching the moments, and showed that they can represent very well the coexistence of hot and cold populations, with a temperature ratio as high as 1000. This was performed for two types of problems: (1) the collisional relaxation of a minority hot component in a uniform plasma and (2) electron heat flow down steep temperature gradients, from a hot to a much colder plasma. We find that the multi-Maxwellian representation is particularly good if we accept complex temperatures and coefficients, and it is always better than the representation with generalized Laguerre polynomials for an equal number of moments. For the electron heat flow problem, the method was modified to also fit the first order anisotropy f{sub 1}(x,v,t), again with excellent results. We conclude that this multi-Maxwellian representation can provide a viable alternative to the finite difference speed or energy grid in kinetic codes.
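
For a feel of what such a representation encodes, the sketch below builds a two-term Maxwellian mixture with a 1000:1 temperature ratio (the hot fraction and temperatures are arbitrary illustrative values, and 1D unit-density Maxwellians stand in for the paper's forms) and checks its low-order moments numerically:

```python
import math

def maxwellian(v, T):
    """1D Maxwellian with unit density and temperature T (velocity in thermal units)."""
    return math.exp(-v * v / (2 * T)) / math.sqrt(2 * math.pi * T)

def mixture(v, weights, temps):
    """Multi-Maxwellian representation: a weighted sum of Maxwellians."""
    return sum(w * maxwellian(v, T) for w, T in zip(weights, temps))

# Hot minority (1%) coexisting with a cold bulk, temperature ratio 1000:1
weights, temps = [0.99, 0.01], [1.0, 1000.0]

# Low-order moments by simple rectangle-rule quadrature over +/- 1000
dv = 0.5
vs = [i * dv for i in range(-2000, 2001)]
density = sum(mixture(v, weights, temps) for v in vs) * dv
energy = sum(v * v * mixture(v, weights, temps) for v in vs) * dv

print(round(density, 3))  # ~1.0: normalization preserved
print(round(energy, 2))   # ~0.99*1 + 0.01*1000 = 10.99: hot tail dominates
```

Even a 1% hot fraction carries most of the energy here, which is why matching several moments at once, as the authors do, is needed to pin down both populations.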

8. Hydrodynamical simulation of detonations in superbursts. I. The hydrodynamical algorithm and some preliminary one-dimensional results

Noël, C.; Busegnies, Y.; Papalexandris, M. V.; Deledicque, V.; El Messoudi, A.

2007-08-01

Aims: This work presents a new hydrodynamical algorithm to study astrophysical detonations. A prime motivation of this development is the description of a carbon detonation in conditions relevant to superbursts, which are thought to result from the propagation of a detonation front around the surface of a neutron star in the carbon layer underlying the atmosphere. Methods: The algorithm we have developed is a finite-volume method inspired by the original MUSCL scheme of van Leer (1979). The algorithm is of second order in the smooth part of the flow and avoids dimensional splitting. It is applied to some test cases, and the time-dependent results are compared to the corresponding steady-state solutions. Results: Our algorithm proves robust in these test cases and is considered reliably applicable to astrophysical detonations. The preliminary one-dimensional calculations we have performed demonstrate that carbon detonation at the surface of a neutron star is a multiscale phenomenon: the length scale over which energy is liberated is 10^6 times smaller than the total reaction length. We show that a multi-resolution approach can be used to resolve all the reaction lengths. This result will be very useful in future multi-dimensional simulations. We also present thermodynamical and composition profiles after the passage of a detonation in a pure carbon or mixed carbon-iron layer, in thermodynamical conditions relevant to superbursts in pure helium accretor systems.

9. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

Razali, Azhani Mohd; Abdullah, Jaafar

2015-04-01

Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, and it is part of the medical imaging modalities that made the diagnosis and treatment of disease possible. However, the SPECT technique is not limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in the oil and gas, petrochemical, and petroleum refining industries. Motivated by the vast applications of the SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time, than the Expectation Maximization Algorithm.
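
The Expectation Maximization algorithm for emission tomography is commonly implemented as the multiplicative MLEM update x <- x * A^T(y / Ax) / A^T(1). A minimal sketch on a hypothetical two-pixel "phantom" (not the paper's PBR geometry or system matrix) might look like:

```python
def mlem(A, y, iters=100):
    """MLEM reconstruction: the standard multiplicative update
    x <- x * backproject(y / forward(x)) / sensitivity."""
    n = len(A[0])
    x = [1.0] * n                                        # uniform initial image
    sens = [sum(row[j] for row in A) for j in range(n)]  # A^T * 1
    for _ in range(iters):
        proj = [sum(a * xi for a, xi in zip(row, x)) for row in A]  # Ax
        ratio = [yi / max(pi, 1e-12) for yi, pi in zip(y, proj)]    # y / Ax
        back = [sum(A[i][j] * ratio[i] for i in range(len(A)))
                for j in range(n)]                                  # A^T ratio
        x = [xj * bj / max(sj, 1e-12) for xj, bj, sj in zip(x, back, sens)]
    return x

# Two projections that mix two pixels; noiseless synthetic data
A = [[1.0, 0.5], [0.5, 1.0]]
x_true = [2.0, 4.0]
y = [sum(a * xt for a, xt in zip(row, x_true)) for row in A]
x = mlem(A, y)
print([round(v, 2) for v in x])  # approaches the true phantom [2.0, 4.0]
```

The iterative, projection-by-projection structure of this update is also why MLEM tends to cost more compute time than a direct (exact-inversion) formula, matching the timing comparison reported above.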

10. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

SciTech Connect

Razali, Azhani Mohd; Abdullah, Jaafar

2015-04-29

Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, and it is part of the medical imaging modalities that made the diagnosis and treatment of disease possible. However, the SPECT technique is not limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in the oil and gas, petrochemical, and petroleum refining industries. Motivated by the vast applications of the SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time, than the Expectation Maximization Algorithm.

11. A comparative study between experimental results and numerical predictions of multi-wall structural response to hypervelocity impact

NASA Technical Reports Server (NTRS)

Schonberg, William P.; Peck, Jeffrey A.

1992-01-01

Over the last three decades, multiwall structures have been analyzed extensively, primarily through experiment, as a means of increasing the protection afforded to spacecraft structure. However, as structural configurations become more varied, the number of tests required to characterize their response increases dramatically. As an alternative, numerical modeling of high-speed impact phenomena is often used to predict the response of a variety of structural systems under impact loading conditions. This paper presents the results of a preliminary numerical/experimental investigation of the hypervelocity impact response of multiwall structures. The results of experimental high-speed impact tests are compared against the predictions of the HULL hydrodynamic computer code. It is shown that the hypervelocity impact response characteristics of a specific system cannot be accurately predicted from a limited number of HULL code impact simulations. However, if a wide range of impact loading conditions is considered, then the ballistic limit curve of the system based on the entire series of numerical simulations can be used as a relatively accurate indication of actual system response.

12. Results from CrIS/ATMS Obtained Using an "AIRS Version-6 Like" Retrieval Algorithm

NASA Technical Reports Server (NTRS)

Susskind, Joel; Kouvaris, Louis; Iredell, Lena

2015-01-01

A main objective of AIRS/AMSU on EOS is to provide accurate sounding products that are used to generate climate data sets. Suomi NPP carries CrIS/ATMS that were designed as follow-ons to AIRS/AMSU. Our objective is to generate a long term climate data set of products derived from CrIS/ATMS to serve as a continuation of the AIRS/AMSU products. We have modified an improved version of the operational AIRS Version-6 retrieval algorithm for use with CrIS/ATMS. CrIS/ATMS products are of very good quality, and are comparable to, and consistent with, those of AIRS.

13. Diagnosis algorithm for leptospirosis in dogs: disease and vaccination effects on the serological results.

PubMed

Andre-Fontaine, G

2013-05-11

Leptospirosis is a common disease in dogs, despite current vaccination. Veterinary surgeons may use a serological test to verify their clinical observations. The gold standard is the Microscopic Agglutination Test (MAT). After infection, the dog produces agglutinating antibodies against the lipopolyosidic antigens shared by the infectious strain but also, after vaccination, against the lipopolyosidic antigens shared by the serovars used in the bacterins (Leptospira species serovars Icterohaemorrhagiae and Canicola in most countries). MATs were performed in a group of 102 healthy field dogs and a group of 6 Canicola-challenged dogs. A diagnosis algorithm was constructed based on age, previous vaccinations, kinetics of the agglutinating antibodies after infection or vaccination, and the delay after onset of the disease. This algorithm was applied to 169 well-documented sera (clinical and vaccine data) from 272 sick dogs with suspected leptospirosis. In total, 102 dogs were vaccinated according to the usual vaccination scheme and 30 were not vaccinated. Leptospirosis was confirmed by MAT in 37/102 (36.2 per cent) vaccinated dogs and remained probable in 14 others (13.7 per cent), thus indicating the permanent exposure of dogs and the weakness of the protection offered by current vaccines against pathogenic Leptospira.

14. First Results from the OMI Rotational Raman Scattering Cloud Pressure Algorithm

NASA Technical Reports Server (NTRS)

Joiner, Joanna; Vasilkov, Alexander P.

2006-01-01

We have developed an algorithm to retrieve scattering cloud pressures and other cloud properties with the Aura Ozone Monitoring Instrument (OMI). The scattering cloud pressure is retrieved using the effects of rotational Raman scattering (RRS). It is defined as the pressure of a Lambertian surface that would produce the observed amount of RRS consistent with the derived reflectivity of that surface. The independent pixel approximation is used in conjunction with the Lambertian-equivalent reflectivity model to provide an effective radiative cloud fraction and scattering pressure in the presence of broken or thin cloud. The derived cloud pressures will enable accurate retrievals of trace gas mixing ratios, including ozone, in the troposphere within and above clouds. We describe details of the algorithm that will be used for the first release of these products. We compare our scattering cloud pressures with cloud-top pressures and other cloud properties from the Aqua Moderate-Resolution Imaging Spectroradiometer (MODIS) instrument. OMI and MODIS are part of the so-called A-train satellites flying in formation within 30 min of each other. Differences between OMI and MODIS are expected because the MODIS observations in the thermal infrared are more sensitive to the cloud top whereas the backscattered photons in the ultraviolet can penetrate deeper into clouds. Radiative transfer calculations are consistent with the observed differences. The OMI cloud pressures are shown to be correlated with the cirrus reflectance. This relationship indicates that OMI can probe through thin or moderately thick cirrus to lower lying water clouds.

15. Numerical Modeling of Anti-icing Systems and Comparison to Test Results on a NACA 0012 Airfoil

NASA Technical Reports Server (NTRS)

Al-Khalil, Kamel M.; Potapczuk, Mark G.

1993-01-01

A series of experimental tests was conducted in the NASA Lewis IRT on an electro-thermally heated NACA 0012 airfoil. Quantitative comparisons between the experimental results and those predicted by a computer simulation code were made to assess the validity of a recently developed anti-icing model. An infrared camera was utilized to scan the instantaneous temperature contours of the skin surface. Despite some experimental difficulties, good agreement between the numerical predictions and the experimental results was generally obtained for the surface temperature and the potential for runback water to freeze. Some recommendations were given for efficient operation of a thermal anti-icing system.

16. An approach to the development of numerical algorithms for first order linear hyperbolic systems in multiple space dimensions: The constant coefficient case

NASA Technical Reports Server (NTRS)

Goodrich, John W.

1995-01-01

Two methods for developing high-order single-step explicit algorithms on symmetric stencils with data on only one time level are presented. Examples are given for the convection and linearized Euler equations with up to eighth-order accuracy in both space and time in one space dimension, and up to sixth order in two space dimensions. The method of characteristics is generalized to nondiagonalizable hyperbolic systems by using exact local polynomial solutions of the system, and the resulting exact propagator methods automatically incorporate the correct multidimensional wave propagation dynamics. Multivariate Taylor or Cauchy-Kowalevskaya expansions are also used to develop algorithms. Both of these methods can be applied to obtain algorithms of arbitrarily high order for hyperbolic systems in multiple space dimensions. Cross derivatives are included in the local approximations used to develop the algorithms in this paper in order to obtain high-order accuracy and improved isotropy and stability. Efficiency in meeting global error bounds is an important criterion for evaluating algorithms, and the higher-order algorithms are shown to be up to several orders of magnitude more efficient even though they are more complex. Stable high-order boundary conditions for the linearized Euler equations are developed in one space dimension, and demonstrated in two space dimensions.
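The family of single-step explicit, symmetric-stencil schemes with data on one time level generalizes the classical Lax-Wendroff construction: a Taylor expansion in time with time derivatives replaced via the PDE. A second-order sketch for 1-D convection u_t + c u_x = 0 (the paper's schemes reach much higher order, but share this structure):

```python
import numpy as np

# Lax-Wendroff: one explicit step on a symmetric 3-point stencil, data on a
# single time level, for u_t + c*u_x = 0 with periodic boundaries.
def lax_wendroff_step(u, c, dt, dx):
    nu = c * dt / dx                       # Courant number
    up = np.roll(u, -1)                    # u_{j+1}
    um = np.roll(u, +1)                    # u_{j-1}
    return u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2*u + um)

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2 * np.pi * x)
c, dx = 1.0, 1.0 / n
dt = 0.5 * dx / c                          # CFL-stable (nu = 0.5)
for _ in range(int(round(1.0 / (c * dt)))):  # advect one full period
    u = lax_wendroff_step(u, c, dt, dx)

err = np.max(np.abs(u - np.sin(2 * np.pi * x)))  # small dispersion error
```

After one full period the smooth profile returns close to its initial state, with only a small phase and amplitude error from the second-order truncation.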

17. A high-order numerical algorithm for DNS of low-Mach-number reactive flows with detailed chemistry and quasi-spectral accuracy

Motheau, E.; Abraham, J.

2016-05-01

A novel and efficient algorithm is presented in this paper for DNS of turbulent reacting flows under the low-Mach-number assumption, with detailed chemistry and quasi-spectral accuracy. The temporal integration of the equations relies on an operator-splitting strategy, where chemical reactions are solved implicitly with a stiff solver and the convection-diffusion operators are solved with a Runge-Kutta-Chebyshev method. The spatial discretisation is performed with high-order compact schemes, and an FFT-based constant-coefficient spectral solver is employed to solve a variable-coefficient Poisson equation. The numerical implementation takes advantage of the 2DECOMP&FFT libraries developed by [1], which are based on a pencil decomposition method of the domain and are proven to be computationally very efficient. An enhanced pressure-correction method is proposed to speed up the achievement of machine-precision accuracy. It is demonstrated that second-order accuracy is reached in time, while the spatial accuracy ranges from fourth order to sixth order depending on the set of imposed boundary conditions. The software developed to implement the present algorithm is called HOLOMAC, and its numerical efficiency opens the way to DNS of reacting flows for understanding complex turbulent and chemical phenomena in flames.
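The operator-splitting idea (stiff chemistry integrated separately from convection-diffusion) can be shown in miniature with Strang splitting on a toy linear reaction-diffusion problem; the exact exponential below stands in for an implicit stiff chemistry solver, and a simple explicit step stands in for the Runge-Kutta-Chebyshev diffusion solve:

```python
import numpy as np

# Strang splitting for u_t = D*u_xx - k*u: half step of (stiff) reaction,
# full step of diffusion, half step of reaction, on a periodic grid.
def strang_step(u, dt, dx, D, k):
    u = u * np.exp(-k * dt / 2)                        # reaction half-step
    lap = (np.roll(u, -1) - 2*u + np.roll(u, 1)) / dx**2
    u = u + dt * D * lap                               # diffusion full step
    u = u * np.exp(-k * dt / 2)                        # reaction half-step
    return u

n = 128
x = np.linspace(0, 1, n, endpoint=False)
u = np.sin(2 * np.pi * x)
D, k, dx = 1e-3, 1.0, 1.0 / n
dt = 0.2 * dx**2 / D                                   # diffusion-stable
for _ in range(100):
    u = strang_step(u, dt, dx, D, k)

# exact solution: the sine mode decays at rate k + D*(2*pi)**2
t = 100 * dt
exact = np.exp(-(k + D * (2*np.pi)**2) * t) * np.sin(2 * np.pi * x)
```

Because the reaction here is linear it commutes with diffusion, so the residual error is purely from the diffusion discretization; in a real flame code the splitting itself also contributes second-order error, matching the temporal accuracy quoted above.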

18. Recent Experimental and Numerical Results on Turbulence, Flows and Global Stability Under Biasing in a Magnetized Linear Plasma

Gilmore, M.; Desjardins, T. R.; Fisher, D. M.

2016-10-01

Ongoing experiments and numerical modeling on the effects of flow shear on electrostatic turbulence in the presence of electrode biasing are being conducted in helicon plasmas in the linear HelCat (Helicon-Cathode) device. It is found that changes in flow shear, effected by electrode biasing through Er x Bz rotation, can strongly affect fluctuation dynamics, including fully suppressing the fluctuations or inducing chaos. The fundamental underlying instability, at least in the case of low magnetic field, is identified as a hybrid resistive drift-Kelvin-Helmholtz mode. At higher magnetic fields, multiple modes (resistive drift, rotation-driven interchange and/or Kelvin-Helmholtz) are present and interact nonlinearly. At high positive electrode bias (V > 10 Te), a large-amplitude global instability, identified as the potential relaxation instability, is observed. Numerical modeling is also being conducted, using a three-fluid global Braginskii solver for the no- and moderate-bias cases, and a 1D PIC code for the high-bias cases. Recent experimental and numerical results will be presented. Supported by U.S. National Science Foundation Award 1500423.

19. Middle atmosphere project: A radiative heating and cooling algorithm for a numerical model of the large scale stratospheric circulation

NASA Technical Reports Server (NTRS)

Wehrbein, W. M.; Leovy, C. B.

1981-01-01

A Curtis matrix is used to compute cooling by the 15 micron and 10 micron bands of carbon dioxide. Escape of radiation to space and exchange with the lower boundary are used for the 9.6 micron band of ozone. Voigt line shape, vibrational relaxation, line overlap, and the temperature dependence of line strength distributions and transmission functions are incorporated into the Curtis matrices. The distributions of the atmospheric constituents included in the algorithm and the method used to compute the Curtis matrices are discussed, as well as cooling or heating by the 9.6 micron band of ozone. The FORTRAN programs and subroutines that were developed are described and listed.
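A Curtis matrix reduces band cooling to a matrix-vector product: the heating rate at level i is a weighted sum of Planck source terms at all levels, with cooling-to-space on the diagonal. A schematic sketch (the matrix entries and profile below are arbitrary placeholders, not real CO2 band data):

```python
import numpy as np

# Schematic Curtis-matrix cooling-rate evaluation: h = C @ B(T).
# C couples every level to every other level; its entries here are
# arbitrary placeholders, not real 15-micron transmission integrals.
nlev = 20
rng = np.random.default_rng(1)
C = -0.05 * rng.random((nlev, nlev))      # level-to-level exchange (made up)
np.fill_diagonal(C, 1.0)                  # cooling-to-space dominates diagonal

def planck_source(T):
    return T**4 / 250.0**4                # normalized grey-body source term

T = np.linspace(270.0, 220.0, nlev)       # temperature profile (K)
h = C @ planck_source(T)                  # one heating/cooling rate per level
```

The point of precomputing C is that temperature-dependent transmission integrals are evaluated once, after which each model time step costs only this matrix-vector product.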

20. Theory of axially symmetric cusped focusing: numerical evaluation of a Bessoid integral by an adaptive contour algorithm

Kirk, N. P.; Connor, J. N. L.; Curtis, P. R.; Hobbs, C. A.

2000-07-01

A numerical procedure for the evaluation of the Bessoid canonical integral J(x, y) is described. J(x, y) is defined, for x and y real, by J(x, y) = ∫₀^∞ t J₀(yt) exp[i(t⁴ + xt²)] dt, where J₀(·) is a Bessel function of order zero. J(x, y) plays an important role in the description of cusped focusing when axial symmetry is present. It arises in the diffraction theory of aberrations, in the design of optical instruments and of highly directional microwave antennas, and in the theory of image formation for high-resolution electron microscopes. The numerical procedure replaces the integration path along the real t axis with a more convenient contour in the complex t plane, thereby rendering the oscillatory integrand more amenable to numerical quadrature. The computations use a modified version of the CUSPINT computer code (Kirk et al 2000 Comput. Phys. Commun., at press), which evaluates the cuspoid canonical integrals and their first-order partial derivatives. Plots and tables of J(x, y) and its zeros are presented for the grid -8.0 ≤ x ≤ 8.0 and -8.0 ≤ y ≤ 8.0. Some useful series expansions of J(x, y) are also derived.
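The contour-deformation idea can be demonstrated on the simplest oscillatory relative, the Fresnel-type integral ∫₀^∞ exp(it²) dt: rotating the path to t = s·e^{iπ/4} turns the oscillatory integrand into a rapidly decaying Gaussian, after which ordinary quadrature succeeds. (This is only the one-dimensional idea; CUSPINT's adaptive contours for J(x, y) are more elaborate.)

```python
import numpy as np

# Evaluate I = integral_0^inf exp(i t^2) dt by contour rotation:
# substituting t = s*exp(i*pi/4) gives exp(i t^2) = exp(-s^2), a Gaussian
# that plain trapezoidal quadrature handles easily.
phase = np.exp(1j * np.pi / 4)            # rotation factor dt/ds
s = np.linspace(0.0, 8.0, 4001)           # exp(-64) ~ 0: tail is negligible
vals = np.exp(-s**2) * phase              # rotated integrand * dt/ds
h = s[1] - s[0]
I = h * (vals[0] / 2 + vals[1:-1].sum() + vals[-1] / 2)  # trapezoid rule

# Known closed form: I = sqrt(pi)/2 * exp(i*pi/4)
exact = np.sqrt(np.pi) / 2 * phase
```

On the rotated contour the trapezoid rule is extremely accurate because the integrand and its derivatives vanish at both ends of the truncated interval; on the real axis the same quadrature would struggle with the unbounded oscillations.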

1. Results from CrIS/ATMS Obtained Using an "AIRS Version-6 Like" Retrieval Algorithm

NASA Technical Reports Server (NTRS)

Susskind, Joel; Kouvaris, Louis; Iredell, Lena; Blaisdell, John

2015-01-01

AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. Monthly mean August 2014 Version-6.22 AIRS and CrIS products agree reasonably well with OMPS, CERES, and with each other. JPL plans to process AIRS and CrIS for many months and compare interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. We are also working with JPL to develop a joint AIRS/CrIS level-1 to level-3 processing system using a still-to-be-finalized Version-7 retrieval algorithm. The NASA Goddard DISC will eventually use this system to reprocess all AIRS and recalibrated CrIS/ATMS data.

2. Simulation and experimental results for a phase retrieval-based algorithm for far-field beam steering and shaping

Roggemann, Michael C.; Welsh, Byron M.; Stone, Bradley R.; Su, Ting Ei

2002-02-01

Active laser-based electro-optical (EO) sensors on future aircraft and spacecraft will be used for a variety of missions and will be required to have a number of demanding technical characteristics. A key challenge to achieving these characteristics is the development of inexpensive, high-degree-of-freedom optical wave front control devices, and of effective algorithms for controlling these devices. In this paper we present our research on phase retrieval-based wave front control algorithms that can be implemented with segmented liquid crystal-based wave front control devices. We have developed a wave front control algorithm that allows dynamic small-angle beam steering and shaping in the presence of an aberrating output window. Our approach is based on a phase retrieval algorithm that determines the optimal figure of a segmented wave front control device. Simulation and experimental results presented here show that this approach allows shaped far-field patterns to be created and steered over small angles.
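Phase retrieval for far-field beam shaping is commonly done with a Gerchberg-Saxton-style iteration between the device plane and the far field; a minimal sketch under toy assumptions (the uniform pupil and square target spot below are illustrative choices, not the authors' setup):

```python
import numpy as np

# Gerchberg-Saxton phase retrieval: find a pupil phase whose far field
# (Fourier transform) matches a target amplitude pattern.
rng = np.random.default_rng(2)
n = 64
pupil_amp = np.ones((n, n))                  # uniform illumination (toy)
target = np.zeros((n, n))
target[28:36, 28:36] = 1.0                   # desired far-field spot (toy)
target /= np.linalg.norm(target)

def farfield_err(phase):
    far = np.fft.fft2(pupil_amp * np.exp(1j * phase))
    ach = np.abs(far) / np.linalg.norm(far)  # normalized achieved amplitude
    return np.linalg.norm(ach - target)

phase = rng.uniform(0, 2 * np.pi, (n, n))    # random starting phase
err0 = farfield_err(phase)
for _ in range(50):
    far = np.fft.fft2(pupil_amp * np.exp(1j * phase))
    far = target * np.exp(1j * np.angle(far))  # impose target amplitude
    near = np.fft.ifft2(far)
    phase = np.angle(near)                     # impose pupil amplitude
err = farfield_err(phase)
```

Each iteration projects onto the two amplitude constraints (pupil and far field), so the mismatch between the achieved and target far-field amplitudes shrinks from its random-start value; a segmented device would additionally quantize the retrieved phase per segment.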

3. Construction of an extended invariant for an arbitrary ordinary differential equation with its development in a numerical integration algorithm.

PubMed

Fukuda, Ikuo; Nakamura, Haruki

2006-02-01

For an arbitrary ordinary differential equation (ODE), a scheme for constructing an extended ODE endowed with a time-invariant function is here proposed. This scheme enables us to examine the accuracy of the numerical integration of an ODE that may itself have had no invariant. These quantities are constructed by referring to the Nosé-Hoover molecular dynamics equation and its related conserved quantity. By applying this procedure to several molecular dynamics equations, the conventional conserved quantity individually defined in each dynamics can be reproduced in a uniform, generalized way; our concept allows a transparent outlook underlying these quantities and ideas. Developing the technique, for a certain class of ODEs we construct a numerical integrator that is not only explicit and symmetric, but preserves a unit Jacobian for a suitably defined extended ODE, which also provides an invariant. Our concept is thus to simply build a divergence-free extended ODE whose solution is just a lift-up of the original ODE, and to constitute an efficient integrator that preserves the phase-space volume on the extended system. We present precise discussions about the general mathematical properties of the integrator and provide specific conditions that should be incorporated for practical applications.

4. Arctic Mixed-Phase Cloud Properties from AERI Lidar Observations: Algorithm and Results from SHEBA

SciTech Connect

Turner, David D.

2005-04-01

A new approach to retrieve microphysical properties from mixed-phase Arctic clouds is presented. This mixed-phase cloud property retrieval algorithm (MIXCRA) retrieves cloud optical depth, ice fraction, and the effective radius of the water and ice particles from ground-based, high-resolution infrared radiance and lidar cloud boundary observations. The theoretical basis for this technique is that the absorption coefficient of ice is greater than that of liquid water from 10 to 13 μm, whereas liquid water is more absorbing than ice from 16 to 25 μm. MIXCRA retrievals are only valid for optically thin (τvisible < 6) single-layer clouds when the precipitable water vapor is less than 1 cm. MIXCRA was applied to the Atmospheric Emitted Radiance Interferometer (AERI) data that were collected during the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment from November 1997 to May 1998, where 63% of all of the cloudy scenes above the SHEBA site met this specification. The retrieval determined that approximately 48% of these clouds were mixed phase and that a significant number of clouds (during all 7 months) contained liquid water, even for cloud temperatures as low as 240 K. The retrieved distributions of effective radii for water and ice particles in single-phase clouds are shown to be different than the effective radii in mixed-phase clouds.

5. A New Retrieval Algorithm for OMI NO2: Tropospheric Results and Comparisons with Measurements and Models

NASA Technical Reports Server (NTRS)

Swartz, W. H.; Bucsela, E. J.; Lamsal, L. N.; Celarier, E. A.; Krotkov, N. A.; Bhartia, P. K.; Strahan, S. E.; Gleason, J. F.; Herman, J.; Pickering, K.

2012-01-01

Nitrogen oxides (NOx = NO + NO2) are important atmospheric trace constituents that impact tropospheric air pollution chemistry and air quality. We have developed a new NASA algorithm for the retrieval of stratospheric and tropospheric NO2 vertical column densities using measurements from the nadir-viewing Ozone Monitoring Instrument (OMI) on NASA's Aura satellite. The new products rely on an improved approach to stratospheric NO2 column estimation and stratosphere-troposphere separation and a new monthly NO2 climatology based on the NASA Global Modeling Initiative chemistry-transport model. The retrieval does not rely on daily model profiles, minimizing the influence of a priori information. We evaluate the retrieved tropospheric NO2 columns using surface in situ (e.g., AQS/EPA), ground-based (e.g., DOAS), and airborne measurements (e.g., DISCOVER-AQ). The new, improved OMI tropospheric NO2 product is available at high spatial resolution for the years 2005-present. We believe that this product is valuable for the evaluation of chemistry-transport models, examining the spatial and temporal patterns of NOx emissions, constraining top-down NOx inventories, and for the estimation of NOx lifetimes.

6. Direct Numerical Simulation of Boiling Multiphase Flows: State-of-the-Art, Modeling, Algorithmic and Computer Needs

SciTech Connect

Nourgaliev R.; Knoll D.; Mousseau V.; Berry R.

2007-04-01

The state of the art for Direct Numerical Simulation (DNS) of boiling multiphase flows is reviewed, focusing on the potential of available computational techniques and their current level of success in modeling several basic flow regimes (film, pool-nucleate and wall-nucleate boiling -- FB, PNB and WNB, respectively). Then, we discuss the multiphysics and multiscale nature of practical boiling flows in LWR reactors, which requires high-fidelity treatment of interfacial dynamics, phase change, hydrodynamics, compressibility, heat transfer, and the non-equilibrium thermodynamics and chemistry of liquid/vapor and fluid/solid-wall interfaces. Finally, we outline the framework for the Fervent code, being developed at INL for DNS of reactor-relevant boiling multiphase flows, with the purpose of gaining insight into the physics of multiphase flow regimes and generating a basis for effective-field modeling in terms of its formulation and closure laws.

7. Comparison of numerical simulation with experimental result for small scale one seater wing in ground effect (WIG) craft

Baharun, A. Tarmizi; Maimun, Adi; Ahmed, Yasser M.; Mobassher, M.; Nakisa, M.

2015-05-01

In this paper, the three-dimensional data and behavior of incompressible, steady air flow around a small-scale Wing in Ground Effect (WIG) craft were investigated numerically and then compared with the experimental results and published data. The computational simulation (CFD) adopted two turbulence models, k-ɛ and k-ω, to determine which model produces the smallest difference from the experimental results for the small-scale WIG tested in a wind tunnel. An unstructured mesh was used in the simulation, and drag coefficient (Cd) and lift coefficient (Cl) data were obtained with the angle of attack (AoA) of the WIG model as the parameter. Ansys ICEM was used for the meshing process, while Ansys Fluent was used as the solver. Aerodynamic forces, Cl, Cd and Cl/Cd, along with the fluid flow pattern of the small-scale WIG craft, are presented and discussed.

8. PINTEX Data: Numeric results from the Polarized Internal Target Experiments (PINTEX) at the Indiana University Cyclotron Facility

DOE Data Explorer

Meyer, H. O.

The PINTEX group studied proton-proton and proton-deuteron scattering and reactions between 100 and 500 MeV at the Indiana University Cyclotron Facility (IUCF). More than a dozen experiments made use of electron-cooled polarized proton or deuteron beams, orbiting in the 'Indiana Cooler' storage ring, and of a polarized atomic-beam target of hydrogen or deuterium in the path of the stored beam. The collaboration involved researchers from several midwestern universities, as well as a number of European institutions. The PINTEX program ended when the Indiana Cooler was shut down in August 2002. The website contains links to some of the numerical results, descriptions of experiments, and a complete list of publications resulting from PINTEX.

9. Validation and Analysis of Numerical Results for a Two-Pass Trapezoidal Channel With Different Cooling Configurations of Trailing Edge.

PubMed

Siddique, Waseem; El-Gabry, Lamyaa; Shevchuk, Igor V; Fransson, Torsten H

2013-01-01

High inlet temperatures in a gas turbine lead to an increase in the thermal efficiency of the gas turbine. This results in the requirement of cooling of gas turbine blades/vanes. Internal cooling of the gas turbine blades/vanes with the help of two-pass channels is one of the effective methods to reduce the metal temperatures. In particular, the trailing edge of a turbine vane is a critical area, where effective cooling is required. The trailing edge can be modeled as a trapezoidal channel. This paper describes the numerical validation of the heat transfer and pressure drop in a trapezoidal channel with and without orthogonal ribs at the bottom surface. A new concept of ribbed trailing edge has been introduced in this paper, which presents a numerical study of several trailing edge cooling configurations based on the placement of ribs at different walls. The baseline geometries are two-pass trapezoidal channels with and without orthogonal ribs at the bottom surface of the channel. Ribs induce secondary flow, which results in enhancement of heat transfer; therefore, for enhancement of heat transfer at the trailing edge, ribs are placed at the trailing edge surface in three different configurations: first without ribs at the bottom surface, then with ribs at the trailing edge surface in line with the ribs at the bottom surface, and finally with staggered ribs. Heat transfer and pressure drop are calculated at a Reynolds number of 9400 for all configurations. Different turbulence models are used for the validation of the numerical results. For the smooth channel, the low-Re k-ɛ model, realizable k-ɛ model, RNG k-ω model, low-Re k-ω model, and SST k-ω model are compared, whereas for the ribbed channel, the low-Re k-ɛ model and SST k-ω model are compared. The results show that the low-Re k-ɛ model, which predicts the heat transfer in the outlet pass of the smooth channels with a difference of +7%, underpredicts the heat transfer by -17% in the case of the ribbed channel compared to

10. Role of the sample thickness on the performance of cholesteric liquid crystal lasers: Experimental, numerical, and analytical results

Sanz-Enguita, G.; Ortega, J.; Folcia, C. L.; Aramburu, I.; Etxebarria, J.

2016-02-01

We have studied the performance characteristics of a dye-doped cholesteric liquid crystal (CLC) laser as a function of the sample thickness. The study has been carried out both from the experimental and theoretical points of view. The theoretical model is based on the kinetic equations for the population of the excited states of the dye and for the power of light generated within the laser cavity. From the equations, the threshold pump radiation energy Eth and the slope efficiency η are numerically calculated. Eth is rather insensitive to thickness changes, except for small thicknesses. In comparison, η shows a much more pronounced variation, exhibiting a maximum that determines the sample thickness for optimum laser performance. The predictions are in good accordance with the experimental results. Approximate analytical expressions for Eth and η as a function of the physical characteristics of the CLC laser are also proposed. These expressions present an excellent agreement with the numerical calculations. Finally, we comment on the general features of CLC layer and dye that lead to the best laser performance.

11. [Fractal dimension and histogram method: algorithm and some preliminary results of noise-like time series analysis].

PubMed

Pancheliuga, V A; Pancheliuga, M S

2013-01-01

In the present work, a methodological background for the histogram method of time series analysis is developed. The connection between the shapes of smoothed histograms constructed from short segments of time series of fluctuations and the fractal dimension of those segments is studied. It is shown that the fractal dimension possesses all the main properties of the histogram method. Based on this, a further development of the fractal dimension determination algorithm is proposed. This algorithm allows a more precise determination of the fractal dimension by using the "all possible combinations" method. Application of the method to noise-like time series analysis leads to results that could previously be obtained only by means of the histogram method based on human expert comparisons of histogram shapes.
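The fractal dimension of a noise-like series is commonly estimated with Higuchi's method, in which average curve lengths L(k) at coarse-grainings k scale as k^(-D). A compact implementation (Higuchi's standard algorithm is used here as a stand-in; the paper's "all possible combinations" refinement is not reproduced):

```python
import numpy as np

# Higuchi fractal dimension of a 1-D time series: the mean curve length
# L(k) over k subsampled curves scales as k**(-D); D is minus the slope
# of the log-log fit.
def higuchi_fd(x, kmax=10):
    x = np.asarray(x, dtype=float)
    n = len(x)
    lengths = []
    for k in range(1, kmax + 1):
        lk = []
        for m in range(k):                       # k offset curves
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)  # Higuchi normalization
            lk.append(dist * norm / k)
        lengths.append(np.mean(lk))
    k_arr = np.arange(1, kmax + 1)
    slope = np.polyfit(np.log(k_arr), np.log(lengths), 1)[0]
    return -slope

rng = np.random.default_rng(3)
white = rng.standard_normal(2000)       # white noise: D near 2
walk = np.cumsum(white)                 # Brownian path: D near 1.5
fd_white, fd_walk = higuchi_fd(white), higuchi_fd(walk)
```

The two reference signals show the expected ordering: uncorrelated noise fills the plane more densely (dimension near 2) than the smoother random walk (near 1.5), which is the kind of discrimination the histogram-method comparison above relies on.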

12. Usefulness of a metal artifact reduction algorithm for orthopedic implants in abdominal CT: phantom and clinical study results.

PubMed

Jeong, Seonji; Kim, Se Hyung; Hwang, Eui Jin; Shin, Cheong-Il; Han, Joon Koo; Choi, Byung Ihn

2015-02-01

OBJECTIVE. The purpose of this study was to evaluate the usefulness of a metal artifact reduction (MAR) algorithm for orthopedic prostheses in phantom and clinical CT. MATERIALS AND METHODS. An agar phantom with two sets of spinal screws was scanned at various tube voltage (80-140 kVp) and tube current-time (34-1032 mAs) settings. The orthopedic MAR algorithm was combined with filtered back projection (FBP) or iterative reconstruction. The mean SDs in three ROIs were compared among four datasets (FBP, iterative reconstruction, FBP with orthopedic MAR, and iterative reconstruction with orthopedic MAR). For the clinical study, the mean SDs of three ROIs and 4-point scaled image quality in 52 patients with metallic orthopedic prostheses were compared between CT images acquired with and without orthopedic MAR. The presence and type of image quality improvement with orthopedic MAR and the presence of orthopedic MAR-related new artifacts were also analyzed. RESULTS. In the phantom study, the mean SD with orthopedic MAR was significantly lower than that without orthopedic MAR regardless of dose settings and reconstruction algorithms (FBP versus iterative reconstruction). The mean SD near the metallic prosthesis in 52 patients was significantly lower on CT images with orthopedic MAR (28.04 HU) than those without it (49.21 HU). Image quality regarding metallic artifact was significantly improved with orthopedic MAR (rating of 2.60 versus 1.04). Notable reduction of metallic artifacts and better depiction of abdominal organs were observed in 45 patients. Diagnostic benefit was achieved in six patients, but orthopedic MAR-related new artifacts were seen in 30 patients. CONCLUSION. Use of the orthopedic MAR algorithm significantly reduces metal artifacts in CT of both phantoms and patients and has potential for improving diagnostic performance in patients with severe metallic artifacts.

13. Active compensation of aperture discontinuities for WFIRST-AFTA: analytical and numerical comparison of propagation methods and preliminary results with a WFIRST-AFTA-like pupil

Mazoyer, Johan; Pueyo, Laurent; Norman, Colin; N'Diaye, Mamadou; van der Marel, Roeland P.; Soummer, Rémi

2016-03-01

The new frontier in the quest for the highest contrast levels in the focal plane of a coronagraph is now the correction of the large diffraction artifacts introduced at the science camera by apertures of increasing complexity. Indeed, the future generation of space- and ground-based coronagraphic instruments will be mounted on on-axis and/or segmented telescopes; the design of coronagraphic instruments for such observatories is currently a domain undergoing rapid progress. One approach consists of using two sequential deformable mirrors (DMs) to correct for aberrations introduced by secondary mirror structures and segmentation of the primary mirror. The coronagraph for the WFIRST-AFTA mission will be the first of such instruments in space with a two-DM wavefront control system. Regardless of the control algorithm for these multiple DMs, they will have to rely on quick and accurate simulation of the propagation effects introduced by the out-of-pupil surface. In the first part of this paper, we present the analytical description of the different approximations used to simulate these propagation effects. In Appendix A, we prove analytically that in the special case of surfaces inducing a converging beam, the Fresnel method yields high fidelity for simulations of these effects. We provide numerical simulations showing this effect. In the second part, we use these tools in the framework of the active compensation of aperture discontinuities (ACAD) technique applied to pupil geometries similar to WFIRST-AFTA. We present these simulations in the context of the optical layout of the high-contrast imager for complex aperture telescopes, which will test ACAD on an optical bench. The results of this analysis show that using the ACAD method, an apodized pupil Lyot coronagraph, and the performance of our current DMs, we are able to obtain, in numerical simulations, a dark hole with a WFIRST-AFTA-like pupil. Our numerical simulation shows that we can obtain contrast better than 2×10-9 in

14. Evidence-based algorithm for diagnosis and assessment in psoriatic arthritis: results by Italian DElphi in psoriatic Arthritis (IDEA).

PubMed

Lapadula, G; Marchesoni, A; Salaffi, F; Ramonda, R; Salvarani, C; Punzi, L; Costa, L; Caso, F; Simone, D; Baiocchi, G; Scioscia, C; Di Carlo, M; Scarpa, R; Ferraccioli, G

2016-12-16

Psoriatic arthritis (PsA) is a chronic inflammatory disease involving the skin, peripheral joints, entheses, and axial skeleton. The disease is frequently associated with extra-articular manifestations (EAMs) and comorbidities. In order to create a protocol for PsA diagnosis and global assessment of patients with an algorithm based on anamnestic, clinical, laboratory and imaging procedures, we established a Delphi study on a national scale, named Italian DElphi in psoriatic Arthritis (IDEA). After a literature search, a Delphi poll involving 52 rheumatologists was performed. On the basis of the literature search, 202 potential items were identified. The steering committee planned at least two Delphi rounds. In the first Delphi round, the experts judged each of the 202 items using a score ranging from 1 to 9 based on its increasing clinical relevance. The question posed to the experts was: "How relevant is this procedure/observation/sign/symptom for assessment of a psoriatic arthritis patient?" Proposals of additional items, not included in the questionnaire, were also encouraged. The results of the poll were discussed by the steering committee, which evaluated the necessity of removing selected procedures or adding additional ones, according to criteria of clinical appropriateness and sustainability. A total of 43 recommended diagnosis and assessment procedures, recognized as items, were derived by combination of the Delphi survey and two National Expert Meetings, and grouped in different areas. Favourable opinion was reached in 100% of cases for several aspects covering the following areas: medical (familial and personal) history, physical evaluation, imaging tools, second-level laboratory tests, disease activity measurement and extra-articular manifestations. After performing PsA diagnosis, identification of specific disease activity scores and clinimetric approaches was suggested for assessing the different clinical subsets. Further, results showed the need for

15. Numerical study of RF exposure and the resulting temperature rise in the foetus during a magnetic resonance procedure

Hand, J. W.; Li, Y.; Hajnal, J. V.

2010-02-01

Numerical simulations of specific absorption rate (SAR) and temperature changes in a 26-week pregnant woman model within typical birdcage body coils as used in 1.5 T and 3 T MRI scanners are described. Spatial distributions of SAR and the resulting spatial and temporal changes in temperature are determined using a finite difference time domain method and a finite difference bio-heat transfer solver that accounts for discrete vessels. Heat transfer from foetus to placenta via the umbilical vein and arteries, as well as that across the foetal skin/amniotic fluid/uterine wall boundaries, is modelled. Results suggest that for procedures compliant with IEC normal mode conditions (maternal whole-body averaged SAR_MWB ≤ 2 W kg⁻¹, continuous or time-averaged over 6 min), whole foetal SAR, local foetal SAR_10g and average foetal temperature are within international safety limits. For continuous RF exposure at SAR_MWB = 2 W kg⁻¹ over periods of 7.5 min or longer, a maximum local foetal temperature >38 °C may occur. However, assessment of the risk posed by such maximum temperatures predicted in a static model is difficult because of frequent foetal movement. Results also confirm that when SAR_MWB = 2 W kg⁻¹, some local SAR_10g values in the mother's trunk and extremities exceed recommended limits.
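The kind of RF-heating computation described above can be caricatured in one dimension. The sketch below integrates the Pennes bio-heat equation with a uniform 2 W kg⁻¹ SAR source by an explicit finite-difference scheme; the tissue and perfusion parameters are rough assumed values, and the study's discrete-vessel heat transfer model is not represented.

```python
import numpy as np

# Minimal 1D explicit finite-difference sketch of the Pennes bio-heat
# equation with an RF (SAR) source term. All parameters are assumed,
# generic soft-tissue values, not those of the pregnant-woman model.
rho, c, k = 1000.0, 3600.0, 0.5        # density kg/m^3, heat capacity J/(kg K), conductivity W/(m K)
w_b, c_b, T_a = 5.0e-4, 3600.0, 37.0   # perfusion rate 1/s, blood heat capacity, arterial temp
SAR = 2.0                              # W/kg, uniform exposure (assumption)

L, n = 0.1, 101                        # 10 cm tissue slab
dx = L / (n - 1)
dt = 0.2 * rho * c * dx**2 / k         # well inside the explicit stability limit
T = np.full(n, 37.0)                   # start at body temperature

for _ in range(int(450.0 / dt)):       # simulate 7.5 minutes of exposure
    lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
    T = T + dt / (rho * c) * (k * lap + rho * SAR
                              - w_b * rho * c_b * (T - T_a))
    T[0] = T[-1] = 37.0                # fixed-temperature boundaries

print(round(float(T.max()), 2))        # peak tissue temperature after 7.5 min
```

With these assumed values the perfusion term limits the rise to a fraction of a kelvin, illustrating why exposure duration and perfusion both matter for the reported >38 °C worst case.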

17. Simulation Results of the Huygens Probe Entry and Descent Trajectory Reconstruction Algorithm

NASA Technical Reports Server (NTRS)

Kazeminejad, B.; Atkinson, D. H.; Perez-Ayucar, M.

2005-01-01

Cassini/Huygens is a joint NASA/ESA mission to explore the Saturnian system. The ESA Huygens probe is scheduled to be released from the Cassini spacecraft on December 25, 2004, enter the atmosphere of Titan in January 2005, and descend to Titan's surface using a sequence of different parachutes. To correctly interpret and correlate results from the probe science experiments and to provide a reference set of data for "ground-truthing" Orbiter remote sensing measurements, it is essential that the probe entry and descent trajectory reconstruction be performed as early as possible in the postflight data analysis phase. The Huygens Descent Trajectory Working Group (DTWG), a subgroup of the Huygens Science Working Team (HSWT), is responsible for developing a methodology and performing the entry and descent trajectory reconstruction. This paper provides an outline of the trajectory reconstruction methodology, preliminary probe trajectory retrieval test results using a simulated synthetic Huygens dataset developed by the Huygens Project Scientist Team at ESA/ESTEC, and a discussion of strategies for recovery from possible instrument failure.

18. Advanced Transport Delay Compensation Algorithms: Results of Delay Measurement and Piloted Performance Tests

NASA Technical Reports Server (NTRS)

Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.

2007-01-01

This report summarizes the results of delay measurement and piloted performance tests that were conducted to assess the effectiveness of the adaptive compensator and the state space compensator for alleviating the phase distortion of transport delay in the visual system of the VMS at the NASA Langley Research Center. Piloted simulation tests were conducted to assess the effectiveness of the two novel compensators in comparison to the McFarland predictor and the baseline system with no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. The glideslope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper rating of the handling qualities were employed for the analyses. The overall analyses show that the adaptive predictor yields slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator. The analyses also show that the state space predictor is moderately superior to the McFarland compensator for short delay and significantly superior for long delay.
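The idea behind predictive delay compensation can be shown with a toy example. The sketch below compensates a pure 48 ms transport delay on a sinusoidal input by first-order forward extrapolation; this is only a caricature of the principle underlying predictors such as McFarland's, not the compensators evaluated in the report.

```python
import numpy as np

# Toy delay compensation: predict the signal one delay interval ahead
# using its derivative, then apply the same transport delay. Signal,
# sample rate, and delay value are illustrative choices.
dt, delay = 0.004, 0.048            # 48 ms delay, as in the short-delay tests
t = np.arange(0.0, 2.0, dt)
u = np.sin(2 * np.pi * 1.0 * t)     # stand-in for a pilot control input

d = round(delay / dt)               # delay in whole samples
delayed = np.roll(u, d)             # uncompensated visual-system output

# first-order predictor: u_hat(t) ~ u(t) + delay * du/dt
du = np.gradient(u, dt)
predicted = np.roll(u + delay * du, d)

# RMS error vs. the undelayed signal (skip the wrapped-around start)
err_raw = np.sqrt(np.mean((delayed[d:] - u[d:]) ** 2))
err_comp = np.sqrt(np.mean((predicted[d:] - u[d:]) ** 2))
print(round(float(err_raw), 3), round(float(err_comp), 3))
```

For a smooth input the extrapolated signal tracks the undelayed one far more closely; real predictors must also suppress the noise amplification that differentiation causes, which is where the compared designs differ.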

19. Deriving Arctic Cloud Microphysics at Barrow, Alaska. Algorithms, Results, and Radiative Closure

SciTech Connect

Shupe, Matthew D.; Turner, David D.; Zwink, Alexander; Thieman, Mandana M.; Mlawer, Eli J.; Shippert, Timothy

2015-07-01

Cloud phase and microphysical properties control the radiative effects of clouds in the climate system and are therefore crucial to characterize in a variety of conditions and locations. An Arctic-specific, ground-based, multi-sensor cloud retrieval system is described here and applied to two years of observations from Barrow, Alaska. Over these two years, clouds occurred 75% of the time, with cloud ice and liquid each occurring nearly 60% of the time. Liquid water occurred at least 25% of the time even in the winter, and existed up to heights of 8 km. The vertically integrated mass of liquid was typically larger than that of ice. While it is generally difficult to evaluate the overall uncertainty of a comprehensive cloud retrieval system of this type, radiative flux closure analyses were performed where flux calculations using the derived microphysical properties were compared to measurements at the surface and top-of-atmosphere. Radiative closure biases were generally smaller for cloudy scenes relative to clear skies, while the variability of flux closure results was only moderately larger than under clear skies. The best closure at the surface was obtained for liquid-containing clouds. Radiative closure results were compared to those based on a similar, yet simpler, cloud retrieval system. These comparisons demonstrated the importance of accurate cloud phase classification, and specifically the identification of liquid water, for determining radiative fluxes. Enhanced retrievals of liquid water path for thin clouds were also shown to improve radiative flux calculations.

20. Genetic algorithms

NASA Technical Reports Server (NTRS)

Wang, Lui; Bayer, Steven E.

1991-01-01

Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
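As a concrete illustration of the basic concepts (selection, crossover, mutation), here is a minimal genetic algorithm on bit strings maximizing the textbook "OneMax" fitness (count of 1-bits); it is illustrative only and unrelated to the software tool described.

```python
import random

# Toy genetic algorithm: tournament selection, one-point crossover,
# bit-flip mutation. Problem and parameters are textbook choices.
random.seed(0)
N_BITS, POP, GENS, P_MUT = 32, 40, 60, 0.02

def fitness(ind):
    return sum(ind)            # "OneMax": number of 1-bits

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    # tournament selection: the fitter of two random individuals survives
    parents = [max(random.sample(pop, 2), key=fitness) for _ in range(POP)]
    nxt = []
    for a, b in zip(parents[::2], parents[1::2]):
        cut = random.randrange(1, N_BITS)          # one-point crossover
        for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
            # mutation: flip each bit with probability P_MUT
            nxt.append([bit ^ (random.random() < P_MUT) for bit in child])
    pop = nxt

best = max(pop, key=fitness)
print(fitness(best))
```

Even this bare-bones version reliably drives the population toward the all-ones optimum, which is the "survival of the fittest" dynamic the abstract refers to.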

1. The equation of state for stellar envelopes. II - Algorithm and selected results

NASA Technical Reports Server (NTRS)

Mihalas, Dimitri; Dappen, Werner; Hummer, D. G.

1988-01-01

A free-energy-minimization method for computing the dissociation and ionization equilibrium of a multicomponent gas is discussed. The adopted free energy includes terms representing the translational free energy of atoms, ions, and molecules; the internal free energy of particles with excited states; the free energy of a partially degenerate electron gas; and the configurational free energy from shielded Coulomb interactions among charged particles. Internal partition functions are truncated using an occupation probability formalism that accounts for perturbations of bound states by both neutral and charged perturbers. The entire theory is analytical and differentiable to all orders, so it is possible to write explicit analytical formulas for all derivatives required in a Newton-Raphson iteration; these are presented to facilitate future work. Some representative results for both Saha and free-energy-minimization equilibria are presented for a hydrogen-helium plasma with N(He)/N(H) = 0.10. These illustrate nicely the phenomena of pressure dissociation and ionization, and also demonstrate vividly the importance of choosing a reliable cutoff procedure for internal partition functions.
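For the pure-hydrogen case, the Saha equilibrium against which the free-energy-minimization results are compared can be sketched directly. The snippet below solves the Saha equation for the ionization fraction of pure hydrogen at two temperatures; the conditions are assumed round numbers, and none of the paper's molecular, degeneracy, or Coulomb corrections are included.

```python
import math

# cgs constants
k_B = 1.380649e-16   # Boltzmann constant, erg/K
m_e = 9.1093837e-28  # electron mass, g
h = 6.62607015e-27   # Planck constant, erg s
chi_H = 2.1787e-11   # hydrogen ionization energy, erg (13.6 eV)

def saha_ionization_fraction(T, n_tot):
    """Ionization fraction x = n_e/n_tot for pure hydrogen:
    solve x^2/(1 - x) = S(T)/n_tot for the positive root."""
    S = (2.0 * math.pi * m_e * k_B * T / h**2) ** 1.5 * math.exp(-chi_H / (k_B * T))
    r = S / n_tot
    # quadratic x^2 + r x - r = 0
    return (-r + math.sqrt(r * r + 4.0 * r)) / 2.0

# assumed stellar-envelope-like conditions, n_tot in cm^-3
x_cool = saha_ionization_fraction(6.0e3, 1.0e16)   # weakly ionized
x_hot = saha_ionization_fraction(2.0e4, 1.0e16)    # nearly fully ionized
print(round(x_cool, 5), round(x_hot, 5))
```

The sharp transition between these two regimes is the pressure/temperature ionization behaviour that the full free-energy treatment reproduces with far more physics.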

2. Automated analysis of Kokee-Wettzell Intensive VLBI sessions—algorithms, results, and recommendations

Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger

2015-11-01

The time-dependent variations in the rotation and orientation of the Earth are represented by a set of Earth Orientation Parameters (EOP). Currently, Very Long Baseline Interferometry (VLBI) is the only technique able to measure all EOP simultaneously and to provide direct observation of universal time, usually expressed as UT1-UTC. To produce estimates for UT1-UTC on a daily basis, 1-h VLBI experiments involving two or three stations are organised by the International VLBI Service for Geodesy and Astrometry (IVS), the IVS Intensive (INT) series. There is an ongoing effort to minimise the turn-around time for the INT sessions in order to achieve near real-time and high quality UT1-UTC estimates. As a step further towards true fully automated real-time analysis of UT1-UTC, we carry out an extensive investigation with INT sessions on the Kokee-Wettzell baseline. Our analysis starts with the first versions of the observational files in S- and X-band and includes an automatic group delay ambiguity resolution and ionospheric calibration. Several different analysis strategies are investigated. In particular, we focus on the impact of external information, such as meteorological and cable delay data provided in the station log-files, and a priori EOP information. The latter is studied by extensive Monte Carlo simulations. Our main findings are that it is easily possible to analyse the INT sessions in a fully automated mode to provide UT1-UTC with very low latency. The information found in the station log-files is important for the accuracy of the UT1-UTC results, provided that the data in the station log-files are reliable. Furthermore, to guarantee UT1-UTC with an accuracy of less than 20 μs, it is necessary to use predicted a priori polar motion data in the analysis that are not older than 12 h.

3. Development of a Quasi-3D Multiscale Modeling Framework: Motivation, Basic Algorithm and Preliminary results

Jung, Joon-Hee; Arakawa, Akio

2010-04-01

A new framework for modeling the atmosphere, which we call the quasi-3D (Q3D) multi-scale modeling framework (MMF), is developed with the objective of including cloud-scale three-dimensional effects in a GCM without necessarily using a global cloud-resolving model (CRM). It combines a GCM with a Q3D CRM that has the horizontal domain consisting of two perpendicular sets of channels, each of which contains a locally 3D grid-point array. For computing efficiency, the widths of the channels are chosen to be narrow. Thus, it is crucial to select a proper lateral boundary condition to realistically simulate the statistics of cloud and cloud-associated processes. Among the various possibilities, a periodic lateral boundary condition is chosen for the deviations from background fields that are obtained by interpolations from the GCM grid points. Since the deviations tend to vanish as the GCM grid size approaches that of the CRM, the whole system of the Q3D MMF can converge to a fully 3D global CRM. Consequently, the horizontal resolution of the GCM can be freely chosen depending on the objective of application, without changing the formulation of model physics. To evaluate the newly developed Q3D CRM in an efficient way, idealized experiments have been performed using a small horizontal domain. In these tests, the Q3D CRM uses only one pair of perpendicular channels with only two grid points across each channel. Comparing the simulation results with those of a fully 3D CRM, it is concluded that the Q3D CRM can reproduce most of the important statistics of the 3D solutions, including the vertical distributions of cloud water and precipitants, vertical transports of potential temperature and water vapor, and the variances and covariances of dynamical variables. The main improvement from a corresponding 2D simulation appears in the surface fluxes and the vorticity transports that cause the mean wind to change. A comparison with a simulation using a coarse-resolution 3D CRM

4. A Novel Numerical Algorithm of Numerov Type for 2D Quasi-linear Elliptic Boundary Value Problems

Mohanty, R. K.; Kumar, Ravindra

2014-11-01

In this article, using three function evaluations, we discuss a nine-point compact scheme of O(Δy² + Δx⁴) based on Numerov-type discretization for the solution of 2D quasi-linear elliptic equations with given Dirichlet boundary conditions, where Δy > 0 and Δx > 0 are grid sizes in the y- and x-directions, respectively. Iterative methods for the diffusion-convection equation are discussed in detail. We use block iterative methods to solve the systems of algebraic linear and nonlinear difference equations. Comparative results for some physical problems are given to illustrate the usefulness of the proposed method.
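The flavour of a Numerov-type compact discretization is easiest to see in one dimension. The sketch below solves the linear model problem u″ = f with the classical fourth-order Numerov scheme; the paper's nine-point 2D scheme for quasi-linear equations is a substantial generalization of this idea.

```python
import numpy as np

# Classical Numerov scheme for u''(x) = f(x) on [0, 1], u(0) = u(1) = 0:
#   u_{i-1} - 2 u_i + u_{i+1} = h^2/12 (f_{i-1} + 10 f_i + f_{i+1}),
# which is fourth-order accurate on a three-point stencil.
# Test problem (chosen here): f = -pi^2 sin(pi x), exact u = sin(pi x).
n = 17
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = -np.pi**2 * np.sin(np.pi * x)

A = (np.diag(-2.0 * np.ones(n - 2))
     + np.diag(np.ones(n - 3), 1)
     + np.diag(np.ones(n - 3), -1))
rhs = h**2 / 12.0 * (f[:-2] + 10.0 * f[1:-1] + f[2:])
# nonzero boundary values would be moved into rhs here; they are zero

u_inner = np.linalg.solve(A, rhs)
u = np.concatenate(([0.0], u_inner, [0.0]))

err = float(np.max(np.abs(u - np.sin(np.pi * x))))
print(err)   # fourth-order small even on this coarse grid
```

Even with only 17 nodes the maximum error is tiny, which is the point of compact high-order schemes: extra accuracy without widening the stencil.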

5. Topography and tectonics of the central New Madrid seismic zone: Results of numerical experiments using a three-dimensional boundary element program

NASA Technical Reports Server (NTRS)

Gomberg, Joan; Ellis, Michael

1994-01-01

We present results of a series of numerical experiments designed to test hypothetical mechanisms that drive deformation in the New Madrid seismic zone. Experiments are constrained by subtle topography and the distribution of seismicity in the region. We use a new boundary element algorithm that permits calculation of the three-dimensional deformation field. Surface displacement fields are calculated for the New Madrid zone under both far-field (plate-tectonic scale) and locally derived driving strains. Results demonstrate that surface displacement fields cannot distinguish between either a far-field simple or pure shear strain field or one that involves a deep shear zone beneath the upper crustal faults. Thus, neither geomorphic nor geodetic studies alone are expected to reveal the ultimate driving mechanism behind the present-day deformation. We have also tested hypotheses about strain accommodation within the New Madrid contractional step-over by including linking faults, two southwest-dipping and one vertical, recently inferred from microearthquake data. Only those models with step-over faults are able to predict the observed topography. Surface displacement fields for long-term, relaxed deformation predict the distribution of uplift and subsidence in the contractional step-over remarkably well. Generation of these displacement fields appears to require slip on both the two northeast-trending vertical faults and the two dipping faults in the step-over region, with very minor displacements occurring during the interseismic period when the northeast-trending vertical faults are locked. These models suggest that the gently dipping central step-over fault is a reverse fault and that the steeper fault, extending to the southeast of the step-over, acts as a normal fault over the long term.

6. Finite-difference algorithms for the time-domain Maxwell's equations - A numerical approach to RCS analysis

NASA Technical Reports Server (NTRS)

Vinh, Hoang; Dwyer, Harry A.; Van Dam, C. P.

1992-01-01

The applications of two CFD-based finite-difference methods to computational electromagnetics are investigated. In the first method, the time-domain Maxwell's equations are solved using the explicit Lax-Wendroff scheme; in the second method, the second-order wave equations satisfying Maxwell's equations are solved using the implicit Crank-Nicolson scheme. The governing equations are transformed to a generalized curvilinear coordinate system and solved on a body-conforming mesh using the scattered-field formulation. The induced surface current and the bistatic radar cross section are computed, and the results are validated for several two-dimensional test cases involving perfectly conducting scatterers submerged in transverse-magnetic plane waves.
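The explicit Lax-Wendroff scheme named above is easiest to demonstrate on the 1D advection equation, a scalar stand-in for the hyperbolic Maxwell system; the grid size and CFL number below are arbitrary illustrative choices.

```python
import numpy as np

# Lax-Wendroff for u_t + c u_x = 0 on a periodic domain:
#   u_i^{n+1} = u_i - (nu/2)(u_{i+1} - u_{i-1})
#                   + (nu^2/2)(u_{i+1} - 2 u_i + u_{i-1}),  nu = c dt/dx
c, n, cfl = 1.0, 200, 0.8
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
dt = cfl * dx / c

u = np.exp(-200.0 * (x - 0.3) ** 2)     # Gaussian pulse centred at x = 0.3

steps = int(round(0.5 / (c * dt)))       # advect half the periodic domain
for _ in range(steps):
    up, um = np.roll(u, -1), np.roll(u, 1)
    u = u - 0.5 * cfl * (up - um) + 0.5 * cfl**2 * (up - 2.0 * u + um)

# the pulse should now sit near x = 0.3 + 0.5 = 0.8
print(round(float(x[np.argmax(u)]), 2))
```

The scheme is second-order in space and time, and the small trailing oscillations it produces behind the pulse are the dispersion error the abstract's RCS validation cases must keep under control.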

7. Experimental results and numerical modeling of a high-performance large-scale cryopump. I. Test particle Monte Carlo simulation

SciTech Connect

Luo Xueli; Day, Christian; Haas, Horst; Varoutis, Stylianos

2011-07-15

For the torus of the nuclear fusion project ITER (originally the International Thermonuclear Experimental Reactor, but also Latin: the way), eight high-performance large-scale customized cryopumps must be designed and manufactured to accommodate the very high pumping speeds and throughputs of the fusion exhaust gas needed to maintain the plasma under stable vacuum conditions and to comply with other criteria which cannot be met by standard commercial vacuum pumps. Under an earlier research and development program, a model pump of reduced scale based on active cryosorption on charcoal-coated panels at 4.5 K was manufactured and tested systematically. The present article focuses on the simulation of the true three-dimensional complex geometry of the model pump by the newly developed ProVac3D Monte Carlo code. It is shown for gas throughputs of up to 1000 sccm (≈1.69 Pa·m³/s at T = 0 °C) in the free molecular regime that the numerical simulation results are in good agreement with the measured pumping speeds. In addition, the capture coefficient associated with the virtual region around the cryogenic panels and shields, which holds for higher throughputs, is calculated using this generic approach. This means that test particle Monte Carlo simulations in free molecular flow can be used not only for the optimization of the pumping system but also to supply the input parameters necessary for the future direct simulation Monte Carlo in the full flow regime.
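A minimal example of test particle Monte Carlo in the free molecular regime is the transmission probability of a diffusely reflecting cylindrical tube, a standard benchmark with known Clausing values; the sketch below is in this spirit only and is far simpler than the pump geometry handled by ProVac3D.

```python
import math
import random

# Test particle Monte Carlo: transmission probability of a cylinder of
# radius 1 and length L with diffuse (cosine-law) wall reflection.
random.seed(4)

def cosine_dir(n, t1, t2):
    """Cosine-law random direction about unit normal n (tangents t1, t2)."""
    u, phi = random.random(), 2.0 * math.pi * random.random()
    s, c = math.sqrt(u), math.sqrt(1.0 - u)
    return tuple(c * n[i] + s * (math.cos(phi) * t1[i] + math.sin(phi) * t2[i])
                 for i in range(3))

def transmitted(L):
    # entrance: uniform over the inlet disk, cosine-distributed about +z
    r, a = math.sqrt(random.random()), 2.0 * math.pi * random.random()
    x, y, z = r * math.cos(a), r * math.sin(a), 0.0
    d = cosine_dir((0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
    while True:
        dx, dy, dz = d
        A = dx * dx + dy * dy            # flight distance to the wall x^2+y^2=1
        if A > 1e-12:
            B = x * dx + y * dy
            t_wall = (-B + math.sqrt(max(B * B - A * (x * x + y * y - 1.0), 0.0))) / A
        else:
            t_wall = math.inf
        t_end = (L - z) / dz if dz > 0 else (-z / dz if dz < 0 else math.inf)
        if t_end < t_wall:
            return dz > 0                # left through an end: forward = transmitted
        # diffuse re-emission from the wall about the inward normal
        x, y, z = x + t_wall * dx, y + t_wall * dy, z + t_wall * dz
        n = (-x, -y, 0.0)
        d = cosine_dir(n, (0.0, 0.0, 1.0), (n[1], -n[0], 0.0))

N = 20000
W = sum(transmitted(1.0) for _ in range(N)) / N
print(round(W, 3))   # Clausing transmission probability, ~0.67 for L/R = 1
```

The same machinery — trace a particle until it escapes, tally where it goes — yields pumping speeds and capture coefficients once sticking probabilities on cold surfaces are added.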

8. The Trichoderma harzianum demon: complex speciation history resulting in coexistence of hypothetical biological species, recent agamospecies and numerous relict lineages

PubMed Central

2010-01-01

Background The mitosporic fungus Trichoderma harzianum (Hypocrea, Ascomycota, Hypocreales, Hypocreaceae) is a ubiquitous species in the environment, with some strains commercially exploited for the biological control of plant pathogenic fungi. Although T. harzianum is asexual (or anamorphic), its sexual stage (or teleomorph) has been described as Hypocrea lixii. Since recombination would be an important issue for the efficacy of an agent of biological control in the field, we investigated the phylogenetic structure of the species. Results Using DNA sequence data from three unlinked loci for each of 93 strains collected worldwide, we detected a complex speciation process revealing overlapping reproductively isolated biological species, recent agamospecies and numerous relict lineages with unresolved phylogenetic positions. Genealogical concordance and recombination analyses confirm the existence of two genetically isolated agamospecies, including T. harzianum sensu stricto, and two hypothetical holomorphic species related to but different from H. lixii. The exact phylogenetic position of the majority of strains was not resolved and therefore attributed to a diverse network of recombining strains conventionally called the 'pseudoharzianum matrix'. Since H. lixii and T. harzianum are evidently genetically isolated, the anamorph-teleomorph combination comprising H. lixii/T. harzianum in one holomorph must be rejected in favor of two separate species. Conclusions Our data illustrate a complex speciation within the H. lixii - T. harzianum species group, which is based on the coexistence and interaction of organisms with different evolutionary histories and on the absence of strict genetic borders between them. PMID:20359347

9. Time-dependent thermocapillary convection in a Cartesian cavity - Numerical results for a moderate Prandtl number fluid

NASA Technical Reports Server (NTRS)

Peltier, L. J.; Biringen, S.

1993-01-01

The present numerical simulation explores a thermal-convective mechanism for oscillatory thermocapillary convection in a shallow Cartesian cavity for a Prandtl number 6.78 fluid. The computer program developed for this simulation integrates the two-dimensional, time-dependent Navier-Stokes equations and the energy equation by a time-accurate method on a stretched, staggered mesh. Flat free surfaces are assumed. The instability is shown to depend upon temporal coupling between large scale thermal structures within the flow field and the temperature sensitive free surface. A primary result of this study is the development of a stability diagram presenting the critical Marangoni number separating steady from the time-dependent flow states as a function of aspect ratio for the range of values between 2.3 and 3.8. Within this range, a minimum critical aspect ratio near 2.3 and a minimum critical Marangoni number near 20,000 are predicted below which steady convection is found.

10. Source contributions to PM2.5 in Guangdong province, China by numerical modeling: Results and implications

Yin, Xiaohong; Huang, Zhijiong; Zheng, Junyu; Yuan, Zibing; Zhu, Wenbo; Huang, Xiaobo; Chen, Duohong

2017-04-01

As one of the most populous and developed provinces in China, Guangdong province (GD) has been experiencing regional haze problems. Identification of source contributions to the ambient PM2.5 level is essential for developing effective control strategies. In this study, using the most up-to-date emission inventory and a validated numerical model, source contributions to ambient PM2.5 from eight emission source sectors (agriculture, biogenic, dust, industry, power plants, residential, mobile and others) in GD in 2012 were quantified. Results showed that mobile sources are the dominant contributors to ambient PM2.5 (24.0%) in the Pearl River Delta (PRD) region, the central and most developed area of GD, while industry sources are the major contributors (21.5%-23.6%) in the Northeastern GD (NE-GD) and Southwestern GD (SW-GD) regions. Although many industries have been encouraged to move from central GD to peripheral areas such as NE-GD and SW-GD, their emissions still have an important impact on the PM2.5 level in the PRD. In addition, agriculture sources contribute 17.5% of ambient PM2.5 in GD, indicating the importance of regulations on agricultural activities, which have been largely ignored in current air quality management. Super-regional contributions were also quantified; their contributions to ambient PM2.5 in GD are significant, with notable seasonal differences, but they might be overestimated, and further studies are needed to better quantify the transport impacts.

11. A Study of The Eastern Mediterranean Hydrology and Circulation By Comparing Observation and High Resolution Numerical Model Results.

Alhammoud, B.; Béranger, K.; Mortier, L.; Crépon, M.

The Eastern Mediterranean hydrology and circulation are studied by comparing the results of a high resolution primitive equation model (described in a dedicated session: Béranger et al.) with observations. The model has a horizontal grid mesh of 1/16° and 43 z-levels in the vertical. The model was initialized with the MODB5 climatology and has been forced during 11 years by the daily sea surface fluxes provided by the European Centre for Medium-Range Weather Forecasts analysis in a perpetual-year mode corresponding to the year March 1998-February 1999. At the end of the run, the numerical model is able to accurately reproduce the major water masses of the Eastern Mediterranean Basin (Levantine Surface Water, modified Atlantic Water, Levantine Intermediate Water, and Eastern Mediterranean Deep Water). Comparisons with the POEM observations reveal good agreement. While the initial conditions of the model are somewhat different from the POEM observations, during the last year of the simulation we found that the water mass stratification matches that of the observations quite well in the seasonal mean. During the 11 years of simulation, the model drifts slightly in the layers below the thermocline. Nevertheless, many important physical processes were reproduced. One example is that the dispersal of Adriatic Deep Water into the Levantine Basin is represented. In addition, convective activity located in the northern part of the Levantine Basin occurs in spring, as expected. The surface circulation is in agreement with in-situ and satellite observations. Some well-known mesoscale features of the upper thermocline circulation are shown. Seasonal variability of transports through the Sicily, Otranto and Cretan straits is investigated as well. This work was supported by the French MERCATOR project and SHOM.

12. Numerical analysis of intensity signals resulting from genotyping pooled DNA samples in beef cattle and broiler chicken.

PubMed

Reverter, A; Henshall, J M; McCulloch, R; Sasazaki, S; Hawken, R; Lehnert, S A

2014-05-01

Pooled genomic DNA has been proposed as a cost-effective approach in genomewide association studies (GWAS). However, algorithms for genotype calling of biallelic SNP are not adequate with pooled DNA samples because they assume the presence of 2 fluorescent signals, 1 for each allele, and operate under the expectation that at most 2 copies of the variant allele can be found for any given SNP and DNA sample. We adapt analytical methodology from 2-channel gene expression microarray technology to SNP genotyping of pooled DNA samples. Using 5 datasets from beef cattle and broiler chicken of varying degrees of complexity in terms of design and phenotype, continuous and dichotomous, we show that both differential hybridization (M = green minus red intensity signal) and abundance (A = average of red and green intensities) provide useful information in the prediction of SNP allele frequencies. This is predominantly true when making inference about extreme SNP that are either nearly fixed or highly polymorphic. We propose the use of model-based clustering via mixtures of bivariate normal distributions as an optimal framework to capture the relationship between hybridization intensity and allele frequency from pooled DNA samples. The range of M and A values observed here are in agreement with those reported within the context of gene expression microarray and also with those from SNP array data within the context of analytical methodology for the identification of copy number variants. In particular, we confirm that highly polymorphic SNP yield a strong signal from both channels (red and green) while lowly or nonpolymorphic SNP yield a strong signal from 1 channel only. We further confirm that when the SNP allele frequencies are known, either because the individuals in the pools or from a closely related population are themselves genotyped, a multiple regression model with linear and quadratic components can be developed with high prediction accuracy. We conclude that when
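The M and A quantities defined above are simple to compute. The snippet below forms them from simulated two-channel intensities under an assumed linear signal model (not real genotyping data), showing why M tracks the allele frequency of the pool.

```python
import numpy as np

# M (differential hybridization) and A (abundance) from two-channel
# intensities, as in the abstract: M = green - red, A = mean(red, green).
# The linear intensity model and noise level below are assumptions.
rng = np.random.default_rng(1)

p = rng.uniform(0.0, 1.0, 500)                         # pool allele frequencies
green = 1000.0 * p + rng.normal(0.0, 20.0, p.size)     # allele-A channel
red = 1000.0 * (1.0 - p) + rng.normal(0.0, 20.0, p.size)  # allele-B channel

M = green - red              # differential hybridization
A = 0.5 * (green + red)      # abundance

# Under this toy model M is almost linear in p, which is why it is most
# informative for extreme (nearly fixed or highly polymorphic) SNPs.
corr = float(np.corrcoef(M, p)[0, 1])
print(round(corr, 3))
```

Real arrays deviate from this linearity (dye bias, saturation), which is why the paper fits mixtures of bivariate normals in the (M, A) plane rather than a single regression line.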

13. Numerical analysis of wellbore integrity: results from a field study of a natural CO2 reservoir production well

Crow, W.; Gasda, S. E.; Williams, D. B.; Celia, M. A.; Carey, J. W.

2008-12-01

An important aspect of the risk associated with geological CO2 sequestration is the integrity of existing wellbores that penetrate geological layers targeted for CO2 injection. CO2 leakage may occur through multiple pathways along a wellbore, including through micro-fractures and micro-annuli within the "disturbed zone" surrounding the well casing. The effective permeability of this zone is a key parameter of wellbore integrity required for validation of numerical models. This parameter depends on a number of complex factors, including long-term attack by aggressive fluids, poor well completion and actions related to production of fluids through the wellbore. Recent studies have sought to replicate downhole conditions in the laboratory to identify the mechanisms and rates at which cement deterioration occurs. However, field tests are essential to understanding the in situ leakage properties of the millions of wells that exist in the mature sedimentary basins in North America. In this study, we present results from a field study of a 30-year-old production well from a natural CO2 reservoir. The wellbore was potentially exposed to a 96% CO2 fluid from the time of cement placement, and therefore cement degradation may be a significant factor leading to leakage pathways along this wellbore. A series of downhole tests was performed, including bond logs and extraction of sidewall cores. The cores were analyzed in the laboratory for mineralogical and hydrologic properties. A pressure test was conducted over an 11-ft section of well to determine the extent of hydraulic communication along the exterior of the well casing. Through analysis of this pressure test data, we are able to estimate the effective permeability of the disturbed zone along the exterior of the wellbore over this 11-ft section. We find the estimated range of effective permeability from the field test is consistent with laboratory analysis and bond log data. The cement interfaces with casing and/or formation are

14. A numerical algorithm to evaluate the transient response for a synchronous scanning streak camera using a time-domain Baum-Liu-Tesche equation

Pei, Chengquan; Tian, Jinshou; Wu, Shengli; He, Jiai; Liu, Zhen

2016-10-01

The transient response has a great influence on the electromagnetic compatibility of synchronous scanning streak cameras (SSSCs). In this paper we propose a numerical method to evaluate the transient response of the scanning deflection plate (SDP). First, we created a simplified circuit model for the SDP used in an SSSC, and then derived the Baum-Liu-Tesche (BLT) equation in the frequency domain. From the frequency-domain BLT equation, its transient counterpart was derived. The circuit parameters, together with the transient BLT equation, were used to compute the transient load voltage and load current, with a novel numerical method employed to satisfy the continuity equation. Several numerical simulations were conducted to verify the proposed method. The computed results were then compared with transient responses obtained by a frequency-domain/fast Fourier transform (FFT) method, and the agreement was excellent for highly conducting cables. The benefit of deriving the BLT equation in the time domain is that it may be used with slight modifications to calculate the transient response, and the error can be controlled by a computer program. The results showed that the transient voltage was up to 1000 V and the transient current was approximately 10 A, so protective measures should be taken to improve electromagnetic compatibility.

15. Preliminary test results of a flight management algorithm for fuel conservative descents in a time based metered traffic environment. [flight tests of an algorithm to minimize fuel consumption of aircraft based on flight time]

NASA Technical Reports Server (NTRS)

Knox, C. E.; Cannon, D. G.

1979-01-01

A flight management algorithm designed to improve the accuracy of delivering the airplane, fuel efficiently, to a metering fix at a time designated by air traffic control is discussed. The algorithm provides a 3-D path with time control (4-D) for a test B-737 airplane to make an idle-thrust, clean-configured descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance, with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm and the results of the flight tests are discussed.

16. Initialization of a Numerical Mesoscale Model with ALEXI-derived Volumetric Soil Moisture: Case Results and Validation

Mecikalski, J. R.; Hain, C. R.; Anderson, M. C.

2006-05-01

Soil moisture plays a vital role in the partitioning of sensible and latent heat fluxes in the surface energy budget, although high spatial-resolution observations of it are quite rare. The ALEXI model contains the two-source land-surface representation of Norman et al. (1995), which partitions surface fluxes and radiometric temperature into canopy and soil contributions based on the fraction of vegetation cover within the scene. Anderson et al. (1997) and Mecikalski (1999) detail the implementation of ALEXI as a regional-scale application over the continental United States. This model relies on remote sensing data, including GOES-derived surface brightness temperature changes and AVHRR-derived land cover properties, as well as synoptic weather data, to operate (Mecikalski, 1999). This version of the ALEXI algorithm has been run daily on a 10 km resolution grid from 2002 to the present. ALEXI diagnoses a fraction of potential evapotranspiration (fPET) for both the surface layer (0-5 cm) and root-zone (5-200 cm), given a calculation of the potential ET for each pixel. In current mesoscale modeling, the fraction of potential ET can be directly related to a fraction of available water, which in turn can be used to calculate volumetric soil moisture for a given soil texture. Soil moisture conditions of the surface and root-zone yield a distinctive thermal signature, where moisture deficiency leads to surfaces warming more quickly. Current land-surface models (LDAS, NLDAS) such as those used in the North American Mesoscale Model (NAM) use antecedent precipitation as the primary component in the calculation of volumetric soil moisture. These models use four layers in their soil model (0-10, 10-40, 40-100, and 100-200 cm), while ALEXI provides derived volumetric soil moisture for only two layers within the 0-200 cm depth. This discrepancy can be resolved by blending the two ALEXI layers to provide a reasonable representation of the observed

17. An open-loop ground-water heat pump system: transient numerical modeling and site experimental results

Lo Russo, S.; Taddia, G.; Gnavi, L.

2012-04-01

KEY WORDS: Open-loop ground water heat pump; Feflow; Low-enthalpy; Thermal Affected Zone; Turin; Italy
The increasing diffusion of low-enthalpy geothermal open-loop groundwater heat pumps (GWHP) providing building air conditioning requires a careful assessment of the overall effects on the groundwater system, especially in urban areas where several plants can be close together and interfere. One of the fundamental aspects in the realization of an open-loop low-enthalpy geothermal system is therefore the capacity to forecast the thermal alteration induced in the ground by the geothermal system itself. The impact on the groundwater temperature in the area surrounding the re-injection well (Thermal Affected Zone, TAZ) is directly linked to the aquifer properties. The transient dynamics of groundwater discharge and temperature variations should also be considered to assess the subsurface environmental effects of the plant. The experimental groundwater heat pump system used in this study is installed at the Politecnico di Torino (NW Italy, Piedmont Region). This plant provides summer cooling for the university buildings. The system is composed of a pumping well, a downgradient injection well and a control piezometer, and is constantly monitored by multiparameter probes measuring the dynamics of groundwater temperature. A finite element subsurface flow and transport simulator (FEFLOW) was used to investigate the thermal alteration of the aquifer. Simulations were performed continuously over May-October 2010 (the cooling period). The numerical simulation of heat transport in the aquifer was solved under transient conditions, considering only heat transfer within the saturated aquifer, without any heat dispersion above or below the saturated zone, owing to the lack of detailed information regarding the unsaturated zone. Model results were compared with experimental temperature data derived from groundwater

18. On the Improvement of Numerical Weather Prediction by Assimilation of Hub Height Wind Information in Convection-Resolving Models

Declair, Stefan; Stephan, Klaus; Potthast, Roland

2015-04-01

Determining the amount of weather dependent renewable energy is a demanding task for transmission system operators (TSOs). In the project EWeLiNE, funded by the German government, the German Weather Service and the Fraunhofer Institute on Wind Energy and Energy System Technology strongly support the TSOs by developing innovative weather- and power-forecasting models and tools for grid integration of weather dependent renewable energy. The key in the energy prediction process chain is the numerical weather prediction (NWP) system. With focus on wind energy, we address model errors in the planetary boundary layer, which is characterized by strong spatial and temporal fluctuations in wind speed, to improve the basis of the weather dependent renewable energy prediction. Model data can be corrected by postprocessing techniques such as model output statistics and calibration using historical observational data. On the other hand, the latest observations can be used in a preprocessing technique called data assimilation (DA). In DA, the model output from a previous time step is combined with observational data such that the new model state used to initialize the model integration (the analysis) best fits both the latest model data and the observations. Model errors can therefore already be reduced before the model integration. In this contribution, the results of an impact study are presented. A so-called OSSE (Observing System Simulation Experiment) is performed using the convection-resolving COSMO-DE model of the German Weather Service and a 4D-DA technique, a Newtonian relaxation method also called nudging. Starting from a nature run (treated as the truth), conventional observations and artificial wind observations at hub height are generated. In a control run, the basic model setup of the nature run is slightly perturbed to drag the model away from the previously generated truth, and a free forecast is computed based on the analysis using only conventional

19. Revised numerical wrapper for PIES code

Raburn, Daniel; Reiman, Allan; Monticello, Donald

2015-11-01

A revised external numerical wrapper has been developed for the Princeton Iterative Equilibrium Solver (PIES code), which is capable of calculating 3D MHD equilibria with islands. The numerical wrapper has been demonstrated to greatly improve the rate of convergence in numerous cases corresponding to equilibria in the TFTR device where magnetic islands are present. The numerical wrapper makes use of a Jacobian-free Newton-Krylov solver along with adaptive preconditioning and a sophisticated subspace-restricted Levenberg-Marquardt backtracking algorithm. The details of the numerical wrapper and several sample results are presented.

20. An algorithm for the automatic synchronization of Omega receivers

NASA Technical Reports Server (NTRS)

Stonestreet, W. M.; Marzetta, T. L.

1977-01-01

The Omega navigation system and the requirement for receiver synchronization are discussed. A description of the synchronization algorithm is provided. The numerical simulation and its associated assumptions are examined, and results of the simulation are presented. The suggested form of the synchronization algorithm and the suggested receiver design values are surveyed. A Fortran listing of the synchronization algorithm used in the simulation is also included.

1. Comparative Results of AIRS AMSU and CrIS/ATMS Retrievals Using a Scientifically Equivalent Retrieval Algorithm

NASA Technical Reports Server (NTRS)

Susskind, Joel; Kouvaris, Louis; Iredell, Lena

2016-01-01

The AIRS Science Team Version-6 retrieval algorithm is currently producing high quality level-3 Climate Data Records (CDRs) from AIRS/AMSU which are critical for understanding climate processes. The AIRS Science Team is finalizing an improved Version-7 retrieval algorithm to reprocess all old and future AIRS data. AIRS CDRs should eventually cover the period September 2002 through at least 2020. CrIS/ATMS is the only scheduled follow on to AIRS/AMSU. The objective of this research is to prepare for generation of long term CrIS/ATMS level-3 data using a finalized retrieval algorithm that is scientifically equivalent to AIRS/AMSU Version-7.

2. Comparative Results of AIRS/AMSU and CrIS/ATMS Retrievals Using a Scientifically Equivalent Retrieval Algorithm

NASA Technical Reports Server (NTRS)

Susskind, Joel; Kouvaris, Louis; Iredell, Lena

2016-01-01

The AIRS Science Team Version-6 retrieval algorithm is currently producing high quality level-3 Climate Data Records (CDRs) from AIRS/AMSU which are critical for understanding climate processes. The AIRS Science Team is finalizing an improved Version-7 retrieval algorithm to reprocess all old and future AIRS data. AIRS CDRs should eventually cover the period September 2002 through at least 2020. CrIS/ATMS is the only scheduled follow on to AIRS/AMSU. The objective of this research is to prepare for generation of long term CrIS/ATMS CDRs using a retrieval algorithm that is scientifically equivalent to AIRS/AMSU Version-7.

3. Thermodiffusion in concentrated ferrofluids: A review and current experimental and numerical results on non-magnetic thermodiffusion

SciTech Connect

Sprenger, Lisa; Lange, Adrian; Odenbach, Stefan

2013-12-15

Ferrofluids are colloidal suspensions consisting of magnetic nanoparticles dispersed in a carrier liquid. Their thermodiffusive behaviour is rather strong compared to molecular binary mixtures, leading to a Soret coefficient S_T of 0.16 K⁻¹. Former experiments with dilute magnetic fluids have been performed with thermogravitational columns or horizontal thermodiffusion cells by different research groups. For the horizontal thermodiffusion cell, an earlier analytical approach has been used to solve the phenomenological diffusion equation in one dimension, assuming a constant concentration gradient over the cell's height. The current experimental work is based on the horizontal separation cell and emphasises the comparison of the concentration development in magnetic fluids of different concentrations and at different temperature gradients. The ferrofluid investigated is the kerosene-based EMG905 (Ferrotec), compared with the APG513A (Ferrotec), both containing magnetite nanoparticles. The experiments prove that the separation process depends linearly on the temperature gradient and that a constant concentration gradient develops in the setup due to the separation. Analytical one-dimensional and numerical three-dimensional approaches to solving the diffusion equation are derived and compared with the solution used so far for dilute fluids, to see whether the assumptions made earlier also hold for more highly concentrated fluids. Both the analytical and the numerical solutions, in either a phenomenological or a thermodynamic description, are able to reproduce the separation signal obtained from the experiments. The Soret coefficient is then determined to be 0.184 K⁻¹ in the analytical case and 0.29 K⁻¹ in the numerical case. Former theoretical approaches for dilute magnetic fluids underestimate the strength of the separation in the case of a concentrated ferrofluid.
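
For a horizontal cell at steady state, the thermodiffusive flux balances ordinary diffusion, giving grad(c) = -S_T c(1-c) grad(T). A minimal sketch of the resulting concentration difference, using the S_T quoted above; the mean concentration and temperature difference are assumed illustration values, not the paper's experimental conditions:

```python
# Steady-state Soret separation in a horizontal cell: a minimal sketch.
# S_T is taken from the abstract; c0 and dT are hypothetical values.
S_T = 0.16      # Soret coefficient, 1/K (from the abstract)
c0 = 0.05       # assumed mean particle volume fraction
dT = 10.0       # assumed temperature difference across the cell, K

# For small separations, the steady-state concentration difference across
# the cell follows from grad(c) = -S_T * c*(1 - c) * grad(T):
dc = -S_T * c0 * (1.0 - c0) * dT
print(dc)       # negative: particles accumulate on the cold side
```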

4. A Food Chain Algorithm for Capacitated Vehicle Routing Problem with Recycling in Reverse Logistics

Song, Qiang; Gao, Xuexia; Santos, Emmanuel T.

2015-12-01

This paper introduces the capacitated vehicle routing problem with recycling in reverse logistics and designs a food chain algorithm for it. Illustrative examples are selected for simulation and comparison. Numerical results show that the food chain algorithm outperforms the genetic algorithm, particle swarm optimization, and the quantum evolutionary algorithm.

5. Programming the gradient projection algorithm

NASA Technical Reports Server (NTRS)

Hargrove, A.

1983-01-01

The gradient projection method of numerical optimization, which is applied to problems having linear constraints but nonlinear objective functions, is described and analyzed. The algorithm is found to be efficient and thorough for small systems, but requires the addition of auxiliary methods and programming for large-scale systems with severe nonlinearities. In order to verify the theoretical results, a digital computer is used to simulate the algorithm.

6. Frontiers in Numerical Relativity

Evans, Charles R.; Finn, Lee S.; Hobill, David W.

2011-06-01

Preface; Participants; Introduction; 1. Supercomputing and numerical relativity: a look at the past, present and future David W. Hobill and Larry L. Smarr; 2. Computational relativity in two and three dimensions Stuart L. Shapiro and Saul A. Teukolsky; 3. Slowly moving maximally charged black holes Robert C. Ferrell and Douglas M. Eardley; 4. Kepler's third law in general relativity Steven Detweiler; 5. Black hole spacetimes: testing numerical relativity David H. Bernstein, David W. Hobill and Larry L. Smarr; 6. Three dimensional initial data of numerical relativity Ken-ichi Oohara and Takashi Nakamura; 7. Initial data for collisions of black holes and other gravitational miscellany James W. York, Jr.; 8. Analytic-numerical matching for gravitational waveform extraction Andrew M. Abrahams; 9. Supernovae, gravitational radiation and the quadrupole formula L. S. Finn; 10. Gravitational radiation from perturbations of stellar core collapse models Edward Seidel and Thomas Moore; 11. General relativistic implicit radiation hydrodynamics in polar sliced space-time Paul J. Schinder; 12. General relativistic radiation hydrodynamics in spherically symmetric spacetimes A. Mezzacappa and R. A. Matzner; 13. Constraint preserving transport for magnetohydrodynamics John F. Hawley and Charles R. Evans; 14. Enforcing the momentum constraints during axisymmetric spacelike simulations Charles R. Evans; 15. Experiences with an adaptive mesh refinement algorithm in numerical relativity Matthew W. Choptuik; 16. The multigrid technique Gregory B. Cook; 17. Finite element methods in numerical relativity P. J. Mann; 18. Pseudo-spectral methods applied to gravitational collapse Silvano Bonazzola and Jean-Alain Marck; 19. Methods in 3D numerical relativity Takashi Nakamura and Ken-ichi Oohara; 20. Nonaxisymmetric rotating gravitational collapse and gravitational radiation Richard F. Stark; 21. Nonaxisymmetric neutron star collisions: initial results using smooth particle hydrodynamics

7. Investigating role of ice-ocean interaction on glacier dynamic: Results from numerical modeling applied to Petermann Glacier

Nick, F. M.; van der Veen, C. J.; Vieli, A.; Pattyn, F.; Hubbard, A.; Box, J. E.

2010-12-01

Calving of icebergs and bottom melting from ice shelves account for roughly half the ice transferred from the Greenland Ice Sheet into the surrounding ocean, and virtually all of the ice loss from the Antarctic Ice Sheet. Petermann Glacier (north Greenland), with its ~17 km wide and ~60 km long floating ice shelf, is experiencing high rates of bottom melting. The recent partial disintegration of its shelf (in August 2010) presents a natural experiment for investigating the dynamic response of the ice sheet to shelf retreat. We apply a numerical ice flow model with a physically based calving criterion based on crevasse depth to investigate the contribution of processes such as shelf disintegration, bottom melting, sea ice or sikkusak disintegration and surface runoff to the mass balance of Petermann Glacier, and to assess its stability. Our modeling study provides insights into the role of ice-ocean interaction and into the response of Petermann Glacier to its recent massive ice loss.

8. Role of ice-ocean interaction on glacier instability: Results from numerical modelling applied to Petermann Glacier

Nick, Faezeh M.; Hubbard, Alun; van der Veen, Kees; Vieli, Andreas

2010-05-01

Calving of icebergs and bottom melting from ice shelves account for roughly half the ice transferred from the Greenland Ice Sheet into the surrounding ocean, and virtually all of the ice loss from the Antarctic Ice Sheet. Petermann Glacier (north Greenland), with its 16 km wide and 80 km long floating tongue, experiences massive bottom melting. We apply a numerical ice flow model with a physically based calving criterion based on crevasse depth to investigate the contribution of processes such as bottom melting, sea ice or sikkusak disintegration, surface runoff and iceberg calving to the mass balance and instability of Petermann Glacier and its ice shelf. Our modelling study provides insights into the role of ice-ocean interaction and into how to incorporate calving in ice sheet models, improving our ability to predict future ice sheet change.

9. Role of ice-ocean interaction on glacier instability: Results from numerical modeling applied to Petermann Glacier (Invited)

Nick, F.; Hubbard, A.; Vieli, A.; van der Veen, C. J.; Box, J. E.; Bates, R.; Luckman, A. J.

2009-12-01

Calving of icebergs and bottom melting from ice shelves account for roughly half the ice transferred from the Greenland Ice Sheet into the surrounding ocean, and virtually all of the ice loss from the Antarctic Ice Sheet. Petermann Glacier (north Greenland), with its 16 km wide and 80 km long floating tongue, experiences massive bottom melting. We apply a numerical ice flow model with a physically based calving criterion based on crevasse depth to investigate the contribution of processes such as bottom melting, sea ice or sikkusak disintegration, surface runoff and iceberg calving to the mass balance and instability of Petermann Glacier and its ice shelf. Our modeling study provides insights into the role of ice-ocean interaction and into how to incorporate calving in ice sheet models, improving our ability to predict future ice sheet change.

10. Numerical simulations - Some results for the 2- and 3-D Hubbard models and a 2-D electron phonon model

NASA Technical Reports Server (NTRS)

Scalapino, D. J.; Sugar, R. L.; White, S. R.; Bickers, N. E.; Scalettar, R. T.

1989-01-01

Numerical simulations on the half-filled three-dimensional Hubbard model clearly show the onset of Neel order. Simulations of the two-dimensional electron-phonon Holstein model show the competition between the formation of a Peierls-CDW state and a superconducting state. However, the behavior of the partly filled two-dimensional Hubbard model is more difficult to determine. At half-filling, the antiferromagnetic correlations grow as T is reduced. Doping away from half-filling suppresses these correlations, and it is found that there is a weak attractive pairing interaction in the d-wave channel. However, the strength of the pair field susceptibility is weak at the temperatures and lattice sizes that have been simulated, and the nature of the low-temperature state of the nearly half-filled Hubbard model remains open.

11. Preliminary Structural Design Using Topology Optimization with a Comparison of Results from Gradient and Genetic Algorithm Methods

NASA Technical Reports Server (NTRS)

Burt, Adam O.; Tinker, Michael L.

2014-01-01

In this paper, genetic-algorithm-based and gradient-based topology optimization are presented in application to a real hardware design problem. Preliminary design of a planetary lander mockup structure is accomplished using these methods, which provide major weight savings by addressing structural efficiency during the design cycle. This paper presents two alternative formulations of the topology optimization problem. The first is the widely used gradient-based implementation using commercially available algorithms. The second is formulated using genetic algorithms and internally developed capabilities. These two approaches are applied to a practical design problem for hardware that has been built, tested and proven to be functional. Both formulations converged on similar solutions and were therefore shown to be equally valid implementations of the process. This paper discusses both formulations at a high level.

12. Planning fuel-conservative descents in an airline environment using a small programmable calculator: algorithm development and flight test results

SciTech Connect

Knox, C.E.; Vicroy, D.D.; Simmon, D.A.

1985-05-01

A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant Mach/airspeed descent schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

13. Planning fuel-conservative descents in an airline environment using a small programmable calculator: Algorithm development and flight test results

NASA Technical Reports Server (NTRS)

Knox, C. E.; Vicroy, D. D.; Simmon, D. A.

1985-01-01

A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant Mach/airspeed descent schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

14. Algorithms and Libraries

NASA Technical Reports Server (NTRS)

Dongarra, Jack

1998-01-01

This exploratory study initiated our inquiry into algorithms and applications that would benefit from a latency-tolerant approach to algorithm building, including the construction of new algorithms where appropriate. In a multithreaded execution, when a processor reaches a point where remote memory access is necessary, the request is sent out on the network and a context switch occurs to a new thread of computation. This effectively masks the long and unpredictable latency of remote loads, thereby providing tolerance to remote access latency. We began to develop standards to profile various algorithm and application parameters, such as the degree of parallelism, granularity, precision, instruction set mix, interprocessor communication, latency, etc. These tools will continue to develop and evolve as the Information Power Grid environment matures. To provide a richer context for this research, the project also focused on issues of fault tolerance and computation migration of numerical algorithms and software. During the initial phase we sought to increase our understanding of the bottlenecks in single-processor performance. Our work began by developing an approach for the automatic generation and optimization of numerical software for processors with deep memory hierarchies and pipelined functional units. Based on the results achieved in this study, we plan to study other architectures of interest, including development of cost models and code generators appropriate to these architectures.

15. Preliminary numerical simulations of the 27 February 2010 Chile tsunami: first results and hints in a tsunami early warning perspective

Tinti, S.; Tonini, R.; Armigliato, A.; Zaniboni, F.; Pagnoni, G.; Gallazzi, Sara; Bressan, Lidia

2010-05-01

The tsunamigenic earthquake (M 8.8) that occurred offshore central Chile on 27 February 2010 can be classified as a typical subduction-zone earthquake. The effects of the ensuing tsunami were devastating along the Chilean coast, especially between the cities of Valparaiso and Talcahuano and in the Juan Fernandez islands. The tsunami propagated across the entire Pacific Ocean, hitting with variable intensity almost all the coasts facing the basin. While the far-field propagation was tracked quite well, almost in real time, by the warning centres and reasonably well reproduced by the forecast models, the toll of lives and the severity of the damage caused by the tsunami in the near field occurred with no local alert or warning, and sadly confirms that the protection of communities close to tsunami sources is still an unresolved problem in the tsunami early warning field. The purpose of this study is twofold. On one side, we perform numerical simulations of the tsunami starting from different earthquake models built on the basis of the preliminary seismic parameters (location, magnitude and focal mechanism) made available by seismological agencies immediately after the event, or retrieved from more detailed and refined studies published online in the following days and weeks. The comparison with the available records of both offshore DART buoys and coastal tide gauges is used to place some preliminary constraints on the best-fitting fault model. The numerical simulations are performed by means of the finite-difference code UBO-TSUFD, developed and maintained by the Tsunami Research Team of the University of Bologna, Italy, which can solve both the linear and non-linear versions of the shallow-water equations on nested grids. The second purpose of this study is to use the conclusions drawn in the previous part in a tsunami early warning perspective. In the framework of the EU-funded project DEWS (Distant Early Warning System), we will

16. Damage evaluation on a multi-story framed structure: comparison of results retrieved from algorithms based on modal and non-modal parameters

Auletta, Gianluca; Ditommaso, Rocco; Iacovino, Chiara; Carlo Ponzo, Felice; Pina Limongelli, Maria

2016-04-01

Continuous monitoring based on vibrational identification methods is increasingly employed with the aim of evaluating the state of health of existing structures and infrastructure and the performance of safety interventions over time. In case of earthquakes, data acquired by continuous monitoring systems can be used to localize and quantify possible damage to a monitored structure using appropriate algorithms based on the variations of structural parameters. Most damage identification methods are based on the variation of a few modal and/or non-modal parameters: the former are strictly related to the structural eigenfrequencies, equivalent viscous damping factors and mode shapes; the latter are based on the variation of parameters related to the geometric characteristics of the monitored structure whose variations could be correlated with damage. In this work, results retrieved from the application of a curvature-evolution-based method and an interpolation-error-based method are compared. The first method evaluates the variation of the curvature (related to the fundamental mode of vibration) over time and compares the variations before, during and after the earthquake. The Interpolation Method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. A damage feature is defined in terms of the error related to the use of a spline function in interpolating the ODSs of the structure: statistically significant variations of the interpolation error between two successive inspections of the structure indicate the onset of damage. Both methods have been applied using numerical data retrieved from nonlinear FE models and experimental tests on scaled structures carried out on the shaking table of the University of Basilicata. Acknowledgements: This study was partially funded by the Italian Civil Protection Department within the project DPC

17. Quasi-Periodic Oscillations and Frequencies in an Accretion Disk and Comparison with the Numerical Results from a Non-Rotating Black Hole Computed by the GRH Code

Donmez, Orhan

The shock wave created on the accretion disk by different physical phenomena (accretion flows with pressure gradients, star-disk interaction, etc.) may be responsible for the Quasi-Periodic Oscillations (QPOs) observed in X-ray binaries. We present the set of characteristic frequencies associated with an accretion disk around rotating and non-rotating black holes for the one-particle case. These persistent frequencies are the result of the rotating pattern in an accretion disk. We compare the frequencies from two different numerical results for fluid flow around a non-rotating black hole with the one-particle case. The numerical results are taken from Refs. 1 and 2, obtained using a fully general relativistic hydrodynamical code with a non-self-gravitating disk. While the first numerical result features a relativistic torus around the black hole, the second includes a one-armed spiral shock wave produced by star-disk interaction. Some physical modes present in the QPOs can be excited in numerical simulations of relativistic tori and spiral waves on the accretion disk. The consequences of these different dynamical structures on the accretion disk for the QPOs are discussed in detail.

18. ICPES analyses using full image spectra and astronomical data fitting algorithms to provide diagnostic and result information

SciTech Connect

Spencer, W.A.; Goode, S.R.

1997-10-01

ICP emission analyses are prone to errors due to changes in power level, nebulization rate, plasma temperature, and sample matrix. As a result, accurate analyses of complex samples often require frequent bracketing with matrix matched standards. Information needed to track and correct the matrix errors is contained in the emission spectrum. But most commercial software packages use only the analyte line emission to determine concentrations. Changes in plasma temperature and the nebulization rate are reflected by changes in the hydrogen line widths, the oxygen emission, and neutral ion line ratios. Argon and off-line emissions provide a measure to correct the power level and the background scattering occurring in the polychromator. The authors' studies indicated that changes in the intensity of the Ar 404.4 nm line readily flag most matrix and plasma condition modifications. Carbon lines can be used to monitor the impact of organics on the analyses, and calcium and argon lines can be used to correct for spectral drift and alignment. Spectra of contaminated groundwater and simulated defense waste glasses were obtained using a Thermo Jarrell Ash ICP that has an echelle CID detector system covering the 190-850 nm range. The echelle images were translated to the FITS data format, which astronomers recommend for data storage. Data reduction packages such as those in the ESO-MIDAS/ECHELLE and DAOPHOT programs were tried with limited success. The radial point spread function was evaluated as a possible improved peak intensity measurement instead of the common pixel averaging approach used in the commercial ICP software. Several algorithms were evaluated to align and automatically scale the background and reference spectra. A new data reduction approach that utilizes standard reference images, successive subtractions, and residual analyses has been evaluated to correct for matrix effects.

19. Static correlations in macro-ionic suspensions: Analytic and numerical results in a hypernetted-chain-mean-spherical approximation

Khan, Sheema; Morton, Thomas L.; Ronis, David

1987-05-01

The static correlations in highly charged colloidal and micellar suspensions, with and without added electrolyte, are examined using the hypernetted-chain approximation (HNC) for the macro-ion-macro-ion correlations and the mean-spherical approximation for the other correlations. By taking the point-ion limit for the counter-ions, an analytic solution for the counter-ion part of the problem can be obtained; this maps the macro-ion part of the problem onto a one-component problem where the macro-ions interact via a screened Coulomb potential with the Gouy-Chapman form for the screening length and an effective charge that depends on the macro-ion-macro-ion pair correlations. Numerical solutions of the effective one-component equation in the HNC approximation are presented, and in particular, the effects of macro-ion charge, nonadditive core diameters, and added electrolyte are examined. As we show, there can be a strong renormalization of the effective macro-ion charge and reentrant melting in colloidal crystals.
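
The effective one-component picture the abstract describes has the macro-ions interacting through a screened Coulomb (Yukawa) pair potential. A minimal sketch with purely illustrative parameter values (in the paper the effective charge and screening length come out of the HNC/MSA solution rather than being fixed by hand):

```python
import numpy as np

def screened_coulomb(r_nm, z_eff=100.0, bjerrum_nm=0.714, kappa_inv_nm=0.05):
    """Dimensionless pair energy beta*u(r) of macro-ions interacting through
    a screened Coulomb (Yukawa) potential, the effective one-component form
    the HNC/MSA reduction arrives at. All parameter values are illustrative:
    z_eff is an assumed effective charge, bjerrum_nm the Bjerrum length of
    water at room temperature, kappa_inv_nm an assumed inverse screening
    length in 1/nm."""
    return z_eff**2 * bjerrum_nm * np.exp(-kappa_inv_nm * r_nm) / r_nm
```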

20. Ephemeral liquid water at the surface of the martian North Polar Residual Cap: Results of numerical modelling

Losiak, Anna; Czechowski, Leszek; Velbel, Michael A.

2015-12-01

Gypsum, a mineral that requires water to form, is common on the surface of Mars. Most of it originated more than 3.5 Gyr ago, when the Red Planet was more humid than it is now. However, occurrences of gypsum dune deposits around the North Polar Residual Cap (NPRC) seem to be surprisingly young: late Amazonian in age. This shows that liquid water was present on Mars even at times when surface conditions were as cold and dry as they are today. A recently proposed mechanism for gypsum formation involves weathering of dust within ice (e.g., Niles, P.B., Michalski, J. [2009]. Nat. Geosci. 2, 215-220.). However, none of the previous studies have determined if this process is possible under current martian conditions. Here, we use numerical modelling of heat transfer to show that during the warmest days of the summer, solar irradiation may be sufficient to melt pure water ice located below a layer of dark dust particles (albedo ⩽ 0.13) lying on the steepest sections of the equator-facing slopes of the spiral troughs within the martian NPRC. During the times of high irradiance at the north pole (every 51 ka; caused by variation of orbital and rotational parameters of Mars, e.g., Laskar, J. et al. [2002]. Nature 419, 375-377.) this process could have taken place over larger parts of the spiral troughs. The existence of small amounts of liquid water close to the surface, even under current martian conditions, fulfils one of the main requirements necessary to explain the formation of the extensive gypsum deposits around the NPRC. It also changes our understanding of the degree of current geological activity on Mars and has important implications for estimating the astrobiological potential of Mars.

1. Analysis of the global free infra-gravity wave climate for the SWOT mission, and preliminary results of numerical modelling

Rawat, A.; Aucan, J.; Ardhuin, F.

2012-12-01

All sea level variations of the order of 1 cm at scales under 30 km are of great interest for the future Surface Water Ocean Topography (SWOT) satellite mission. That satellite should provide high-resolution maps of the sea surface height for analysis of meso- to sub-mesoscale currents, but that will require a filtering of all gravity wave motions in the data. Free infragravity waves (FIGWs) are generated and radiate offshore when swells and/or wind seas and their associated bound infragravity waves impact exposed coastlines. Free infragravity waves have dominant periods between 1 and 10 minutes and horizontal wavelengths of up to tens of kilometers. Given these wavelengths and amplitudes, the infragravity wave field can constitute a significant fraction of the signal measured by the future SWOT mission. In this study, we analyze the data from recovered bottom pressure recorders of the Deep-ocean Assessment and Reporting of Tsunami (DART) program. This analysis includes data spanning several years between 2006 and 2010, from stations at different latitudes in the North and South Pacific, the North Atlantic, the Gulf of Mexico and the Caribbean Sea. We present and discuss the following conclusions: (1) The amplitude of free infragravity waves can reach several centimeters, higher than the precision sought for the SWOT mission. (2) The free infragravity signal is higher in the Eastern North Pacific than in the Western North Pacific, possibly due to smaller incident swell and seas impacting the nearby coastlines. (3) Free infragravity waves are higher in the North Pacific than in the North Atlantic, possibly owing to different average continental shelf configurations in the two basins. (4) There is a clear seasonal cycle at the high-latitude North Atlantic and Pacific stations that is much less pronounced or absent at the tropical stations, consistent with the generation mechanism of free infragravity waves. Our numerical model

2. Predicting regional emissions and near-field air concentrations of soil fumigants using modest numerical algorithms: a case study using 1,3-dichloropropene.

PubMed

Cryer, S A; van Wesenbeeck, I J; Knuteson, J A

2003-05-21

Soil fumigants, used to control nematodes and crop disease, can volatilize from the soil application zone and into the atmosphere to create the potential for human inhalation exposure. An objective of this work is to illustrate the ability of simple numerical models to correctly predict pesticide volatilization rates from agricultural fields and to expand emission predictions to nearby air concentrations for use in the exposure component of a risk assessment. This work focuses on a numerical system using two U.S. EPA models (PRZM3 and ISCST3) to predict regional volatilization and nearby air concentrations for the soil fumigant 1,3-dichloropropene. New approaches deal with links to regional databases, seamless coupling of emission and dispersion models, incorporation of Monte Carlo sampling techniques to account for parametric uncertainty, and model input sensitivity analysis. Predicted volatility flux profiles of 1,3-dichloropropene (1,3-D) from soil for tarped and untarped fields were compared against field data and used as source terms for ISCST3. PRZM3 can successfully estimate correct order of magnitude regional soil volatilization losses of 1,3-D when representative regional input parameters are used (soil, weather, chemical, and management practices). Estimated 1,3-D emission losses and resulting air concentrations were investigated for five geographically diverse regions. Air concentrations (15-day averages) are compared with the current U.S. EPA's criteria for human exposure and risk assessment to determine appropriate setback distances from treated fields. Sensitive input parameters for volatility losses were functions of the region being simulated.
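
The Monte Carlo wrapper around the emission model can be sketched as below; the input distributions and the toy flux model used in testing are hypothetical placeholders, not PRZM3 inputs.

```python
import numpy as np

def monte_carlo_emission(model, n=2000, seed=0):
    """Propagate parametric uncertainty through an emission model by Monte
    Carlo sampling, in the spirit of the wrapper placed around PRZM3.
    The input distributions below are hypothetical placeholders."""
    rng = np.random.default_rng(seed)
    soil_temp_c = rng.normal(20.0, 3.0, n)      # assumed distribution
    app_rate = rng.uniform(100.0, 150.0, n)     # assumed distribution, kg/ha
    flux = np.array([model(t, r) for t, r in zip(soil_temp_c, app_rate)])
    return flux.mean(), np.percentile(flux, [5.0, 95.0])
```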

3. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log-log mesh optimization and local monotonicity preserving Steffen spline

Maglevanny, I. I.; Smolar, V. A.

2016-01-01

We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous, can originate from various sources (so "data gaps" can appear), and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log-log scaling data transform, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not between two adjacent grid points. The proposed technique is found to give the most accurate results at short computational time. This simple method thus makes it feasible to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
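
A pure-NumPy sketch of the approach: Steffen's (1990) monotonicity-preserving cubic applied to log-log transformed samples. This is an illustrative reimplementation with simple one-sided end slopes, not the authors' C++ code.

```python
import numpy as np

def steffen_slopes(x, y):
    """Node slopes of Steffen's (1990) monotonicity-preserving cubic:
    between two nodes the interpolant can have no extremum that the data
    do not already have."""
    h, s = np.diff(x), np.diff(y) / np.diff(x)
    d = np.empty_like(y)
    d[0], d[-1] = s[0], s[-1]            # simple one-sided end slopes
    for i in range(1, len(x) - 1):
        p = (s[i-1] * h[i] + s[i] * h[i-1]) / (h[i-1] + h[i])
        d[i] = (np.sign(s[i-1]) + np.sign(s[i])) * min(
            abs(s[i-1]), abs(s[i]), 0.5 * abs(p))
    return d

def steffen_eval(x, y, xq):
    """Cubic Hermite evaluation with Steffen slopes."""
    d = steffen_slopes(x, y)
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    hi = x[i+1] - x[i]
    t = (xq - x[i]) / hi
    return ((2*t**3 - 3*t**2 + 1) * y[i] + (t**3 - 2*t**2 + t) * hi * d[i]
            + (-2*t**3 + 3*t**2) * y[i+1] + (t**3 - t**2) * hi * d[i+1])

def loglog_interp(x, y):
    """ELF-style interpolation: Steffen spline on log-log transformed data,
    mapped back by exponentiation (requires positive samples)."""
    lx, ly = np.log(x), np.log(y)
    return lambda xq: np.exp(steffen_eval(lx, ly, np.log(xq)))
```

On monotone sections of the sampled ELF the fine-grid interpolant stays monotone, which is exactly the no-spurious-oscillation property the abstract emphasizes.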

4. Evaluation of ground-penetrating radar to detect free-phase hydrocarbons in fractured rocks - Results of numerical modeling and physical experiments

USGS Publications Warehouse

Lane, J.W.; Buursink, M.L.; Haeni, F.P.; Versteeg, R.J.

2000-01-01

The suitability of common-offset ground-penetrating radar (GPR) to detect free-phase hydrocarbons in bedrock fractures was evaluated using numerical modeling and physical experiments. The results of one- and two-dimensional numerical modeling at 100 megahertz indicate that GPR reflection amplitudes are relatively insensitive to fracture apertures ranging from 1 to 4 mm. The numerical modeling and physical experiments indicate that differences in the fluids that fill fractures significantly affect the amplitude and the polarity of electromagnetic waves reflected by subhorizontal fractures. Air-filled and hydrocarbon-filled fractures generate low-amplitude reflections that are in-phase with the transmitted pulse. Water-filled fractures create reflections with greater amplitude and opposite polarity than those reflections created by air-filled or hydrocarbon-filled fractures. The results from the numerical modeling and physical experiments demonstrate it is possible to distinguish water-filled fracture reflections from air- or hydrocarbon-filled fracture reflections; nevertheless, subsurface heterogeneity, antenna coupling changes, and other sources of noise will likely make it difficult to observe these changes in GPR field data. This indicates that the routine application of common-offset GPR reflection methods for detection of hydrocarbon-filled fractures will be problematic. Ideal cases will require appropriately processed, high-quality GPR data, ground-truth information, and detailed knowledge of subsurface physical properties. Conversely, the sensitivity of GPR methods to changes in subsurface physical properties as demonstrated by the numerical and experimental results suggests the potential of using GPR methods as a monitoring tool. GPR methods may be suited for monitoring pumping and tracer tests, changes in site hydrologic conditions, and remediation activities.
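
The polarity and amplitude contrasts described above follow from the normal-incidence reflection coefficient at a rock/fill interface. A thin fracture actually responds as a thin layer, so this single-interface sketch with illustrative permittivities only captures the sign and relative-magnitude argument:

```python
import numpy as np

def reflection_coefficient(eps_host, eps_fill):
    """Normal-incidence amplitude reflection coefficient at an interface
    between host rock and fracture fill, in the low-loss approximation
    R = (sqrt(eps1) - sqrt(eps2)) / (sqrt(eps1) + sqrt(eps2))."""
    n1, n2 = np.sqrt(eps_host), np.sqrt(eps_fill)
    return (n1 - n2) / (n1 + n2)

# Representative relative permittivities (illustrative values, not the paper's)
eps_rock, eps_air, eps_water, eps_hydrocarbon = 6.0, 1.0, 80.0, 2.0
r_air = reflection_coefficient(eps_rock, eps_air)
r_water = reflection_coefficient(eps_rock, eps_water)
r_hc = reflection_coefficient(eps_rock, eps_hydrocarbon)
```

Water's high permittivity flips the sign of R and enlarges its magnitude relative to air or hydrocarbon fill, matching the polarity argument in the abstract.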

5. Improved Antishock Air-Gap Control Algorithm with Acceleration Feedforward Control for High-Numerical Aperture Near-Field Storage System Using Solid Immersion Lens

Kim, Jung-Gon; Shin, Won-Ho; Hwang, Hyun-Woo; Jeong, Jun; Park, Kyoung-Su; Park, No-Cheol; Yang, Hyunseok; Park, Young-Pil; Moo Park, Jin; Son, Do Hyeon; Kyo Seo, Jeong; Choi, In Ho

2010-08-01

A near-field storage system using a solid immersion lens (SIL) has been studied as a high-density optical disc drive system. The major goal of this research is to improve the robustness of the air-gap controller for a SIL-based near-field recording (NFR) system against dynamic disturbances, such as external shocks. The servo system is essential in near-field (NF) technology because the nanogap distance between the SIL and the disc is 50 nm or less. Also, the air-gap distance must be maintained without collision between the SIL and the disc to detect a stable gap error and read-out signals when an external shock is applied. Therefore, we propose an improved air-gap control algorithm using only an acceleration feedforward controller (AFC) to maintain the air-gap distance without contact for a 4.48 G at 10 ms shock. Thus, the antishock control performance for the SIL-based NF storage system in the presence of external shocks is markedly improved. Furthermore, to enhance the performance of the antishock air-gap control, we use the AFC with a double disturbance observer and a dead-zone nonlinear controller. As a result, the air-gap distance is maintained without contact for a 6.56 G at 10 ms shock.
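
The control law has the general shape of feedback on the gap error plus an acceleration feedforward term that counters a measured shock before it appears in the error signal. The sketch below uses a plain PD feedback path and illustrative gains, and omits the disturbance observer and dead-zone elements of the full controller.

```python
def air_gap_command(gap_error_nm, gap_error_rate, accel_measured,
                    kp=0.5, kd=0.01, k_ff=0.2):
    """Air-gap actuator command: PD feedback on the gap error plus an
    acceleration feedforward term. Gains and units are illustrative only;
    the paper's controller additionally uses a double disturbance observer
    and a dead-zone nonlinear controller."""
    return kp * gap_error_nm + kd * gap_error_rate + k_ff * accel_measured
```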

6. Relaxation dynamics of Sierpinski hexagon fractal polymer: Exact analytical results in the Rouse-type approach and numerical results in the Zimm-type approach

Jurjiu, Aurel; Galiceanu, Mircea; Farcasanu, Alexandru; Chiriac, Liviu; Turcu, Flaviu

2016-12-01

In this paper, we focus on the relaxation dynamics of Sierpinski hexagon fractal polymer. The relaxation dynamics of this fractal polymer is investigated in the framework of the generalized Gaussian structure model using both Rouse and Zimm approaches. In the Rouse-type approach, by performing real-space renormalization transformations, we determine analytically the complete eigenvalue spectrum of the connectivity matrix. Based on the eigenvalues obtained through iterative algebraic relations we calculate the averaged monomer displacement and the mechanical relaxation moduli (storage modulus and loss modulus). The evaluation of the dynamical properties in the Rouse-type approach reveals that they obey scaling in the intermediate time/frequency domain. In the Zimm-type approach, which includes the hydrodynamic interactions, the relaxation quantities do not show scaling. The theoretical findings with respect to scaling in the intermediate domain of the relaxation quantities are well supported by experimental results.
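
Given the connectivity-matrix eigenvalues, the storage and loss moduli in the Rouse picture follow from a sum of Maxwell-like modes. A sketch, per monomer and in units of ν·k_B·T, with an illustrative relaxation rate σ (in the paper the eigenvalues come from the real-space renormalization recursion):

```python
import numpy as np

def relaxation_moduli(eigenvalues, omega, sigma=1.0):
    """Storage and loss moduli of a generalized Gaussian structure in the
    Rouse picture. Each nonzero connectivity-matrix eigenvalue lambda
    contributes a Maxwell mode with relaxation time tau = 1/(sigma*lambda)."""
    tau = 1.0 / (sigma * np.asarray(eigenvalues, dtype=float))
    wt = np.outer(np.asarray(omega, dtype=float), tau)
    g_storage = (wt**2 / (1.0 + wt**2)).mean(axis=1)   # G'(omega)
    g_loss = (wt / (1.0 + wt**2)).mean(axis=1)         # G''(omega)
    return g_storage, g_loss
```

The scaling behaviour the abstract reports would show up as a power-law stretch of G' and G'' in the intermediate frequency window when the full fractal eigenvalue spectrum is inserted.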

7. Direct Numerical Simulation of Liquid Nozzle Spray with Comparison to Shadowgraphy and X-Ray Computed Tomography Experimental Results

van Poppel, Bret; Owkes, Mark; Nelson, Thomas; Lee, Zachary; Sowell, Tyler; Benson, Michael; Vasquez Guzman, Pablo; Fahrig, Rebecca; Eaton, John; Kurman, Matthew; Kweon, Chol-Bum; Bravo, Luis

2014-11-01

In this work, we present high-fidelity Computational Fluid Dynamics (CFD) results of liquid fuel injection from a pressure-swirl atomizer and compare the simulations to experimental results obtained using both shadowgraphy and phase-averaged X-ray computed tomography (CT) scans. The CFD and experimental results focus on the dense near-nozzle region to identify the dominant mechanisms of breakup during primary atomization. Simulations are performed using the NGA code of Desjardins et al (JCP 227 (2008)) and employ the volume of fluid (VOF) method proposed by Owkes and Desjardins (JCP 270 (2013)), a second order accurate, un-split, conservative, three-dimensional VOF scheme providing second order density fluxes and capable of robust and accurate high density ratio simulations. Qualitative features and quantitative statistics are assessed and compared for the simulation and experimental results, including the onset of atomization, spray cone angle, and drop size and distribution.

8. The BR eigenvalue algorithm

SciTech Connect

Geist, G.A.; Howell, G.W.; Watkins, D.S.

1997-11-01

The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30-60 in computing time and a factor of over 100 in matrix storage space.
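
For contrast, the competitor QR method can be sketched in its plainest, unshifted form; this is not the BR algorithm, and production QR codes add shifts, deflation, and an initial Hessenberg reduction.

```python
import numpy as np

def qr_iteration_eigenvalues(a, iters=500):
    """Plain (unshifted) QR iteration A <- R Q. Each step is an orthogonal
    similarity transform, so the spectrum is preserved while the matrix
    drifts toward triangular form; the diagonal then approximates the
    eigenvalues (assuming real, distinct eigenvalues here)."""
    a = np.array(a, dtype=float)
    for _ in range(iters):
        q, r = np.linalg.qr(a)
        a = r @ q
    return np.sort(np.diag(a))
```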

9. Performance Comparison Of Evolutionary Algorithms For Image Clustering

Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.

2014-09-01

Evolutionary computation tools are able to process real-valued numerical sets in order to extract suboptimal solutions of a designed problem. Data clustering algorithms have been intensively used for image segmentation in remote sensing applications. Despite the wide usage of evolutionary algorithms for data clustering, their clustering performances have been scarcely studied by using clustering validation indexes. In this paper, recently proposed evolutionary algorithms (i.e., the Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA) and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (i.e., k-means, FCM, and SOM networks) have been used to cluster images, and their performances have been compared by using four clustering validation indexes. Experimental test results revealed that evolutionary algorithms give more reliable cluster centers than classical clustering techniques, but their convergence time is quite long.
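
A minimal baseline of such a comparison: Lloyd's k-means plus one common clustering validation index (Calinski-Harabasz). Both are generic textbook forms, not the paper's implementations, and the index is only one of the four used there.

```python
import numpy as np

def kmeans(x, k, iters=50, seed=0):
    """Lloyd's k-means, the classical baseline the evolutionary
    cluster-center searches are compared against."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = ((x[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([x[labels == j].mean(0) for j in range(k)])
    return labels, centers

def calinski_harabasz(x, labels, centers):
    """One common clustering validation index: between-cluster dispersion
    over within-cluster dispersion (higher is better)."""
    n, k, mean = len(x), len(centers), x.mean(0)
    between = sum((labels == j).sum() * ((c - mean) ** 2).sum()
                  for j, c in enumerate(centers))
    within = sum(((x[labels == j] - c) ** 2).sum()
                 for j, c in enumerate(centers))
    return (between / (k - 1)) / (within / (n - k))
```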

10. Numerical evaluation of cavitation shedding structure around 3D Hydrofoil: Comparison of PANS, LES and RANS results with experiments

Ji, B.; Peng, X. X.; Long, X. P.; Luo, X. W.; Wu, Y. L.

2015-12-01

Results of cavitating turbulent flow simulation around a twisted hydrofoil are presented using the Partially-Averaged Navier-Stokes (PANS) method (Ji et al. 2013a), Large-Eddy Simulation (LES) (Ji et al. 2013b) and Reynolds-Averaged Navier-Stokes (RANS). The results are compared with available experimental data (Foeth 2008). The PANS and LES reasonably reproduce the cavitation shedding patterns around the twisted hydrofoil with primary and secondary shedding, while the RANS model fails to simulate the unsteady cavitation shedding phenomenon and yields an almost steady flow with a constant cavity shape and vapor volume. In addition, the shedding vapor cavity predicted by PANS is more turbulent and its shedding vortex is stronger than that predicted by LES, which is more consistent with experimental photos.

11. Ion velocity distribution functions in argon and helium discharges: detailed comparison of numerical simulation results and experimental data

Wang, Huihui; Sukhomlinov, Vladimir S.; Kaganovich, Igor D.; Mustafaev, Alexander S.

2017-02-01

Using the Monte Carlo collision method, we have performed simulations of ion velocity distribution functions (IVDF) taking into account both elastic collisions and charge exchange collisions of ions with atoms in uniform electric fields for argon and helium background gases. The simulation results are verified by comparison with experimental data on ion mobilities and ion transverse diffusion coefficients in argon and helium. The recently published experimental data for the first seven coefficients of the Legendre polynomial expansion of the ion energy and angular distribution functions are used to validate the simulation results for IVDF. Good agreement between measured and simulated IVDFs shows that the developed simulation model can be used for accurate calculations of IVDFs.
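
The Monte Carlo collision idea can be illustrated in the cold-gas, charge-exchange-only limit, where each collision resets the ion velocity to zero and the drift velocity tends to a/ν for acceleration a and collision frequency ν. A toy sketch (units with q/m = 1), far simpler than the simulations in the paper:

```python
import numpy as np

def drift_velocity(e_field, collision_freq, n_ions=2000, n_coll=200, seed=1):
    """Toy Monte Carlo of ions in a uniform field where every collision is a
    charge-exchange event that resets the velocity (cold-gas limit).
    Free-flight times are exponentially distributed; the drift velocity,
    total displacement over total time, then tends to e_field/collision_freq."""
    rng = np.random.default_rng(seed)
    total_x, total_t = 0.0, 0.0
    for _ in range(n_ions):
        taus = rng.exponential(1.0 / collision_freq, n_coll)
        total_x += (0.5 * e_field * taus**2).sum()   # flights start from rest
        total_t += taus.sum()
    return total_x / total_t
```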

12. Influence of the quantum well models on the numerical simulation of planar InGaN/GaN LED results

Podgórski, J.; Woźny, J.; Lisik, Z.

2016-04-01

Within this paper, we present an electrical model of a light-emitting diode (LED) made of gallium nitride (GaN), followed by examples of simulation results obtained by means of Sentaurus software, which is part of the TCAD package. The aim of this work is to answer the question of whether the physical models of quantum wells used in commercial software are suitable for a correct analysis of lateral LEDs made of GaN.

13. An extension of the QZ algorithm for solving the generalized matrix eigenvalue problem

NASA Technical Reports Server (NTRS)

Ward, R. C.

1973-01-01

This algorithm is an extension of Moler and Stewart's QZ algorithm with some added features for saving time and operations. Also, some additional properties of the QR algorithm which were not practical to implement in the QZ algorithm can be generalized with the combination shift QZ algorithm. Numerous test cases are presented to give practical application tests for the algorithm. Based on these results, this algorithm should be preferred over existing algorithms which attempt to solve the class of generalized eigenproblems where both matrices are singular or nearly singular.
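
A sketch of how a QZ factorization yields generalized eigenvalues, using SciPy's LAPACK-backed routine rather than the extended combination-shift algorithm of the report; near-zero β entries flag the infinite eigenvalues that arise when B is singular.

```python
import numpy as np
from scipy.linalg import qz

def generalized_eigenvalues(a, b):
    """Finite eigenvalues of the pencil A - lambda*B via the QZ
    (generalized Schur) decomposition: A = Q*AA*Z^H, B = Q*BB*Z^H with
    AA, BB triangular. Eigenvalues are diag(AA)/diag(BB); entries with
    diag(BB) near zero correspond to infinite eigenvalues, the singular-B
    case the extended algorithm targets."""
    aa, bb, _, _ = qz(a, b, output='complex')
    alpha, beta = np.diag(aa), np.diag(bb)
    finite = np.abs(beta) > 1e-10 * max(1.0, np.abs(alpha).max())
    return np.sort_complex(alpha[finite] / beta[finite])
```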

14. Results.

ERIC Educational Resources Information Center

Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.

2001-01-01

Describes the Collegiate Results Instrument (CRI), which measures a range of collegiate outcomes for alumni 6 years after graduation. The CRI was designed to target alumni from institutions across market segments and assess their values, abilities, work skills, occupations, and pursuit of lifelong learning. (EV)

15. Algorithms and Algorithmic Languages.

ERIC Educational Resources Information Center

Veselov, V. M.; Koprov, V. M.

This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

16. Development and test results of a flight management algorithm for fuel conservative descents in a time-based metered traffic environment

NASA Technical Reports Server (NTRS)

Knox, C. E.; Cannon, D. G.

1980-01-01

A simple flight management descent algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control was developed and flight tested. This algorithm provides a three dimensional path with terminal area time constraints (four dimensional) for an airplane to make an idle thrust, clean configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm is described. The results of the flight tests flown with the Terminal Configured Vehicle airplane are presented.
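
The core geometry of such a descent algorithm reduces to locating the top-of-descent point. The function below is a deliberate simplification of the linearized-performance calculation described: constant descent rate and groundspeed, with no wind or nonstandard pressure and temperature corrections.

```python
def top_of_descent_distance(cruise_alt_ft, fix_alt_ft,
                            descent_rate_fpm, groundspeed_kt):
    """Distance ahead of the metering fix at which an idle-thrust descent
    must begin, assuming a constant descent rate and groundspeed.
    Returns nautical miles."""
    minutes = (cruise_alt_ft - fix_alt_ft) / descent_rate_fpm
    return groundspeed_kt * minutes / 60.0
```

For example, descending from 35,000 ft to a 10,000 ft metering fix at 2,500 ft/min and 300 kt groundspeed requires starting the descent 50 nmi before the fix; hitting the fix at a designated time then fixes when that point must be crossed.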

17. Planning fuel-conservative descents with or without time constraints using a small programmable calculator: Algorithm development and flight test results

NASA Technical Reports Server (NTRS)

Knox, C. E.

1983-01-01

A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.

18. Planning fuel-conservative descents with or without time constraints using a small programmable calculator: algorithm development and flight test results

SciTech Connect

Knox, C.E.

1983-03-01

A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.

19. The Operational MODIS Cloud Optical and Microphysical Property Product: Overview of the Collection 6 Algorithm and Preliminary Results

NASA Technical Reports Server (NTRS)

Platnick, Steven; King, Michael D.; Wind, Galina; Amarasinghe, Nandana; Marchant, Benjamin; Arnold, G. Thomas

2012-01-01

20. On the role of numerical simulations in studies of reduced gravity-induced physiological effects in humans. Results from NELME.

Perez-Poch, Antoni

Computer simulations are becoming a promising line of research as physiological models become more sophisticated and reliable. Technological advances in state-of-the-art hardware and software nowadays allow better and more accurate simulations of complex phenomena, such as the response of the human cardiovascular system to long-term exposure to microgravity. Experimental data for long-term missions are difficult to obtain and reproduce; therefore, the predictions of computer simulations are of major importance in this field. Our approach is based on a previous model developed and implemented in our laboratory (NELME: Numerical Evaluation of Long-term Microgravity Effects). The software simulates the behaviour of the cardiovascular system and different human organs, has a modular architecture, and allows perturbations such as physical exercise or countermeasures to be introduced. The implementation is based on a complex electrical-like model of this control system, using inexpensive development frameworks, and has been tested and validated with the available experimental data. The objective of this work is to analyse and simulate long-term effects and gender differences when individuals are exposed to long-term microgravity. The risk probability of a health impairment that may jeopardize a long-term mission is also evaluated. Gender differences have been implemented for this specific work as an adjustment of a number of parameters included in the model. Women-versus-men physiological differences have therefore been taken into account, based upon estimations from the physiology bibliography. A number of simulations have been carried out for long-term exposure to microgravity. Gravity varying continuously from Earth-based to zero and time of exposure are the two main variables involved in the construction of results, including responses to patterns of physical aerobic exercise and thermal stress simulating an extra

1. Reaction Matrix Calculations in Neutron Matter with Alternating-Layer-Spin Structure under π0 Condensation. II ---Numerical Results---

Tamiya, K.; Tamagaki, R.

1981-10-01

Results obtained by applying a formulation based on the reaction matrix theory developed in I are given. Calculations making use of a modified realistic potential, the Reid soft-core potential with the OPEP part enhanced due to isobar (Δ)-mixing, show that the transition to the [ALS] phase of quasi-neutrons corresponding to a typical π0 condensation occurs in the region of (2 ~ 3) times the nuclear density. The most important ingredients responsible for this transition are the growth of the attractive 3P2 + 3F2 contribution, mainly from the spin-parallel pairs in the same layers, and the reduction of the repulsive 3P1 contribution, mainly from the spin-antiparallel pairs in the nearest layers; these manifest themselves as the [ALS]-type localization develops. Properties of the matter in the new phase thus obtained, such as the shape of the Fermi surface and the effective mass, are discussed.

2. Viscous effects in rapidly rotating stars with application to white-dwarf models. III - Further numerical results

NASA Technical Reports Server (NTRS)

Durisen, R. H.

1975-01-01

Improved viscous evolutionary sequences of differentially rotating, axisymmetric, nonmagnetic, zero-temperature white-dwarf models are constructed using the relativistically corrected degenerate electron viscosity. The results support the earlier conclusion that angular momentum transport due to viscosity does not lead to overall uniform rotation in many interesting cases. Qualitatively different behaviors are obtained, depending on how the total mass M and angular momentum J compare with the M and J values for which uniformly rotating models exist. Evolutions roughly determine the region in M and J for which models with a particular initial angular momentum distribution can reach carbon-ignition densities in 10 b.y. Such models may represent Type I supernova precursors.

3. Application of the dynamic model of Saeman to an industrial rotary kiln incinerator: numerical and experimental results.

PubMed

Ndiaye, L G; Caillat, S; Chinnayya, A; Gambier, D; Baudoin, B

2010-07-01

In order to simulate the granular material structure in a rotary kiln under the steady-state regime, a mathematical model was developed by Saeman (1951). This model enables the calculation of the bed profile, the axial velocity, and the solids flow rate along the kiln, and it can be coupled with a thermochemical model in the case of a reacting moving bed. This dynamic model was used to calculate the bed profile for an industrial-size kiln, and the model projections were validated by measurements in a 4 m diameter by 16 m long industrial rotary kiln. The effects of rotation speed on the solids bed profile and of the feed rate on the filling degree were established. On the basis of the calculations and the experimental results, a phenomenological relation for residence time estimation in the rotary kiln was proposed.
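
Saeman's bed-depth equation can be integrated from the discharge end with simple Euler steps. The equation below is a commonly cited statement of the model, and the parameter values in the test are illustrative numbers (a 4 m diameter kiln, as in the paper, but otherwise assumed), not the paper's data:

```python
import numpy as np

def saeman_bed_profile(q_m3_s, n_rev_s, radius_m, incline_rad, repose_rad,
                       length_m, h_discharge_m, steps=4000):
    """Integrate Saeman's bed-depth equation from the discharge end toward
    the feed end with forward Euler steps:
        dh/dx = 3*tan(repose)*Q / (4*pi*n*(2*R*h - h^2)**1.5)
                - tan(incline)/cos(repose)
    for volumetric feed Q, rotation speed n (rev/s), kiln radius R, axial
    inclination, and dynamic angle of repose. Returns the depth profile h(x)."""
    dx = length_m / steps
    h = np.empty(steps + 1)
    h[0] = h_discharge_m
    for i in range(steps):
        filled = 2.0 * radius_m * h[i] - h[i] ** 2
        dhdx = (3.0 * np.tan(repose_rad) * q_m3_s
                / (4.0 * np.pi * n_rev_s * filled ** 1.5)
                - np.tan(incline_rad) / np.cos(repose_rad))
        h[i + 1] = h[i] + dhdx * dx
    return h
```

Doubling the rotation speed thins the bed toward the feed end, the qualitative effect the measurements in the paper establish.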

4. Preliminary results of real-time PPP-RTK positioning algorithm development for moving platforms and its performance validation

Won, Jihye; Park, Kwan-Dong

2015-04-01

Real-time PPP-RTK positioning algorithms were developed for the purpose of obtaining precise coordinates of moving platforms. In this implementation, corrections for the satellite orbit and satellite clock were taken from the IGS-RTS products, while the ionospheric delay was removed through an ionosphere-free combination and the tropospheric delay was either taken care of using the Global Pressure and Temperature (GPT) model or estimated as a stochastic parameter. To improve the convergence speed, all the available GPS and GLONASS measurements were used and the Extended Kalman Filter parameters were optimized. To validate our algorithms, we collected GPS and GLONASS data from a geodetic-quality receiver installed on the roof of a moving vehicle in an open-sky environment and used IGS final products of satellite orbits and clock offsets. The horizontal positioning error fell below 10 cm within 5 minutes and stayed below 10 cm even after the vehicle started moving. When the IGS-RTS product and the GPT model were used instead of the IGS precise product, the positioning accuracy of the moving vehicle was maintained at better than 20 cm once convergence was achieved, at around 6 minutes.

5. Dynamics of a Single Spin-1/2 Coupled to x- and y-Spin Baths: Algorithm and Results

Novotny, M. A.; Guerra, Marta L.; De Raedt, Hans; Michielsen, Kristel; Jin, Fengping

The real-time dynamics of a single spin-1/2 particle, called the central spin, coupled to the x(y)-components of the spins of one or more baths is simulated. The bath Hamiltonians contain interactions of the x(y)-components of the bath spins only but are general otherwise. An efficient algorithm is described which allows solving the time-dependent Schrödinger equation for the central spin, even if the x(y) baths contain hundreds of spins. The algorithm requires storage for 2 × 2 matrices only, no matter how many spins are in the baths. We calculate the expectation value of the central spin, as well as its von Neumann entropy S(t), the quantum purity P(t), and the off-diagonal elements of the quantum density matrix. When the central spin is coupled to both x- and y-baths, the relaxation of S(t) and P(t) with time follows a power law, compared to an exponential if the central spin is coupled only to an x-bath. The effect of different initial states for the central spin and bath is studied. Comparison with more general spin baths is also presented.

6. Numerical treatment of shocks in unsteady potential flow computation

Schippers, H.

1985-04-01

For moving shocks in unsteady transonic potential flow, an implicit, fully conservative finite-difference algorithm is presented. It is based on time linearization and mass-flux splitting. For the one-dimensional problem of a traveling shock wave, this algorithm is compared with the method of Goorjian and Shankar. The algorithm was implemented in the computer program TULIPS for the computation of transonic unsteady flow about airfoils. Numerical results for a pitching ONERA M6 airfoil are presented.

7. SLAC E155 and E155x Numeric Data Results and Data Plots: Nucleon Spin Structure Functions

DOE Data Explorer

The nucleon spin structure functions g1 and g2 are important tools for testing models of nucleon structure and QCD. Experiments at CERN, DESY, and SLAC have measured g1 and g2 using deep inelastic scattering of polarized leptons on polarized nucleon targets. The results of these experiments have established that the quark component of the nucleon helicity is much smaller than naive quark-parton model predictions. The Bjorken sum rule has been confirmed within the uncertainties of experiment and theory. The experiment E155 at SLAC collected data in March and April of 1997. Approximately 170 million scattered electron events were recorded to tape, along with several billion inclusive hadron events. The data were collected using three independent fixed-angle magnetic spectrometers, at approximately 2.75, 5.5, and 10.5 degrees. The momentum acceptance of the 2.75 and 5.5 degree spectrometers ranged from 10 to 40 GeV, with momentum resolution of 2-4%. The 10.5 degree spectrometer, new for E155, accepted events of 7 GeV to 20 GeV. Each spectrometer used threshold gas Cherenkov counters (for particle ID), a segmented lead-glass calorimeter (for energy measurement and particle ID), and plastic scintillator hodoscopes (for tracking and momentum measurement). The polarized targets used for E155 were 15NH3 and 6LiD, for measuring the proton and deuteron spin structure functions respectively. Experiment E155x recently concluded a successful two-month run at SLAC. The experiment was designed to measure the transverse spin structure functions of the proton and deuteron. The E155 target was also recently in use at TJNAF's Hall C (E93-026) and was returned to SLAC for E155x. E155x hopes to reduce the world data set errors on g2 by a factor of three. [Copied from http://www.slac.stanford.edu/exp/e155/e155_nickeltour.html, an information summary linked off the E155 home page at http://www.slac.stanford.edu/exp/e155/e155_home.html. The extension run, E155x, also makes

8. Computer code for scattering from impedance bodies of revolution. Part 3: Surface impedance with s and phi variation. Analytical and numerical results

NASA Technical Reports Server (NTRS)

Uslenghi, Piergiorgio L. E.; Laxpati, Sharad R.; Kawalko, Stephen F.

1993-01-01

The third phase of the development of the computer codes for scattering by coated bodies, part of an ongoing effort in the Electromagnetics Laboratory of the Electrical Engineering and Computer Science Department at the University of Illinois at Chicago, is described. The work reported discusses the analytical and numerical results for the scattering of an obliquely incident plane wave by impedance bodies of revolution with phi variation of the surface impedance. An integral equation formulation of the problem is considered. All three types of integral equations, electric field, magnetic field, and combined field, are considered. These equations are solved numerically via the method of moments with parametric elements. Both TE and TM polarizations of the incident plane wave are considered. The surface impedance is allowed to vary both along the profile of the scatterer and in the phi direction. The computer code developed for this purpose determines the electric surface current as well as the bistatic radar cross section. The results obtained with this code were validated by comparison with available results for specific scatterers such as the perfectly conducting sphere. Results for the cone-sphere and cone-cylinder-sphere in the case of an axially incident plane wave were validated by comparison with those obtained in the first phase of this project. Results for body-of-revolution scatterers with an abrupt change in the surface impedance along both the profile of the scatterer and the phi direction are presented.

9. Massively Parallel Algorithms for Solution of Schrodinger Equation

NASA Technical Reports Server (NTRS)

Fijany, Amir; Barhen, Jacob; Toomerian, Nikzad

1994-01-01

In this paper, massively parallel algorithms for the solution of the Schrödinger equation are developed. Our results clearly indicate that the Crank-Nicolson method, in addition to its excellent numerical properties, is also highly suitable for massively parallel computation.
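As a hedged sketch of why Crank-Nicolson behaves so well numerically, here is a minimal 1-D free-particle example (hbar = m = 1, dense linear algebra for brevity; the paper's massively parallel solvers are not reproduced). The Cayley form of the step is unitary for a Hermitian Hamiltonian, so the norm is conserved to round-off:

```python
import numpy as np

N, dx, dt = 200, 0.1, 0.01
x = dx * (np.arange(N) - N // 2)                 # spatial grid

# Tridiagonal Hamiltonian H = -(1/2) d^2/dx^2 via central differences.
main = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Crank-Nicolson: (I + i H dt/2) psi_new = (I - i H dt/2) psi_old.
A = np.eye(N) + 0.5j * dt * H
B = np.eye(N) - 0.5j * dt * H
step = np.linalg.solve(A, B)                     # Cayley form: unitary up to round-off

psi = np.exp(-x**2 + 2.0j * x)                   # Gaussian packet, mean momentum ~2
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
for _ in range(100):
    psi = step @ psi

norm = np.sum(np.abs(psi)**2) * dx               # conserved by Crank-Nicolson
center = np.sum(x * np.abs(psi)**2) * dx / norm  # packet drifts at ~2
```

A production code would replace the dense solve with a tridiagonal (Thomas) or domain-decomposed solver, which is where the parallelism enters.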

10. Numerical Continuation of Hamiltonian Relative Periodic Orbits

Wulff, Claudia; Schebesch, Andreas

2008-08-01

The bifurcation theory and numerics of periodic orbits of general dynamical systems is well developed, and in recent years, there has been rapid progress in the development of a bifurcation theory for dynamical systems with structure, such as symmetry or symplecticity. But as yet, there are few results on the numerical computation of those bifurcations. The methods we present in this paper are a first step toward a systematic numerical analysis of generic bifurcations of Hamiltonian symmetric periodic orbits and relative periodic orbits (RPOs). First, we show how to numerically exploit spatio-temporal symmetries of Hamiltonian periodic orbits. Then we describe a general method for the numerical computation of RPOs persisting from periodic orbits in a symmetry breaking bifurcation. Finally, we present an algorithm for the numerical continuation of non-degenerate Hamiltonian relative periodic orbits with regular drift-momentum pair. Our path following algorithm is based on a multiple shooting algorithm for the numerical computation of periodic orbits via an adaptive Poincaré section and a tangential continuation method with implicit reparametrization. We apply our methods to continue the famous figure eight choreography of the three-body system. We find a relative period doubling bifurcation of the planar rotating eight family and compute the rotating choreographies bifurcating from it.

11. The Results of a Simulator Study to Determine the Effects on Pilot Performance of Two Different Motion Cueing Algorithms and Various Delays, Compensated and Uncompensated

NASA Technical Reports Server (NTRS)

Guo, Li-Wen; Cardullo, Frank M.; Telban, Robert J.; Houck, Jacob A.; Kelly, Lon C.

2003-01-01

A study was conducted employing the Visual Motion Simulator (VMS) at the NASA Langley Research Center, Hampton, Virginia. This study compared two motion cueing algorithms, the NASA adaptive algorithm and a new optimal-control-based algorithm. The study also included the effects of transport delays and their compensation. The delay compensation algorithm employed is one developed by Richard McFarland at NASA Ames Research Center. This paper reports the analysis of experimental data collected in preliminary simulation tests. This series of tests was conducted to evaluate the protocols and the data-analysis methodology in preparation for more comprehensive tests to be conducted during the spring of 2003; therefore, only three pilots were used. Nevertheless, some useful results were obtained. The experimental conditions involved three maneuvers: a straight-in approach with a rotating wind vector, an offset approach with turbulence and gust, and a takeoff with and without an engine failure shortly after liftoff. For each maneuver, the two motion conditions were combined with four delay conditions (0, 50, 100, and 200 ms), with and without compensation.

12. Unified treatment algorithm for the management of crotaline snakebite in the United States: results of an evidence-informed consensus workshop

PubMed Central

2011-01-01

Background Envenomation by crotaline snakes (rattlesnake, cottonmouth, copperhead) is a complex, potentially lethal condition affecting thousands of people in the United States each year. Treatment of crotaline envenomation is not standardized, and significant variation in practice exists. Methods A geographically diverse panel of experts was convened for the purpose of deriving an evidence-informed unified treatment algorithm. Research staff analyzed the extant medical literature and performed targeted analyses of existing databases to inform specific clinical decisions. A trained external facilitator used modified Delphi and structured consensus methodology to achieve consensus on the final treatment algorithm. Results A unified treatment algorithm was produced and endorsed by all nine expert panel members. This algorithm provides guidance about clinical and laboratory observations, indications for and dosing of antivenom, adjunctive therapies, post-stabilization care, and management of complications from envenomation and therapy. Conclusions Clinical manifestations and ideal treatment of crotaline snakebite differ greatly, and can result in severe complications. Using a modified Delphi method, we provide evidence-informed treatment guidelines in an attempt to reduce variation in care and possibly improve clinical outcomes. PMID:21291549

13. Assessment of the improvements in accuracy of aerosol characterization resulted from additions of polarimetric measurements to intensity-only observations using GRASP algorithm (Invited)

Dubovik, O.; Litvinov, P.; Lapyonok, T.; Herman, M.; Fedorenko, A.; Lopatin, A.; Goloub, P.; Ducos, F.; Aspetsberger, M.; Planer, W.; Federspiel, C.

2013-12-01

During the last few years we have been developing the GRASP (Generalized Retrieval of Aerosol and Surface Properties) algorithm, designed for enhanced characterization of aerosol properties from spectral, multi-angular polarimetric remote sensing observations. The concept of GRASP relies essentially on the accumulated positive research heritage from previous remote sensing aerosol retrieval developments, in particular those from the AERONET and POLDER retrieval activities. The details of the algorithm are described by Dubovik et al. (Atmos. Meas. Tech., 4, 975-1018, 2011). GRASP retrieves properties of both aerosol and land surface reflectance in cloud-free environments. It is based on highly advanced, statistically optimized fitting and deduces nearly 50 unknowns for each observed site. The algorithm derives a set of aerosol parameters similar to AERONET's, including the detailed particle size distribution, the spectrally dependent complex index of refraction, and the fraction of non-spherical particles. The algorithm uses detailed aerosol and surface models and fully accounts for all multiple interactions of scattered solar light with aerosol, gases, and the underlying surface. All calculations are done on-line without using traditional look-up tables. In addition, the algorithm uses a new multi-pixel retrieval concept: the simultaneous fitting of a large group of pixels with additional constraints limiting the temporal variability of surface properties and the spatial variability of aerosol properties. This principle is expected to result in higher consistency and accuracy of aerosol products compared to conventional approaches, especially over bright surfaces where the information content of satellite observations with respect to aerosol properties is limited. GRASP is a highly versatile algorithm that allows input from both satellite and ground-based measurements. It also has essential flexibility in measurement processing. For example, if the observation data set includes spectral

14. On the energy dependence of the radial diffusion coefficient and spectra of inner radiation belt particles - Analytic solutions and comparison with numerical results

NASA Technical Reports Server (NTRS)

Westphalen, H.; Spjeldvik, W. N.

1982-01-01

A theoretical method by which the energy dependence of the radial diffusion coefficient may be deduced from spectral observations of the particle population at the inner edge of the Earth's radiation belts is presented. This region has previously been analyzed with numerical techniques; in this report, an analytical treatment is given that illustrates characteristic limiting cases in the L-shell range where the time scale of Coulomb losses is substantially shorter than that of radial diffusion (L approximately 1-2). It is demonstrated both analytically and numerically that the particle spectra there are shaped by the energy dependence of the radial diffusion coefficient regardless of the spectral shapes of the particle populations diffusing inward from the outer radiation zone, so that the energy dependence of the diffusion coefficient can be determined from observed spectra. To ensure realistic simulations, inner zone data obtained from experiments on the DIAL, AZUR, and ESRO 2 spacecraft have been used as boundary conditions. Excellent agreement between analytic and numerical results is reported.

15. 10,000-fold concentration increase in proteins in a cascade microchip using anionic ITP by a 3-D numerical simulation with experimental results.

PubMed

Bottenus, Danny; Jubery, Talukder Zaki; Dutta, Prashanta; Ivory, Cornelius F

2011-02-01

This paper describes both the experimental application and 3-D numerical simulation of isotachophoresis (ITP) in a 3.2 cm long "cascade" poly(methyl methacrylate) (PMMA) microfluidic chip. The microchip includes 10× reductions in both the width and depth of the microchannel, which decreases the overall cross-sectional area by a factor of 100 between the inlet (cathode) and outlet (anode). A 3-D numerical simulation of ITP is outlined and is a first example of an ITP simulation in three dimensions. The simulation uses COMSOL Multiphysics v4.0a to concentrate two generic proteins and monitor protein migration through the microchannel. In performing an ITP simulation on this microchip platform, we observe an increase in concentration by a factor of more than 10,000 due to the combination of ITP stacking and the reduction in cross-sectional area. Two fluorescent proteins, green fluorescent protein and R-phycoerythrin, were used to experimentally visualize ITP through the fabricated microfluidic chip. The initial concentration of each protein in the sample was 1.995 μg/mL and, after preconcentration by ITP, the final concentrations of the two fluorescent proteins were 32.57 ± 3.63 and 22.81 ± 4.61 mg/mL, respectively. Thus, the two fluorescent proteins were experimentally concentrated by a factor of more than 10,000, in good qualitative agreement with our simulation results.

16. Jamming cancellation algorithm for wideband imaging radar

Zheng, Yibin; Yu, Kai-Bor

1998-10-01

We describe a jamming cancellation algorithm for wideband imaging radar. After reviewing the high-range-resolution imaging principle, several key factors affecting jamming cancellation performance, such as the 'instantaneous narrow-band' assumption, bandwidth, and de-chirped interference, are formulated and analyzed. Numerical simulation results, using a hypothetical phased array radar and synthetic point targets, are presented. The results demonstrate the effectiveness of the proposed algorithm.
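The paper's cancellation scheme is not reproduced here; as a generic illustration of interference cancellation with an auxiliary reference channel, a least-mean-squares (LMS) adaptive canceller on synthetic data (the signals and parameters below are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
jam = np.sin(0.3 * np.arange(n)) + 0.01 * rng.standard_normal(n)  # jammer
target = 0.05 * rng.standard_normal(n)     # weak signal of interest
primary = target + jam                     # main channel: signal + jamming
reference = np.roll(jam, 1)                # auxiliary channel: jamming only

# LMS: adapt FIR weights so the reference channel predicts (and cancels)
# the jamming component of the primary channel.
taps, mu = 4, 0.01
w = np.zeros(taps)
out = np.zeros(n)
for k in range(taps, n):
    u = reference[k - taps:k][::-1]        # most recent reference samples
    e = primary[k] - w @ u                 # canceller output (residual)
    w += 2.0 * mu * e * u                  # LMS weight update
    out[k] = e
```

After convergence the residual power is dominated by the weak target rather than the jammer, which is the qualitative goal of any cancellation scheme.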

17. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

PubMed

Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

2015-01-01

Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method incorporates two kinds of information: the function value and the gradient value. The two methods both possess some good properties: (1) β_k ≥ 0; (2) the search direction has the trust-region property without the use of any line-search method; and (3) the search direction has the sufficient-descent property without the use of any line-search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.

18. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models

PubMed Central

Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

2015-01-01

Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method incorporates two kinds of information: the function value and the gradient value. The two methods both possess some good properties: (1) β_k ≥ 0; (2) the search direction has the trust-region property without the use of any line-search method; and (3) the search direction has the sufficient-descent property without the use of any line-search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409
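For reference, the classical PRP+ update that such modified methods build on (β_k truncated at zero) can be sketched as follows. This is an illustrative implementation with a simple Armijo backtracking line search, not the authors' modified algorithms:

```python
import numpy as np

def prp_plus(f, grad, x0, iters=2000, tol=1e-8):
    """Polak-Ribiere-Polyak CG with the beta_k >= 0 truncation (PRP+)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        fx, slope = f(x), float(g @ d)    # slope < 0: d is a descent direction
        t = 1.0
        while f(x + t * d) > fx + 1e-4 * t * slope:   # Armijo backtracking
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, float(g_new @ (g_new - g)) / float(g @ g))  # PRP+
        d = -g_new + beta * d
        if g_new @ d >= 0.0:              # safeguard: restart with steepest descent
            d = -g_new
        x, g = x_new, g_new
    return x

# Illustrative run on the Rosenbrock function.
rosen = lambda z: (1 - z[0])**2 + 100.0 * (z[1] - z[0]**2)**2
rosen_g = lambda z: np.array([-2 * (1 - z[0]) - 400 * z[0] * (z[1] - z[0]**2),
                              200.0 * (z[1] - z[0]**2)])
xstar = prp_plus(rosen, rosen_g, [-1.2, 1.0])
```

The truncation max(0, ·) is precisely the β_k ≥ 0 property listed in the abstract; the restart safeguard stands in for the descent guarantees the modified methods obtain without any line search.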

19. SVM-based multimodal classification of activities of daily living in Health Smart Homes: sensors, algorithms, and first experimental results.

PubMed

Fleury, Anthony; Vacher, Michel; Noury, Norbert

2010-03-01

By 2050, about one third of the French population will be over 65. Our laboratory's current research focuses on the monitoring of elderly people at home, to detect a loss of autonomy as early as possible. Our aim is to quantify criteria such as the international activities of daily living (ADL) or the French Autonomie Gerontologie Groupes Iso-Ressources (AGGIR) scales, by automatically classifying the different ADL performed by the subject during the day. A Health Smart Home is used for this purpose. Our Health Smart Home includes, in a real flat, infrared presence sensors (location), door contacts (to monitor the use of some facilities), a temperature and hygrometry sensor in the bathroom, and microphones (sound classification and speech recognition). A wearable kinematic sensor also reports postural transitions (using pattern recognition) and walking periods (frequency analysis). The data collected from the various sensors are then used to classify each temporal frame into one of the ADL that were previously learned (seven activities: hygiene, toilet use, eating, resting, sleeping, communication, and dressing/undressing). This is done using support vector machines. We performed a 1-h experiment with 13 young and healthy subjects to determine the models of the different activities, and then we tested the classification algorithm (cross-validation) with real data.
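As a toy illustration of the SVM classification step (the features, dimensions, and data below are invented stand-ins, not the study's sensor features), a linear SVM trained by a Pegasos-style sub-gradient method on synthetic two-class "activity" data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-class "activity" features (hypothetical, 3-D for brevity);
# labels are +1/-1 for two activity classes.
n = 400
X = np.vstack([rng.normal(+1.0, 0.7, (n // 2, 3)),
               rng.normal(-1.0, 0.7, (n // 2, 3))])
y = np.hstack([np.ones(n // 2), -np.ones(n // 2)])

# Pegasos-style sub-gradient training of a linear SVM (hinge loss + L2 penalty).
lam = 0.01
w = np.zeros(3)
for t in range(1, 5001):
    i = int(rng.integers(n))
    eta = 1.0 / (lam * t)                 # decaying step size
    margin = y[i] * (X[i] @ w)
    w *= 1.0 - eta * lam                  # shrink (regularization gradient)
    if margin < 1.0:                      # inside margin: hinge sub-gradient
        w += eta * y[i] * X[i]

acc = float(np.mean(np.sign(X @ w) == y))   # training accuracy
```

A multi-class problem such as the seven ADL would use one-vs-rest or one-vs-one combinations of such binary classifiers, typically with a kernel rather than the linear form shown here.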

20. Numerical simulation of steady supersonic flow. [spatial marching

NASA Technical Reports Server (NTRS)

Schiff, L. B.; Steger, J. L.

1981-01-01

A noniterative, implicit, space-marching, finite-difference algorithm was developed for the steady thin-layer Navier-Stokes equations in conservation-law form. The numerical algorithm is applicable to steady supersonic viscous flow over bodies of arbitrary shape. In addition, the same code can be used to compute supersonic inviscid flow or three-dimensional boundary layers. Computed results from two-dimensional and three-dimensional versions of the numerical algorithm are in good agreement with those obtained from more costly time-marching techniques.

1. The influence of variability of calculation grids on the results of numerical modeling of geothermal doublets - an example from the Choszczno area, north-western Poland

Wachowicz-Pyzik, A.; Sowiżdżał, A.; Pająk, L.

2016-09-01

Numerical modeling enables us to reduce the risk related to selecting the best locations of wells. Moreover, at the production stage, modeling is a suitable tool for optimizing well operational parameters, which helps guarantee the long life of doublets. Careful selection of software, together with a relevant methodology for generating numerical models, significantly improves the quality of the obtained results. In the following paper, we discuss the impact of calculation-grid density on the results of a geothermal doublet simulation with the TOUGH2 code, which applies the finite-difference method. The study area is located between the Szczecin Trough and the Fore-Sudetic Monocline, where the Choszczno IG-1 well has been completed. Our research was divided into two stages. In the first stage, we examined changes in the density of the polygonal calculation grids used in computations of the operational parameters of geothermal doublets. In the second stage, we analyzed the influence of the distance between the production and injection wells on the time variability of the operational parameters. The results demonstrated that in both studied cases the largest differences occurred in the pressures at the production and injection wells, whereas the differences in temperatures were less pronounced.

2. Simplified method for numerical modeling of fiber lasers.

PubMed

Shtyrina, O V; Yarutkina, I A; Fedoruk, M P

2014-12-29

A simplified numerical approach to the modeling of dissipative dispersion-managed fiber lasers is examined. We present a new numerical iteration algorithm for finding the periodic solutions of the system of nonlinear ordinary differential equations describing the intra-cavity dynamics of the dissipative soliton characteristics in dispersion-managed fiber lasers. We demonstrate that results obtained using the simplified model are in good agreement with full numerical modeling based on the corresponding partial differential equations.
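The iteration idea, finding a state that reproduces itself after one cavity round trip, can be illustrated with a toy scalar map (saturable gain followed by a lumped loss; the coefficients are illustrative and are not the paper's laser model, which evolves a full set of pulse parameters):

```python
# Toy round-trip map: saturable gain followed by a fixed output-coupling loss.
# Coefficients are illustrative; the real model evolves a full pulse envelope.
def round_trip(E, g0=3.0, E_sat=1.0, loss=0.5):
    gained = E * g0 / (1.0 + E / E_sat)     # saturable amplification
    return gained * (1.0 - loss)            # lumped cavity loss

E, steps = 0.1, []
for _ in range(200):
    E_new = round_trip(E)
    steps.append(abs(E_new - E))            # convergence history
    E = E_new
# At the periodic solution one full round trip reproduces the pulse energy E.
```

For this map the fixed point E* satisfies 1 + E* = g0(1 - loss), so E* = 0.5, and the iteration converges geometrically because the map is contracting near E*.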

3. Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm

PubMed Central

2014-01-01

A pushover analysis method based on the semirigid connection concept is developed, and the colliding bodies optimization algorithm is employed to find optimum seismic designs of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared with those of conventional design methods to show the strengths and weaknesses of the algorithm. PMID:25202717

4. Performance-based seismic design of steel frames utilizing colliding bodies algorithm.

PubMed

2014-01-01

A pushover analysis method based on the semirigid connection concept is developed, and the colliding bodies optimization algorithm is employed to find optimum seismic designs of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared with those of conventional design methods to show the strengths and weaknesses of the algorithm.
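A schematic sketch of the colliding-bodies idea on a toy objective: agents are sorted by fitness, the better half act as stationary bodies, the worse half collide with them, and post-collision "velocities" move both. The masses, restitution schedule, and collision formulas below follow one common description of CBO but are simplified, so treat this as illustrative pseudocode rather than the papers' implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

def cost(x):                         # toy objective standing in for the
    return float(np.sum(x * x))      # structural design cost

pop, dim, iters = 20, 4, 100
X = rng.uniform(-5.0, 5.0, (pop, dim))
f0 = min(cost(x) for x in X)         # best cost in the initial population
best_x, best_f = None, np.inf

for it in range(iters):
    f = np.array([cost(x) for x in X])
    order = np.argsort(f)
    X, f = X[order], f[order]
    if f[0] < best_f:
        best_f, best_x = float(f[0]), X[0].copy()
    m = 1.0 / (f + 1e-12)            # better designs get larger "mass"
    eps = 1.0 - it / iters           # coefficient of restitution, 1 -> 0
    half = pop // 2
    X_new = X.copy()
    for i in range(half):
        j = i + half                 # worse body j collides with better body i
        v = X[i] - X[j]              # pre-collision velocity of the moving body
        vi = (m[j] + eps * m[j]) / (m[i] + m[j]) * v  # post-collision, stationary
        vj = (m[j] - eps * m[i]) / (m[i] + m[j]) * v  # post-collision, moving
        X_new[i] = X[i] + rng.random(dim) * vi
        X_new[j] = X[i] + rng.random(dim) * vj
    X = X_new
```

The decaying restitution coefficient shifts the search from exploration to exploitation, and the best design found is tracked separately so it is never lost.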

5. High order hybrid numerical simulations of two dimensional detonation waves

NASA Technical Reports Server (NTRS)

Cai, Wei

1993-01-01

In order to study multi-dimensional unstable detonation waves, a high-order numerical scheme suitable for calculating the detailed transverse wave structures of multi-dimensional detonation waves was developed. The numerical algorithm uses a multi-domain approach so that different numerical techniques can be applied to different components of the detonation waves. The detonation waves are assumed to undergo an irreversible, unimolecular reaction A → B. Several cases of unstable two-dimensional detonation waves are simulated and the detailed transverse wave interactions are documented. The numerical results show the importance of resolving the detonation front without excessive numerical viscosity in order to obtain the correct cellular patterns.

6. Numerical Relativity

NASA Technical Reports Server (NTRS)

Baker, John G.

2009-01-01

Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong field dynamics, the gravitational radiation wave forms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.

7. Adaptive phase aberration correction based on imperialist competitive algorithm.

PubMed

Yazdani, R; Hajimahmoodzadeh, M; Fallah, H R

2014-01-01

We numerically investigate the feasibility of phase aberration correction in a wavefront-sensorless adaptive optical system based on the imperialist competitive algorithm (ICA). Considering a 61-element deformable mirror (DM) and the Strehl ratio as the cost function of the ICA, the algorithm is employed to search for the optimum surface profile of the DM for correcting the phase aberrations in a solid-state laser system. The correction results show that ICA is a powerful correction algorithm for static or slowly changing phase aberrations in optical systems such as solid-state lasers. The correction capability and convergence speed of this algorithm are compared with those of the genetic algorithm (GA) and the stochastic parallel gradient descent (SPGD) algorithm. The results indicate that these algorithms have almost the same correction capability; ICA and GA have almost the same convergence speed, and SPGD is the fastest of the three.

8. Simultaneous Laser Raman-rayleigh-lif Measurements and Numerical Modeling Results of a Lifted Turbulent H2/N2 Jet Flame in a Vitiated Coflow

NASA Technical Reports Server (NTRS)

Cabra, R.; Chen, J. Y.; Dibble, R. W.; Myhrvold, T.; Karpetis, A. N.; Barlow, R. S.

2002-01-01

An experimental and numerical investigation is presented of a lifted turbulent H2/N2 jet flame in a coflow of hot, vitiated gases. The vitiated coflow burner emulates the coupling of turbulent mixing and chemical kinetics exemplified by the reacting flow in the recirculation region of advanced combustors. It also simplifies numerical investigation of this coupled problem by removing the complexity of recirculating flow. Scalar measurements are reported for a lifted turbulent jet flame of H2/N2 (Re = 23,600, H/d = 10) in a coflow of hot combustion products from a lean H2/air flame (φ = 0.25, T = 1,045 K). The combination of Rayleigh scattering, Raman scattering, and laser-induced fluorescence is used to obtain simultaneous measurements of temperature and concentrations of the major species, OH, and NO. The data attest to the success of the experimental design in providing a uniform vitiated coflow throughout the entire test region. Two combustion models (PDF: joint scalar probability density function; EDC: eddy dissipation concept) are used in conjunction with various turbulence models to predict the lift-off height (H_PDF/d = 7, H_EDC/d = 8.5). Kalghatgi's classic phenomenological theory, which is based on scaling arguments, yields a reasonably accurate prediction (H_K/d = 11.4) of the lift-off height for the present flame. The vitiated coflow admits the possibility of auto-ignition of mixed fluid, and the success of the present parabolic implementation of the PDF model in predicting a stable lifted flame is attributable to such ignition. The measurements indicate a thickened turbulent reaction zone at the flame base. Experimental results and numerical investigations support the plausibility of turbulent premixed flame propagation by small-scale (on the order of the flame thickness) recirculation and mixing of hot products into reactants and subsequent rapid ignition of the mixture.

9. Effectiveness of Ventricular Intrinsic Preference (VIP™) and Ventricular AutoCapture (VAC) algorithms in pacemaker patients: Results of the validate study

PubMed Central

Yadav, Rakesh; Jaswal, Aparna; Chennapragada, Sridevi; Kamath, Prakash; Hiremath, Shirish M.S.; Kahali, Dhiman; Anand, Sumit; Sood, Naresh K.; Mishra, Anil; Makkar, Jitendra S.; Kaul, Upendra

2015-01-01

Background Several past clinical studies have demonstrated that frequent and unnecessary right ventricular pacing in patients with sick sinus syndrome and compromised atrio-ventricular conduction (AVC) produces long-term adverse effects. The safety and efficacy of two pacemaker algorithms, Ventricular Intrinsic Preference™ (VIP) and Ventricular AutoCapture (VAC), were evaluated in a multi-center study in pacemaker patients. Methods We evaluated 80 patients across 10 centers in India. Patients were enrolled within 15 days of dual-chamber pacemaker (DDDR) implantation, and within 45 days thereafter were classified into either a compromised AVC (cAVC) arm or an intact AVC (iAVC) arm based on intrinsic paced/sensed (AV/PV) delays. In each arm, patients were then randomized (1:1) into the following groups: VIP OFF and VAC OFF (control group; CG), or VIP ON and VAC ON (treatment group; TG). Subsequently, the AV/PV delays in the CG were mandatorily programmed at 180/150 ms, and at up to 350 ms in the TG. The percentage of right ventricular pacing (%RVp) evaluated at the 12-month post-implantation follow-up was compared between the two groups in each arm. Additionally, the in-clinic time required for collecting device data was compared between patients programmed with the automated AutoCapture algorithm activated (VAC ON) and those with the manually programmed method (VAC OFF). Results Patients randomized to the TG with the VIP algorithm activated exhibited a significantly lower %RVp at 12 months than those in the CG in both the cAVC arm (39±41% vs. 97±3%; p=0.0004) and the iAVC arm (15±25% vs. 68±39%; p=0.0067). The in-clinic time required to collect device data was less in patients with the VAC algorithm activated. No device-related adverse events were reported during the year-long study period. Conclusions In our study cohort, the use of the VIP algorithm significantly reduced the %RVp, while the VAC algorithm reduced the in-clinic time needed to collect device data. PMID

10. Algorithm for in-flight gyroscope calibration

NASA Technical Reports Server (NTRS)

Davenport, P. B.; Welter, G. L.

1988-01-01

An optimal algorithm for the in-flight calibration of spacecraft gyroscope systems is presented. Special consideration is given to the selection of the loss function weight matrix in situations in which the spacecraft attitude sensors provide significantly more accurate information in pitch and yaw than in roll, such as will be the case in the Hubble Space Telescope mission. The results of numerical tests that verify the accuracy of the algorithm are discussed.
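The weighting issue can be illustrated with a generic weighted least-squares fit in which pitch and yaw residuals carry far more weight than roll. The sensitivity matrix, bias values, and noise levels below are invented for illustration and are not the mission's actual estimator:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy calibration: estimate three gyro bias parameters from attitude residuals.
# Every third residual is a roll measurement with much poorer accuracy, so it
# receives a much lower weight; all matrices and noise levels are invented.
n = 300
A = rng.standard_normal((3 * n, 3))             # illustrative sensitivity matrix
bias_true = np.array([0.01, -0.02, 0.005])
sigma = np.tile([0.001, 0.001, 0.05], n)        # pitch, yaw accurate; roll poor
b = A @ bias_true + sigma * rng.standard_normal(3 * n)

# Weighted least squares: minimize (A x - b)^T W (A x - b), W = diag(1/sigma^2).
W = 1.0 / sigma**2
bias_est = np.linalg.solve(A.T @ (W[:, None] * A), A.T @ (W * b))
```

Choosing W as the inverse measurement covariance makes the estimate rely almost entirely on the accurate pitch/yaw data, which is the situation the abstract describes.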

11. Comprehensive eye evaluation algorithm

Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

2016-03-01

In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated on two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

12. Response of major Greenland outlet glaciers to oceanic and atmospheric forcing: Results from numerical modeling on Petermann, Jakobshavn and Helheim Glacier.

Nick, F. M.; Vieli, A.; Pattyn, F.; Van de Wal, R.

2011-12-01

Oceanic forcing has been suggested as a major trigger for dynamic changes of Greenland outlet glaciers. Significant melting near the calving front or beneath the floating tongue, together with reduced support from sea ice or ice melange in front of the calving front, can result in retreat of the terminus or the grounding line and an increase in calving activity. Depending on the geometry and basal topography of the glacier, this oceanic forcing can affect glacier dynamics differently. Here, we carry out a comparison study between three major outlet glaciers in Greenland and investigate the impact of a warmer ocean on glacier dynamics and ice discharge. We present results from a numerical ice-flow model applied to Petermann Glacier in the north, Jakobshavn Glacier in the west, and Helheim Glacier in the southeast of Greenland.

13. The measurement of enhancement in mathematical abilities as a result of joint cognitive trainings in numerical and visual- spatial skills: A preliminary study

Agus, M.; Mascia, M. L.; Fastame, M. C.; Melis, V.; Pilloni, M. C.; Penna, M. P.

2015-02-01

A body of literature shows the significant role that visuo-spatial skills play in the improvement of mathematical skills in primary school. The main goal of the current study was to investigate the impact of a combined visuo-spatial and mathematical training on the improvement of mathematical skills in 146 second graders from several schools located in Italy. Participants were presented with single pencil-and-paper visuo-spatial or mathematical trainings, computerised versions of the above-mentioned treatments, or a combined version of computer-assisted and pencil-and-paper visuo-spatial and mathematical trainings, respectively. Experimental groups were presented with training for 3 months, once a week. All children were treated collectively, in either computer-assisted or pencil-and-paper modalities. At pre- and post-test, all participants were presented with a battery of objective tests assessing numerical and visuo-spatial abilities. Our results suggest a positive effect of the different types of training on the empowerment of visuo-spatial and numerical abilities. Specifically, the combination of computerised and pencil-and-paper versions of visuo-spatial and mathematical trainings is more effective than the single execution of either the software or the pencil-and-paper treatment.

14. Haplotyping algorithms

SciTech Connect

Sobel, E.; Lange, K.; O'Connell, J.R.

1996-12-31

Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
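The fourth of the paper's approaches, combinatorial optimization by simulated annealing, can be sketched generically. The toy cost function below (Hamming distance to a target bit vector) is a stand-in for an actual haplotype-vector likelihood, which the abstract does not specify:

```python
import math
import random

def simulated_annealing(cost, neighbor, state, t0=1.0, cooling=0.95, steps=2000, seed=0):
    """Generic simulated annealing: always accept improvements, and accept
    worsening moves with Boltzmann probability exp(-delta / t)."""
    rng = random.Random(seed)
    best = cur = state
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        delta = cost(cand) - cost(cur)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur = cand
            if cost(cur) < cost(best):
                best = cur
        t *= cooling  # geometric cooling schedule
    return best

# Toy problem: recover a target bit vector by single-bit flips.
target = (1, 0, 1, 1, 0, 1)
cost = lambda s: sum(a != b for a, b in zip(s, target))

def neighbor(s, rng):
    i = rng.randrange(len(s))
    return s[:i] + (1 - s[i],) + s[i + 1:]

result = simulated_annealing(cost, neighbor, (0,) * 6)
print(cost(result))  # 0: the target pattern is recovered with this budget
```

In the paper's setting the state would be a haplotype vector for the pedigree and the cost a (negative log-) likelihood; the annealing loop itself is unchanged.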

15. Introduction to Numerical Methods

SciTech Connect

Schoonover, Joseph A.

2016-06-14

These are slides for a lecture for the Parallel Computing Summer Research Internship at the National Security Education Center. This gives an introduction to numerical methods, in which repetitive algorithms are used to obtain approximate solutions to mathematical problems: sorting, searching, root finding, optimization, interpolation, extrapolation, least-squares regression, eigenvalue problems, ordinary differential equations, and partial differential equations. Many equations are shown. Discretizations allow us to approximate solutions to mathematical models of physical systems using a repetitive algorithm, and they introduce errors that can lead to numerical instabilities if we are not careful.
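As an illustration of the repetitive algorithms the slides describe, here is a minimal root-finding sketch with two classic iterations, bisection and Newton's method, applied to x² = 2:

```python
def bisect(f, a, b, tol=1e-10):
    """Bisection: repeatedly halve a bracketing interval [a, b] with f(a)*f(b) < 0."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m            # root lies in the left half
        else:
            a, fa = m, fm    # root lies in the right half
    return 0.5 * (a + b)

def newton(f, df, x, tol=1e-12, max_iter=50):
    """Newton's method: iterate x <- x - f(x)/f'(x); quadratic convergence near a root."""
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x * x - 2.0            # root at sqrt(2)
root_b = bisect(f, 1.0, 2.0)
root_n = newton(f, lambda x: 2 * x, 1.0)
print(root_b, root_n)                # both ≈ 1.41421356...
```

Bisection is slow but unconditionally convergent given a bracket; Newton is fast but only locally convergent, which is exactly the robustness/speed trade-off such lectures emphasize.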

16. Development of a system for the numerical simulation of Euler flows, with results of preliminary 3-D propeller-slipstream/exhaust-jet calculations

Boerstoel, J. W.

1988-01-01

The current status of a computer program system for the numerical simulation of Euler flows is presented. Preliminary test calculation results are shown. They concern the three-dimensional flow around a wing-nacelle-propeller-outlet configuration. The system is constructed to execute four major tasks: block decomposition of the flow domain around given, possibly complex, three-dimensional aerodynamic surfaces; grid generation on the blocked flow domain; Euler-flow simulation on the blocked grid; and graphical visualization of the computed flow on the blocked grid, and postprocessing. The system consists of about 20 codes interfaced by files. Most of the required tasks can be executed. The geometry of complex aerodynamic surfaces in three-dimensional space can be handled. The validation test showed that the system must be improved to increase the speed of the grid generation process.

17. Numerical recipes for mold filling simulation

SciTech Connect

Kothe, D.; Juric, D.; Lam, K.; Lally, B.

1998-07-01

Has the ability to simulate the filling of a mold progressed to a point where an appropriate numerical recipe achieves the desired results? If results are defined to be topological robustness, computational efficiency, quantitative accuracy, and predictability, all within a computational domain that faithfully represents complex three-dimensional foundry molds, then the answer unfortunately remains no. Significant interfacial flow algorithm developments have occurred over the last decade, however, that could bring this answer closer to maybe. These developments have been both evolutionary and revolutionary, and will continue to transpire for the near future. Might they become useful numerical recipes for mold filling simulations? Quite possibly. Recent progress in algorithms for interface kinematics and dynamics, linear solution methods, computer science issues such as parallelization and object-oriented programming, high resolution Navier-Stokes (NS) solution methods, and unstructured mesh techniques must all be pursued as possible paths toward higher fidelity mold filling simulations. A detailed exposition of these algorithmic developments is beyond the scope of this paper, hence the authors choose to focus here exclusively on algorithms for interface kinematics. These interface tracking algorithms are designed to model the movement of interfaces relative to a reference frame such as a fixed mesh. Current interface tracking algorithm choices are numerous, so is any one best suited for mold filling simulation? Although a clear winner is not (yet) apparent, pros and cons are given in the following brief, critical review. Highlighted are those outstanding interface tracking algorithm issues the authors feel can hamper the reliable modeling of today's foundry mold filling processes.
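A minimal illustration of fixed-mesh interface tracking of the kind surveyed: first-order upwind advection of a volume fraction on a 1-D periodic mesh. Production mold-filling codes use far more sophisticated schemes, but the conservation property demonstrated below is the essential requirement:

```python
def advect_vof(c, u, dt, dx, steps):
    """First-order upwind advection of a volume-fraction field c on a fixed 1-D
    periodic mesh; the interface is captured by c without any remeshing."""
    cfl = u * dt / dx
    assert 0.0 < cfl <= 1.0, "CFL condition required for stability"
    c = list(c)
    for _ in range(steps):
        # New value is a convex combination of a cell and its upwind neighbor,
        # so c stays bounded in [0, 1]; c[i-1] wraps periodically at i = 0.
        c = [c[i] - cfl * (c[i] - c[i - 1]) for i in range(len(c))]
    return c

# A filled region (c = 1) advected to the right; total fluid volume is conserved.
c0 = [1.0] * 10 + [0.0] * 10
c1 = advect_vof(c0, u=1.0, dt=0.5, dx=1.0, steps=10)
print(round(sum(c1), 6))  # 10.0: the scheme is conservative
```

First-order upwinding smears the interface over several cells, which is precisely the kind of accuracy deficiency that motivates the higher-order VOF and front-tracking methods the review compares.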

18. The Soil Moisture Active Passive Mission (SMAP) Science Data Products: Results of Testing with Field Experiment and Algorithm Testbed Simulation Environment Data

NASA Technical Reports Server (NTRS)

Entekhabi, Dara; Njoku, Eni E.; O'Neill, Peggy E.; Kellogg, Kent H.; Entin, Jared K.

2010-01-01

Talk outline 1. Derivation of SMAP basic and applied science requirements from the NRC Earth Science Decadal Survey applications 2. Data products and latencies 3. Algorithm highlights 4. SMAP Algorithm Testbed 5. SMAP Working Groups and community engagement

19. Long Term Maturation of Congenital Diaphragmatic Hernia Treatment Results: Toward Development of a Severity-Specific Treatment Algorithm

PubMed Central

Kays, David W.; Islam, Saleem; Larson, Shawn D.; Perkins, Joy; Talbert, James L.

2015-01-01

Objective To assess the impact of varying approaches to CDH repair timing on survival and need for ECMO when controlled for anatomic and physiologic disease severity in a large consecutive series of CDH patients. Summary Background Data Our publication of 60 consecutive CDH patients in 1999 showed that survival is significantly improved by limiting lung inflation pressures and eliminating hyperventilation. Methods We retrospectively reviewed 268 consecutive CDH patients, combining 208 new patients with the 60 previously reported. Management and ventilator strategy were highly consistent throughout. Varying approaches to surgical timing were applied as the series matured. Results Patients with anatomically less-severe left liver-down CDH had significantly increased need for ECMO if repaired in the first 48 hours, while patients with more-severe left liver-up CDH survived at a higher rate when repair was performed before ECMO. Overall survival of 268 patients was 78%. For those without lethal associated anomalies, survival was 88%. Of these, 99% of left liver-down CDH survived, 91% of right CDH survived, and 76% of left liver-up CDH survived. Conclusions This study shows that patients with anatomically less severe CDH benefit from delayed surgery while patients with anatomically more severe CDH may benefit from a more aggressive surgical approach. These findings show that patients respond differently across the CDH anatomic severity spectrum, and lay the foundation for the development of risk-specific treatment protocols for patients with CDH. PMID:23989050

20. Use of borehole radar reflection logging to monitor steam-enhanced remediation in fractured limestone-results of numerical modelling and a field experiment

USGS Publications Warehouse

Gregoire, C.; Joesten, P.K.; Lane, J.W.

2006-01-01

Ground penetrating radar is an efficient geophysical method for the detection and location of fractures and fracture zones in electrically resistive rocks. In this study, the use of down-hole (borehole) radar reflection logs to monitor the injection of steam in fractured rocks was tested as part of a field-scale, steam-enhanced remediation pilot study conducted at a fractured limestone quarry contaminated with chlorinated hydrocarbons at the former Loring Air Force Base, Limestone, Maine, USA. In support of the pilot study, borehole radar reflection logs were collected three times (before, during, and near the end of steam injection) using broadband 100 MHz electric dipole antennas. Numerical modelling was performed to predict the effect of heating on radar-frequency electromagnetic (EM) wave velocity, attenuation, and fracture reflectivity. The modelling results indicate that EM wave velocity and attenuation change substantially if heating increases the electrical conductivity of the limestone matrix. Furthermore, the net effect of heat-induced variations in fracture-fluid dielectric properties on average medium velocity is insignificant because the expected total fracture porosity is low. In contrast, changes in fracture fluid electrical conductivity can have a significant effect on EM wave attenuation and fracture reflectivity. Total replacement of water by steam in a fracture decreases fracture reflectivity by a factor of 10 and induces a change in reflected wave polarity. Based on the numerical modelling results, a reflection amplitude analysis method was developed to delineate fractures where steam has displaced water. Radar reflection logs collected during the three acquisition periods were analysed in the frequency domain to determine if steam had replaced water in the fractures (after normalizing the logs to compensate for differences in antenna performance between logging runs). Analysis of the radar reflection logs from a borehole where the temperature
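The polarity change the modelling predicts can be illustrated with normal-incidence Fresnel reflection coefficients. The relative permittivities below are generic order-of-magnitude assumptions (not the study's calibrated site properties), and a thin-fracture response is more involved than a simple half-space contrast, but the sign flip is already visible at this level:

```python
import math

def fresnel_r(eps_from, eps_to):
    """Normal-incidence EM reflection coefficient between two lossless dielectrics."""
    n1, n2 = math.sqrt(eps_from), math.sqrt(eps_to)
    return (n1 - n2) / (n1 + n2)

# Assumed relative permittivities (typical magnitudes, not site-specific values):
EPS_LIMESTONE, EPS_WATER, EPS_STEAM = 7.0, 81.0, 1.0

r_water = fresnel_r(EPS_LIMESTONE, EPS_WATER)  # rock -> water-filled fracture
r_steam = fresnel_r(EPS_LIMESTONE, EPS_STEAM)  # rock -> steam-filled fracture
print(round(r_water, 3), round(r_steam, 3))    # -0.546 0.451: opposite polarity
```

The reflection coefficient changes sign when steam displaces water, consistent with the reported change in reflected wave polarity.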

1. Flow Matching Results of an MHD Energy Bypass System on a Supersonic Turbojet Engine Using the Numerical Propulsion System Simulation (NPSS) Environment

NASA Technical Reports Server (NTRS)

Benyo, Theresa L.

2011-01-01

Flow matching has been successfully achieved for an MHD energy bypass system on a supersonic turbojet engine. The Numerical Propulsion System Simulation (NPSS) environment helped perform a thermodynamic cycle analysis to properly match the flows from an inlet employing a MHD energy bypass system (consisting of an MHD generator and MHD accelerator) on a supersonic turbojet engine. Interfacing studies between the MHD generator, the turbojet engine, and the MHD accelerator were conducted over various operating conditions, such as the applied magnetic field, MHD generator length, and flow conductivity. This paper briefly describes the NPSS environment used in this analysis. This paper further describes the analysis of a supersonic turbojet engine with an MHD generator/accelerator energy bypass system. Results from this study have shown that using MHD energy bypass in the flow path of a supersonic turbojet engine increases the useful Mach number operating range from 0 to 3.0 Mach (not using MHD) to a range of 0 to 7.0 Mach, with specific net thrust ranging from 740 N-s/kg (at ambient Mach = 3.25) to 70 N-s/kg (at ambient Mach = 7). These results were achieved with an applied magnetic field of 2.5 Tesla and conductivity levels in a range from 2 mhos/m (ambient Mach = 7) to 5.5 mhos/m (ambient Mach = 3.5) for an MHD generator length of 3 m.

2. Numerical methods in control

Mehrmann, Volker; Xu, Hongguo

2000-11-01

We study classical control problems like pole assignment, stabilization, linear quadratic control and H[infinity] control from a numerical analysis point of view. We present several examples that show the difficulties with classical approaches and suggest reformulations of the problems in a more general framework. We also discuss some new algorithmic approaches.

3. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)]

NASA Technical Reports Server (NTRS)

Straeter, T. A.; Markos, A. T.

1975-01-01

A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.
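The coordinate-wise independence that makes gradient-dependent methods amenable to vector streaming can be sketched as follows. The `map` below is serial, but each component evaluation is independent and could be dispatched to a separate lane or processor; the homogeneous test function is illustrative, not the paper's benchmark suite:

```python
def grad_fd(f, x, h=1e-6):
    """Central-difference gradient; each component touches only its own
    perturbed copies of x, so the map is trivially parallelizable."""
    def component(i):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        return (f(xp) - f(xm)) / (2 * h)
    return list(map(component, range(len(x))))

# Homogeneous function of degree 2, the structural assumption behind
# the Jacobson-Oksman derivation:
f = lambda v: sum(vi * vi for vi in v)
g = grad_fd(f, [1.0, -2.0, 3.0])
print([round(gi, 4) for gi in g])  # [2.0, -4.0, 6.0]
```

Central differences are exact for quadratics up to rounding, so the computed gradient matches the analytic gradient 2x to high precision.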

4. In Praise of Numerical Computation

Yap, Chee K.

Theoretical Computer Science has developed an almost exclusively discrete/algebraic persona. We have effectively shut ourselves off from half of the world of computing: a host of problems in Computational Science & Engineering (CS&E) are defined on the continuum, and, for them, the discrete viewpoint is inadequate. The computational techniques in such problems are well-known to numerical analysis and applied mathematics, but are rarely discussed in theoretical algorithms: iteration, subdivision and approximation. By various case studies, I will indicate how our discrete/algebraic view of computing has many shortcomings in CS&E. We want to embrace the continuous/analytic view, but in a new synthesis with the discrete/algebraic view. I will suggest a pathway, by way of an exact numerical model of computation, that allows us to incorporate iteration and approximation into our algorithms’ design. Some recent results give a peek into what this view of algorithmic development might look like, and its distinctive form suggests the name “numerical computational geometry” for such activities.

5. Initial Flow Matching Results of MHD Energy Bypass on a Supersonic Turbojet Engine Using the Numerical Propulsion System Simulation (NPSS) Environment

NASA Technical Reports Server (NTRS)

Benyo, Theresa L.

2010-01-01

Preliminary flow matching has been demonstrated for a MHD energy bypass system on a supersonic turbojet engine. The Numerical Propulsion System Simulation (NPSS) environment was used to perform a thermodynamic cycle analysis to properly match the flows from an inlet to a MHD generator and from the exit of a supersonic turbojet to a MHD accelerator. Interfacing studies between the pre-ionizers, the MHD generator, the turbojet engine, and the MHD accelerator were conducted over various operating conditions, such as the enthalpy extraction ratio and the isentropic efficiencies of the MHD generator and MHD accelerator. This paper briefly describes the NPSS environment used in this analysis and describes the NPSS analysis of a supersonic turbojet engine with a MHD generator/accelerator energy bypass system. Results from this study have shown that using MHD energy bypass in the flow path of a supersonic turbojet engine increases the useful Mach number operating range from 0 to 3.0 Mach (not using MHD) to an explored and desired range of 0 to 7.0 Mach.

6. Micro-scale finite element modeling of ultrasound propagation in aluminum trabecular bone-mimicking phantoms: A comparison between numerical simulation and experimental results.

PubMed

Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S

2016-05-01

The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples as trabecular bone-mimicking phantoms were utilized for ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated to be implemented in the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in time domain to simulate the ultrasonic experiments. The numerical simulations did not include energy dissipative mechanisms of ultrasonic attenuation; however, they expectedly simulated reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and the average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6% respectively. Moreover, the simulations closely predicted the time-of-flight based velocities and the phase velocities of ultrasound with maximum errors of 20 m/s and 11 m/s respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular bone-mimicking structures.
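Attenuation in through-transmission experiments of this kind is commonly estimated from the log spectral ratio of reference and sample signals. A sketch of that standard formula with a synthetic check follows (this is the generic estimator, not necessarily the study's exact processing chain):

```python
import cmath
import math

def dft_mag(x, k):
    """Magnitude of the k-th DFT bin (naive O(N) per bin; fine for a sketch)."""
    n = len(x)
    return abs(sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n)))

def attenuation_db(ref, sample, thickness_cm, k):
    """Through-transmission attenuation at bin k via the log spectral ratio,
    in dB per cm of sample thickness."""
    return (20.0 / thickness_cm) * math.log10(dft_mag(ref, k) / dft_mag(sample, k))

# Synthetic check: the sample signal is the reference halved in amplitude,
# so the attenuation equals 20*log10(2)/d at every frequency bin.
ref = [math.sin(2 * math.pi * 5 * j / 64) for j in range(64)]
sample = [0.5 * r for r in ref]
print(round(attenuation_db(ref, sample, 2.0, 5), 3))  # 3.01 dB/cm
```

In practice the ratio is evaluated across the usable bandwidth (here 0.6-1.4 MHz) and diffraction/transmission-loss corrections are applied before comparison with simulation.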

7. Distribution of Groundwater Ages at Public-Supply Wells: Comparison of Results from Lumped Parameter and Numerical Inverse Models with Multiple Environmental Tracers

Eberts, S.; Bohlke, J. K.

2009-12-01

Estimates of groundwater age distributions at public-supply wells can provide insight into the vulnerability of these wells to contamination. Such estimates can be used to explore past and future water-quality trends and contaminant peak concentrations when combined with information on contaminant input at the water table. Information on groundwater age distributions, however, is not routinely applied to water quality issues at public-supply wells. This may be due, in part, to the difficulty in obtaining such estimates from poorly characterized aquifers with limited environmental tracer data. To this end, we compared distributions of groundwater ages in discharge from public-supply wells estimated from age tracer data (SF6, CFCs, 3H, 3He) using two different inverse modeling approaches: relatively simple lumped parameter models and more complex distributed-parameter numerical flow models with particle tracking. These comparisons were made in four contrasting hydrogeologic settings across the United States: unconsolidated alluvial fan sediments, layered confined unconsolidated sediments, unconsolidated valley-fill sediments, and carbonate rocks. In all instances, multiple age tracer measurements for the public-supply well of interest were available. We compared the following quantities, which were derived from simulated breakthrough curves that were generated using the various estimated age distributions for the selected wells and assuming the same hypothetical contaminant input: time lag to peak concentration, dilution at peak concentration, and contaminant arrival and flush times. Apparent tracer-based ages and mean and median simulated ages also were compared. For each setting, both types of models yielded similar age distributions and concentration trends, when based on similar conceptual models of local hydrogeology and calibrated to the same tracer measurements. Results indicate carefully chosen and calibrated simple lumped parameter age distribution models
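A lumped parameter model of the kind compared here represents the well's age distribution with a simple analytic transit-time distribution and convolves it with the input history at the water table. The sketch below uses an exponential distribution and a hypothetical step input (illustrative values only, not the study's tracers or data):

```python
import math

def exponential_ttd(tau_mean, dt, n):
    """Discretized exponential transit-time distribution g(tau) ~ exp(-tau/tau_mean),
    normalized so the weights sum to 1."""
    g = [math.exp(-(i * dt) / tau_mean) for i in range(n)]
    s = sum(g)
    return [gi / s for gi in g]

def convolve_input(history, g):
    """Concentration at the well = sum over ages tau of input(t - tau) * g(tau)."""
    return sum(history[-(i + 1)] * gi for i, gi in enumerate(g) if i < len(history))

# Hypothetical step input: tracer-free before year 20, constant 1.0 afterward.
history = [0.0] * 20 + [1.0] * 30              # yearly values, oldest first
g = exponential_ttd(tau_mean=10.0, dt=1.0, n=50)
print(round(convolve_input(history, g), 3))    # ≈ 0.957 after 30 years of input
```

The fitted parameter (here the mean age `tau_mean`) is what such models calibrate against tracer measurements; the distributed numerical models in the comparison derive the age distribution from particle tracking instead.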

8. Numerical analysis of incompressible viscous flow around a bubble

Sugano, Minoru; Ishii, Ryuji; Morioka, Shigeki

1992-12-01

A numerical simulation of flows around a deformable gas bubble rising through an incompressible viscous fluid is carried out on a supercomputer Fujitsu VP-2600 at the Data Processing Center of Kyoto University. The solution algorithm is a modified MAC (Marker And Cell) method. For the grid generation, an orthogonal mapping proposed by Ryskin and Leal is applied. The numerical results are compared with Ryskin and Leal's results and previous experiments. It will be shown that a good agreement is obtained between them.

9. Numerical simulation of photoexcited polaron states in water

SciTech Connect

Zemlyanaya, E. V.; Volokhova, A. V.; Amirkhanov, I. V.; Puzynin, I. V.; Puzynina, T. P.; Rikhvitskiy, V. S.; Lakhno, V. D.; Atanasova, P. Kh.

2015-10-28

We consider the dynamic polaron model of the hydrated electron state on the basis of a system of three nonlinear partial differential equations with appropriate initial and boundary conditions. A parallel numerical algorithm for the numerical solution of this system has been developed. Its effectiveness has been tested on a few multi-processor systems. A numerical simulation of the polaron states formation in water under the action of the ultraviolet range laser irradiation has been performed. The numerical results are shown to be in a reasonable agreement with experimental data and theoretical predictions.
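The general method-of-lines pattern used for such systems (three-point finite differences in space, Runge-Kutta in time) can be sketched on a scalar stand-in equation. The cubic-nonlinearity PDE below is illustrative only, not the polaron system itself:

```python
def rhs(u, dx):
    """Semi-discrete right-hand side: three-point Laplacian plus a cubic
    nonlinearity (illustrative u_t = u_xx + u - u^3, periodic in x)."""
    n = len(u)
    return [(u[(i + 1) % n] - 2 * u[i] + u[i - 1]) / dx ** 2 + u[i] - u[i] ** 3
            for i in range(n)]

def rk4_step(u, dt, dx):
    """Classical fourth-order Runge-Kutta step for the resulting ODE system."""
    k1 = rhs(u, dx)
    k2 = rhs([ui + 0.5 * dt * ki for ui, ki in zip(u, k1)], dx)
    k3 = rhs([ui + 0.5 * dt * ki for ui, ki in zip(u, k2)], dx)
    k4 = rhs([ui + dt * ki for ui, ki in zip(u, k3)], dx)
    return [ui + dt / 6 * (a + 2 * b + 2 * c + d)
            for ui, a, b, c, d in zip(u, k1, k2, k3, k4)]

# March a uniform profile toward the stable state u = 1 (t = 2.0).
u = [0.5] * 16
for _ in range(200):
    u = rk4_step(u, dt=0.01, dx=1.0)
print(round(u[0], 3))  # 0.974, matching the exact ODE solution at t = 2
```

In the parallel version described, each process would own a contiguous block of grid points and exchange one-cell halos with its neighbors before each `rhs` evaluation.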

10. Modelling of radiative transfer by the Monte Carlo method and solving the inverse problem based on a genetic algorithm according to experimental results of aerosol sensing on short paths using a femtosecond laser source

SciTech Connect

Matvienko, G G; Oshlakov, V K; Sukhanov, A Ya; Stepanov, A N

2015-02-28

We consider the algorithms that implement a broadband ('multiwave') radiative transfer with allowance for multiple (aerosol) scattering and absorption by main atmospheric gases. In the spectral range of 0.6 – 1 μm, a closed numerical simulation of modifications of the supercontinuum component of a probing femtosecond pulse is performed. In the framework of the algorithms for solving the inverse atmospheric-optics problems with the help of a genetic algorithm, we give an interpretation of the experimental backscattered spectrum of the supercontinuum. An adequate reconstruction of the distribution mode for the particles of artificial aerosol with the narrow-modal distributions in a size range of 0.5 – 2 μm and a step of 0.5 μm is obtained. (light scattering)
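A generic real-coded genetic algorithm of the kind used for such inverse problems can be sketched as follows; the quadratic misfit below is a toy stand-in for the paper's radiative-transfer residual, and all operator choices (tournament-free elitism, blend crossover, Gaussian mutation) are assumptions of the sketch:

```python
import random

def genetic_minimize(cost, bounds, pop=30, gens=60, seed=1):
    """Tiny real-coded GA: keep the best half, fill the rest with blended,
    mutated children of elite parents."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=cost)
        elite = P[: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 + rng.gauss(0, 0.05 * (hi - lo))
                     for x, y, (lo, hi) in zip(a, b, bounds)]
            # Clip mutated genes back into the admissible parameter box.
            child = [min(max(c, lo), hi) for c, (lo, hi) in zip(child, bounds)]
            children.append(child)
        P = elite + children
    return min(P, key=cost)

# Toy inverse problem: recover two distribution parameters from "observed" values.
target = [1.0, 0.5]
cost = lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, target))
best = genetic_minimize(cost, [(0.0, 2.0), (0.0, 2.0)])
print(round(cost(best), 4))
```

In the paper's setting the genes would be aerosol size-distribution parameters and the cost the mismatch between modelled (Monte Carlo) and measured supercontinuum spectra.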

SciTech Connect

Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

1995-09-01

This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

12. Numerical simulation of precipitation formation in the case of an orographically induced convective cloud: Comparison of the results of bin and bulk microphysical schemes

Sarkadi, N.; Geresdi, I.; Thompson, G.

2016-11-01

In this study, results of bulk and bin microphysical schemes are compared in the case of idealized simulations of pre-frontal orographic clouds with enhanced embedded convection. The description of graupel formation by intensive riming of snowflakes was improved compared to prior versions of each scheme. Two methods of graupel melting coincident with collisions with water drops were considered: (1) all simulated melting and collected water drops increase the amount of melted water on the surface of graupel particles, with no shedding permitted; (2) melting itself still causes no shedding, but collisions with water drops can induce shedding from the surface of the graupel particles. The results of the numerical experiments show: (i) The bin schemes generate graupel particles more efficiently by riming than the bulk scheme does; the intense riming of snowflakes was the most dominant process for graupel formation. (ii) The collision-induced shedding significantly affects the evolution of the size distribution of graupel particles and water drops below the melting level. (iii) The three microphysical schemes gave similar values for the domain-integrated surface precipitation, but the patterns reveal meaningful differences. (iv) Sensitivity tests using the bulk scheme show that the depth of the melting layer is sensitive to the description of the terminal velocity of the melting snow. (v) Comparisons against Convair-580 flight measurements suggest that the bin schemes simulate well the evolution of the pristine ice particles and liquid drops, while some inaccuracy can occur in the description of snowflake riming. (vi) The bin scheme with collision-induced shedding reproduced well the quantitative characteristics of the observed bright band.

13. Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics

Brown, David A.

New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied, and a methodology is presented for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated
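The basic idea behind homotopy continuation (deform an easy problem G into the target F and track the root with Newton corrections at each deformation step) can be sketched for a scalar equation. This illustrates the classical predictor-corrector style, not the thesis's monolithic variant:

```python
import math

def newton_solve(F, J, x, tol=1e-12, iters=50):
    """Plain Newton iteration for a scalar equation (keeps the sketch minimal)."""
    for _ in range(iters):
        step = F(x) / J(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def homotopy_continuation(F, J, G, Jg, x0, n_steps=20):
    """Trace H(x, lam) = lam*F(x) + (1-lam)*G(x) from the easy problem G (lam=0)
    to the target F (lam=1), re-converging with Newton after each lam increment."""
    x = x0
    for k in range(1, n_steps + 1):
        lam = k / n_steps
        H = lambda x: lam * F(x) + (1 - lam) * G(x)
        dH = lambda x: lam * J(x) + (1 - lam) * Jg(x)
        x = newton_solve(H, dH, x)  # corrector at this continuation step
    return x

# Target problem: cos(x) = x; easy start problem: x - 1 = 0 (root x = 1).
F, J = lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1.0
G, Jg = lambda x: x - 1.0, lambda x: 1.0
root = homotopy_continuation(F, J, G, Jg, x0=1.0)
print(round(root, 6))  # 0.739085, the root of cos(x) = x
```

The monolithic algorithms described above fold the continuation update and the Newton correction into a single step, avoiding the over-solving visible here in the inner `newton_solve` loop.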

14. Fast Numerical Methods for Stochastic Partial Differential Equations

DTIC Science & Technology

2016-04-15

uncertainty quantification. In the last decade much progress has been made in the construction of numerical algorithms to efficiently solve SPDEs with...applicable SPDEs with efficient numerical methods. This project is intended to address the numerical analysis as well as algorithm aspects of SPDEs. Three...differential equations. Our work contains algorithm constructions, rigorous error analysis, and extensive numerical experiments to demonstrate our algorithm

15. Spherical Harmonic-based Random Fields Based on Real Particle 3D Data: Improved Numerical Algorithm and Quantitative Comparison to Real Particles

SciTech Connect

X. Liu; E. Garboczi; M. Grigoriu; Y. Lu; S. Erdogan

2011-12-31

Many parameters affect the cyclone efficiency, and these parameters can have different effects in different flow regimes. Therefore the maximum-efficiency cyclone length is a function of the specific geometry and operating conditions in use. In this study, we obtained a relationship describing the minimum particle diameter or maximum cyclone efficiency by using a theoretical approach based on cyclone geometry and fluid properties. We have compared the empirical predictions with corresponding literature data and observed good agreement. The results address the importance of fluid properties. Inlet and vortex finder cross-sections, cone-apex diameter, inlet Reynolds number, and surface roughness are found to be the other important parameters affecting cyclone height. The surface friction coefficient, on the other hand, is difficult to employ in the calculations. We developed a theoretical approach to find the maximum-efficiency heights for cyclones with tangential inlets and suggested a relation for this height as a function of cyclone geometry and operating parameters. In order to generalize use of the relation, we defined two dimensionless parameters, one for geometric and one for operational variables, and presented the results in graphical form so that one can calculate these dimensionless parameters and find the maximum-efficiency height of a specific cyclone.

16. Dynamics of plume-triple junction interaction: Results from a series of three-dimensional numerical models and implications for the formation of oceanic plateaus

2016-03-01

Mantle plumes rising in the vicinity of mid-ocean ridges often generate anomalies in melt production and seafloor depth. This study investigates the dynamical interactions between a mantle plume and a ridge-ridge-ridge triple junction, using a parameter space approach and a suite of steady state, three-dimensional finite element numerical models. The top domain boundary is composed of three diverging plates, each assigned a half-spreading rate with respect to a fixed triple junction point. The bottom boundary is kept at a constant temperature of 1350°C except where a two-dimensional, Gaussian-shaped thermal anomaly simulating a plume is imposed. Models vary plume diameter, plume location, the viscosity contrast between plume and ambient mantle material, and the use of dehydration rheology in calculating viscosity. Importantly, the model results quantify how plume-related anomalies in mantle temperature patterns, seafloor depth, and crustal thickness depend on the specific set of parameters. As an example, one way of assessing the effect of conduit position is to calculate a normalized area, defined as the spatial dispersion of a given plume at a specific depth (here 50 km) divided by the area occupied by the same plume when it is located under the triple junction. For one particular case modeled, where the plume is centered in an intraplate position 100 km from the triple junction, the normalized area is just 55%. Overall, these models provide a framework for better understanding plateau formation at triple junctions in the natural setting and a tool for constraining subsurface geodynamical processes and plume properties.

17. Fast prediction of pulsed nonlinear acoustic fields from clinically relevant sources using time-averaged wave envelope approach: comparison of numerical simulations and experimental results.

PubMed

Wójcik, J; Kujawska, T; Nowicki, A; Lewin, P A

2008-12-01

The primary goal of this work was to verify experimentally the applicability of the recently introduced time-averaged wave envelope (TAWE) method [J. Wójcik, A. Nowicki, P.A. Lewin, P.E. Bloomfield, T. Kujawska, L. Filipczyński, Wave envelopes method for description of nonlinear acoustic wave propagation, Ultrasonics 44 (2006) 310-329.] as a tool for fast prediction of four-dimensional (4D) pulsed nonlinear pressure fields from arbitrarily shaped acoustic sources in attenuating media. The experiments were performed in water at the fundamental frequency of 2.8 MHz for spherically focused (focal length F = 80 mm) square (20 × 20 mm) and rectangular (10 × 25 mm) sources similar to those used in the design of 1D linear arrays operating with ultrasonic imaging systems. The experimental results obtained with 10-cycle tone bursts at three different excitation levels corresponding to linear, moderately nonlinear and highly nonlinear propagation conditions (0.045, 0.225 and 0.45 MPa on-source pressure amplitude, respectively) were compared with those obtained using the TAWE approach. The comparison of the experimental results and numerical simulations has shown that the TAWE approach is well suited to predict (to within ±1 dB) both the spatial-temporal and spatial-spectral pressure variations in pulsed nonlinear acoustic beams. The obtained results indicated that implementation of the TAWE approach shortened the computation time in comparison with the time needed for prediction of the full 4D pulsed nonlinear acoustic fields using a conventional (Fourier-series) approach [P.T. Christopher, K.J. Parker, New approaches to nonlinear diffractive field propagation, J. Acoust. Soc. Am. 90 (1) (1991) 488-499.]. The reduction in computation time depends on several parameters

18. Numerical integration using Wang-Landau sampling

Li, Y. W.; Wüst, T.; Landau, D. P.; Lin, H. Q.

2007-09-01

We report a new application of Wang-Landau sampling to numerical integration that is straightforward to implement. It is applicable to a wide variety of integrals without restrictions and is readily generalized to higher-dimensional problems. The feasibility of the method results from a reinterpretation of the density of states in statistical physics to an appropriate measure for numerical integration. The properties of this algorithm as a new kind of Monte Carlo integration scheme are investigated with some simple integrals, and a potential application of the method is illustrated by the evaluation of integrals arising in perturbation theory of quantum many-body systems.
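
As a concrete illustration of the reinterpretation described above, here is a minimal 1-D sketch (our own, not the authors' code) that uses Wang-Landau flat-histogram sampling to estimate ∫₀¹ x² dx = 1/3: the integrand value plays the role of the "energy", and the converged density of states estimates the measure of each integrand-value bin. All parameter values (bin count, flatness criterion, final modification factor) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)

def wang_landau_integral(f, n_bins=20, y_range=(0.0, 1.0), flat=0.8,
                         ln_f_final=1e-3, check_every=1000, max_steps=1_000_000):
    """1-D sketch of Wang-Landau integration on [0, 1]: the integrand value is
    the 'energy', and the converged density of states g estimates the measure
    of {x : f(x) in bin i}, so I ~= sum_i g_i * y_i."""
    y_lo, y_hi = y_range
    width = (y_hi - y_lo) / n_bins
    to_bin = lambda y: min(int((y - y_lo) / width), n_bins - 1)
    ln_g = np.zeros(n_bins)      # log density of states, updated on the fly
    hist = np.zeros(n_bins)      # visit histogram for the flatness test
    ln_f = 1.0                   # modification factor, halved at each stage
    x = rng.random()
    b = to_bin(f(x))
    steps = 0
    while ln_f > ln_f_final and steps < max_steps:
        x_new = rng.random()                     # uniform proposal on [0, 1]
        b_new = to_bin(f(x_new))
        if rng.random() < np.exp(ln_g[b] - ln_g[b_new]):
            x, b = x_new, b_new
        ln_g[b] += ln_f
        hist[b] += 1
        steps += 1
        if steps % check_every == 0 and hist.min() > flat * hist.mean():
            ln_f /= 2.0                          # histogram flat: refine
            hist[:] = 0
    mu = np.exp(ln_g - ln_g.max())
    mu /= mu.sum()                               # normalized bin measures
    centers = y_lo + width * (np.arange(n_bins) + 0.5)
    return float(mu @ centers)

result = wang_landau_integral(lambda x: x * x)
print(result)  # close to 1/3
```

The same walk, run once, serves every observable of the form ∫ w(f(x)) dx, which is the advantage over plain Monte Carlo noted in the abstract.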

19. Highly uniform parallel microfabrication using a large numerical aperture system

Zhang, Zi-Yu; Zhang, Chen-Chu; Hu, Yan-Lei; Wang, Chao-Wei; Li, Jia-Wen; Su, Ya-Hui; Chu, Jia-Ru; Wu, Dong

2016-07-01

In this letter, we report an improved algorithm to produce accurate phase patterns for generating highly uniform diffraction-limited multifocal arrays in a large numerical aperture objective system. It is shown that, based on the original diffraction integral, the uniformity of the diffraction-limited focal arrays can be improved from ~75% to >97%, owing to careful treatment of the aperture function and the apodization effect associated with a large numerical aperture objective. The experimental results, e.g., 3 × 3 arrays of squares and triangles and seven-microlens arrays with high uniformity, further verify the advantage of the improved algorithm. This algorithm enables laser parallel processing technology to produce uniform microstructures and functional devices in microfabrication systems with a large numerical aperture objective.

20. QPSO-based adaptive DNA computing algorithm.

PubMed

Karakose, Mehmet; Cigdem, Ugur

2013-01-01

DNA (deoxyribonucleic acid) computing, a computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This approach aims to run the DNA computing algorithm with parameters adapted toward the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are tuned simultaneously for the adaptive process; (2) the adaptive algorithm is driven by the QPSO algorithm for goal-directed progress, faster operation, and flexibility with respect to the data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate its ability to provide effective optimization, considerable convergence speed, and high accuracy compared to the standard DNA computing algorithm.
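
For readers unfamiliar with QPSO itself, the following is a minimal sketch of the quantum-behaved update rule on a toy objective. It is a generic QPSO, not the paper's DNA-computing integration; the contraction-expansion schedule, population settings, and the sphere objective are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def qpso(f, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0):
    """Quantum-behaved PSO: particles are resampled around per-dimension
    attractors with a delta-potential-well rule instead of velocity updates."""
    X = rng.uniform(lo, hi, (n_particles, dim))
    pbest = X.copy()
    pfit = np.apply_along_axis(f, 1, X)
    g = pbest[pfit.argmin()].copy()
    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters            # contraction-expansion: 1.0 -> 0.5
        mbest = pbest.mean(axis=0)              # mean of all personal bests
        phi = rng.random((n_particles, dim))
        p = phi * pbest + (1 - phi) * g         # per-dimension local attractor
        u = rng.random((n_particles, dim))
        sign = rng.choice([-1.0, 1.0], (n_particles, dim))
        X = np.clip(p + sign * beta * np.abs(mbest - X) * np.log(1.0 / u), lo, hi)
        fit = np.apply_along_axis(f, 1, X)
        improved = fit < pfit
        pbest[improved], pfit[improved] = X[improved], fit[improved]
        g = pbest[pfit.argmin()].copy()
    return g, f(g)

sphere = lambda x: float(np.sum(x ** 2))
best, val = qpso(sphere, dim=3)
print(val)  # best objective value found, near the optimum at 0
```

In the paper's setting, the position vector would encode the DNA-computing parameters (population size, crossover rate, mutation rates, ...) and f would be the identification error.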

1. Inviscid flux-splitting algorithms for real gases with non-equilibrium chemistry

NASA Technical Reports Server (NTRS)

Shuen, Jian-Shun; Liou, Meng-Sing; Van Leer, Bram

1990-01-01

Formulations of inviscid flux splitting algorithms for chemical nonequilibrium gases are presented. A chemical system for air dissociation and recombination is described. Numerical results for one-dimensional shock tube and nozzle flows of air in chemical nonequilibrium are examined.

2. WE-AB-BRA-08: Results of a Multi-Institutional Study for the Evaluation of Deformable Image Registration Algorithms for Structure Delineation Via Computational Phantoms

SciTech Connect

Loi, G; Fusella, M; Fiandra, C; Lanzi, E; Rosica, A; Strigari, L; Orlandini, L; Gino, E; Roggio, A; Marcocci, F; Iacovello, G; Miceli, R

2015-06-15

Purpose: To investigate the accuracy of various algorithms for deformable image registration (DIR) in propagating regions of interest (ROIs) in computational phantoms based on patient images, using different commercial systems. This work is part of an Italian multi-institutional study to test, on common datasets, the accuracy, reproducibility and safety of DIR applications in Adaptive Radiotherapy. Methods: Eleven institutions with three available commercial solutions provided data to assess the agreement of DIR-propagated ROIs with automatically drawn ROIs considered as ground truth for the comparison. The DIR algorithms were tested on real patient data from three different anatomical districts: head and neck, thorax and pelvis. For every dataset, two specific Deformation Vector Fields (DVFs) provided by the ImSimQA software were applied to the reference data set. Three different commercial software packages were used in this study: RayStation, Velocity and Mirada. The DIR-mapped ROIs were then compared with the reference ROIs using the Jaccard Conformity Index (JCI). Results: More than 600 DIR-mapped ROIs were analyzed. Pooling the JCI data of all institutions, the mean JCI was 0.87 ± 0.7 (1 SD) for the first DVF and 0.80 ± 0.13 (1 SD) for the second DVF. Several observations on individual structures emerge from the collected data: the standard deviation among institutions for a given structure rises as the applied DVF grows larger, with the highest value, 10%, observed for the bladder. Conclusion: Although the deformation of the human body is very complex and difficult to model, this work illustrates some clinical scenarios with well-known DVFs provided by specific software. The JCI parameter quantifies inter-user variability and may reveal the need to improve the working protocol in order to reduce the inter-institution JCI variability.
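
The Jaccard Conformity Index used for the comparison is straightforward to compute on voxel masks; a minimal sketch with illustrative toy ROIs (not the study's data):

```python
import numpy as np

def jaccard_conformity_index(roi_a: np.ndarray, roi_b: np.ndarray) -> float:
    """JCI = |A intersect B| / |A union B| for two boolean voxel masks."""
    a = roi_a.astype(bool)
    b = roi_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # two empty ROIs agree perfectly by convention
    return np.logical_and(a, b).sum() / union

# Toy example: two overlapping square "ROIs" on a 10x10 grid.
a = np.zeros((10, 10)); a[2:7, 2:7] = 1   # 25 voxels
b = np.zeros((10, 10)); b[4:9, 4:9] = 1   # 25 voxels, 9 of them shared
print(jaccard_conformity_index(a, b))      # 9 / 41, about 0.2195
```

A JCI of 1 means perfect overlap between the DIR-mapped ROI and the ground-truth ROI; values near 0 indicate almost disjoint contours.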

3. Numerical Solution of a Nonlinear Integro-Differential Equation

Buša, Ján; Hnatič, Michal; Honkonen, Juha; Lučivjanský, Tomáš

2016-02-01

A discretization algorithm for the numerical solution of a nonlinear integrodifferential equation modeling the temporal variation of the mean number density a(t) in the single-species annihilation reaction A + A → 0 is discussed. The proposed solution for the two-dimensional case (where the integral entering the equation is divergent) uses regularization and then finite differences for the approximation of the differential operator together with a piecewise linear approximation of a(t) under the integral. The presented numerical results point to basic features of the behavior of the number density function a(t) and suggest further improvement of the proposed algorithm.

4. A multi-level solution algorithm for steady-state Markov chains

NASA Technical Reports Server (NTRS)

Horton, Graham; Leutenegger, Scott T.

1993-01-01

A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.
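
The Gauss-Seidel baseline mentioned above can be sketched for a small chain. This is a generic Gauss-Seidel sweep on the stationarity equation π = πP (not the paper's multi-level code), assuming P is irreducible and P_jj < 1 for every state:

```python
import numpy as np

def steady_state_gauss_seidel(P, tol=1e-12, max_iter=10_000):
    """Stationary distribution of a row-stochastic matrix P via Gauss-Seidel
    sweeps on pi_j = sum_{i != j} pi_i P_ij / (1 - P_jj), renormalizing after
    each sweep. Assumes an irreducible chain with P_jj < 1 for all j."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        old = pi.copy()
        for j in range(n):
            s = pi @ P[:, j] - pi[j] * P[j, j]  # uses the freshest values
            pi[j] = s / (1.0 - P[j, j])
        pi /= pi.sum()
        if np.abs(pi - old).max() < tol:
            break
    return pi

# A small birth-death chain with known stationary distribution.
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
print(steady_state_gauss_seidel(P))  # approximately [0.25, 0.5, 0.25]
```

The multi-level method of the paper accelerates exactly this kind of sweep by coarsening the state space recursively, in analogy with multigrid smoothing.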

5. "Recognizing Numerical Constants"

NASA Technical Reports Server (NTRS)

Bailey, David H.; Craw, James M. (Technical Monitor)

1995-01-01

The advent of inexpensive, high-performance computers and new efficient algorithms has made possible the automatic recognition of numerically computed constants. In other words, techniques now exist for determining, within certain limits, whether a computed real or complex number can be written as a simple expression involving the classical constants of mathematics. In this presentation, some of the recently discovered techniques for constant recognition, notably integer relation detection algorithms, will be presented. As an application of these methods, the author's recent work in recognizing "Euler sums" will be described in some detail.
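
Integer relation detection asks for integers a_1, ..., a_n, not all zero, with a_1 x_1 + ... + a_n x_n ≈ 0. Practical algorithms such as PSLQ are far more efficient, but a brute-force search conveys the idea; this sketch is illustrative only, and the coefficient bound and tolerance are arbitrary choices.

```python
import math
from itertools import product

def find_integer_relation(xs, max_coeff=5, eps=1e-9):
    """Exhaustive search for a small integer relation a . x ~= 0.
    A toy stand-in for PSLQ/LLL, feasible only for tiny problems; returns the
    relation of smallest squared norm found, or None."""
    rng = range(-max_coeff, max_coeff + 1)
    best = None
    for a in product(rng, repeat=len(xs)):
        if all(c == 0 for c in a):
            continue
        if abs(sum(c * x for c, x in zip(a, xs))) < eps:
            size = sum(c * c for c in a)
            if best is None or size < sum(b * b for b in best):
                best = a
    return best

# The golden ratio satisfies phi^2 = phi + 1, i.e. 1 + phi - phi^2 = 0.
phi = (1 + math.sqrt(5)) / 2
relation = find_integer_relation([1.0, phi, phi * phi])
print(relation)  # a relation equivalent to 1 + phi - phi^2 = 0
```

Recognizing a constant then amounts to searching for a relation between the computed number and a basis of candidate constants (powers of pi, zeta values, logarithms, and so on).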

6. Numerical approach of the quantum circuit theory

Silva, J. J. B.; Duarte-Filho, G. C.; Almeida, F. A. G.

2017-03-01

In this paper we develop a numerical method based on quantum circuit theory to treat coherent electronic transport in a network of quantum dots connected with arbitrary topology. The algorithm was employed in a circuit formed by quantum dots connected to each other in the shape of a linear chain (associations in series) and of a ring (associations in series and in parallel). For both systems we compute two current observables: the conductance and the shot noise power. We find excellent agreement between our numerical results and those found in the literature. Moreover, we analyze the algorithm's efficiency for a chain of quantum dots, where the mean processing time exhibits a linear dependence on the number of quantum dots in the array.

7. Numerical Simulation of a Convective Turbulence Encounter

NASA Technical Reports Server (NTRS)

Proctor, Fred H.; Hamilton, David W.; Bowles, Roland L.

2002-01-01

A numerical simulation of a convective turbulence event is investigated and compared with observational data. The numerical results show severe turbulence of similar scale and intensity to that encountered during the test flight. This turbulence is associated with buoyant plumes that penetrate the upper-level thunderstorm outflow. The simulated radar reflectivity compares well with that obtained from the aircraft's onboard radar. Resolved scales of motion as small as 50 m are needed in order to accurately diagnose aircraft normal load accelerations. Given this requirement, realistic turbulence fields may be created by merging subgrid-scales of turbulence to a convective-cloud simulation. A hazard algorithm for use with model data sets is demonstrated. The algorithm diagnoses the RMS normal loads from second moments of the vertical velocity field and is independent of aircraft motion.

8. A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models

Li, Qia; Micchelli, Charles A.; Shen, Lixin; Xu, Yuesheng

2012-09-01

Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss-Seidel iteration, so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm achieves the same high quality of restored images as the other three algorithms, it performs significantly better in terms of computational efficiency measured by the CPU time consumed.
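
The proximity operator at the heart of such algorithms has a closed form for the L1 term: componentwise soft-thresholding. A minimal sketch of this generic building block (not the paper's accelerated scheme):

```python
import numpy as np

def prox_l1(v, lam):
    """Proximity operator of lam * ||.||_1, i.e.
    prox(v) = argmin_x 0.5 * ||x - v||^2 + lam * ||x||_1,
    which is componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

v = np.array([3.0, -0.5, 1.2, -2.0])
print(prox_l1(v, 1.0))  # components shrunk toward 0 by 1, small ones zeroed
```

Fixed-point characterizations like the one in the abstract express the denoised image as a fixed point of compositions of such proximity operators, which is what makes componentwise (Gauss-Seidel style) updates natural.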

9. Comparison of swarm intelligence algorithms in atmospheric compensation for free space optical communication

Li, Zhaokun; Cao, Jingtai; Liu, Wei; Feng, Jianfeng; Zhao, Xiaohui

2015-03-01

Conventional adaptive optics systems used to compensate atmospheric turbulence in free space optical (FSO) communication perform poorly under strong scintillation, where wavefront measurements based on the Shack-Hartmann sensor (SH) become unreliable. Since wavefront sensor-less adaptive optics is a feasible option, we propose several swarm intelligence algorithms to compensate the wavefront aberration caused by atmospheric interference in FSO, and discuss the algorithm principles, basic flows, and simulation results. The numerical simulation experiments and result analysis show that, compared with the SPGD algorithm, the proposed algorithms can effectively suppress wavefront aberration and substantially improve both the convergence rate of the algorithms and the coupling efficiency of the receiver.
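
The SPGD baseline referenced above can be sketched in a few lines: perturb all control channels simultaneously, measure the change in a scalar quality metric, and step along the perturbation. The toy "coupling efficiency" metric, gain, and perturbation amplitude here are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def spgd(metric, u, gain=1.0, sigma=0.1, iters=2000):
    """Stochastic parallel gradient descent: wavefront-sensorless maximization
    of a scalar metric J(u) via random parallel +/- perturbations."""
    for _ in range(iters):
        delta = sigma * rng.choice([-1.0, 1.0], size=u.shape)
        dJ = metric(u + delta) - metric(u - delta)  # two-sided metric probe
        u = u + gain * dJ * delta                   # ascend the metric
    return u

# Toy coupling-efficiency metric peaked at the optimal control vector u_star.
u_star = np.array([0.3, -0.7, 0.5])
metric = lambda u: np.exp(-np.sum((u - u_star) ** 2))
u_final = spgd(metric, np.zeros(3))
print(u_final)  # close to [0.3, -0.7, 0.5]
```

Swarm-intelligence alternatives replace this single stochastic gradient estimate with a population of candidate control vectors, which is what the abstract credits for the faster convergence.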

10. A novel algorithm for Bluetooth ECG.

PubMed

Pandya, Utpal T; Desai, Uday B

2012-11-01

In wireless transmission of ECG, data latency becomes significant when the battery power level and data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless transmission issues and other ECG measurement noises. Here, a novel algorithm, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes errors in the bit pattern of received data, if they occurred in wireless transmission, and then removes baseline drift. Afterward, a modified moving average is applied everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for signal acquisition. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and in different patient positions. This module transmits ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and Savitzky-Golay algorithms both visually and numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing noise, and its use can be extended to any signal where peaks are important for diagnostic purposes.
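
The core idea of smoothing everywhere except near diagnostic peaks can be sketched generically. This is not the PRASMMA algorithm itself (its bit-error correction, baseline-drift removal, and adaptive sampling are not reproduced), and the simple amplitude threshold is an assumption standing in for QRS detection.

```python
import numpy as np

def peak_rejection_moving_average(signal, window=5, peak_thresh=None):
    """Moving-average smoothing that leaves samples at large peaks (e.g. QRS
    complexes) untouched, so smoothing does not flatten diagnostic peaks."""
    x = np.asarray(signal, dtype=float)
    if peak_thresh is None:
        peak_thresh = x.mean() + 2.0 * x.std()   # crude automatic threshold
    kernel = np.ones(window) / window
    smoothed = np.convolve(x, kernel, mode="same")
    return np.where(np.abs(x) >= peak_thresh, x, smoothed)  # keep peaks verbatim

# Low-amplitude oscillation with one sharp "QRS-like" spike at sample 50.
x = 0.1 * np.sin(np.arange(100))
x[50] = 10.0
out = peak_rejection_moving_average(x)
print(out[50])  # 10.0: the peak survives smoothing unchanged
```

A plain moving average would spread the spike over the whole window; excluding the peak region preserves the R-wave amplitude that diagnosis depends on.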

11. A Combined Reconstruction Algorithm for Limited-View Multi-Element Photoacoustic Imaging

Yang, Di-Wu; Xing, Da; Zhao, Xue-Hui; Pan, Chang-Ning; Fang, Jian-Shu

2010-05-01

We present a photoacoustic imaging system with a linear transducer array scanning in limited-view fields and develop a combined reconstruction algorithm, a combination of the limited-field filtered back projection (LFBP) algorithm and the simultaneous iterative reconstruction technique (SIRT), to reconstruct the optical absorption distribution. In this algorithm, the LFBP algorithm is exploited to reconstruct the original photoacoustic image, and the SIRT algorithm is then used to improve the quality of the final reconstructed photoacoustic image. Numerical simulations with calculated incomplete data validate the reliability of this algorithm, and the reconstructed experimental results further demonstrate that the combined reconstruction algorithm effectively reduces artifacts and blurring and yields better reconstructed image quality than the LFBP algorithm alone.
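
The SIRT half of such a combination updates all unknowns simultaneously from all projections in each iteration. A minimal sketch on a toy linear system standing in for the imaging model (illustrative only; the matrix and iteration count are arbitrary):

```python
import numpy as np

def sirt(A, b, n_iter=200, x0=None):
    """Simultaneous iterative reconstruction technique:
    x <- x + C A^T R (b - A x), with R and C the inverse row/column sums
    of |A|. Converges to a solution for consistent systems."""
    A = np.asarray(A, float)
    R = 1.0 / np.abs(A).sum(axis=1)          # inverse row sums
    C = 1.0 / np.abs(A).sum(axis=0)          # inverse column sums
    x = np.zeros(A.shape[1]) if x0 is None else x0.astype(float)
    for _ in range(n_iter):
        x = x + C * (A.T @ (R * (b - A @ x)))
    return x

# Toy "projection" matrix and consistent data for ground truth (0.5, 1.5).
A = np.array([[1.0, 1.0], [1.0, -1.0], [2.0, 1.0]])
b = A @ np.array([0.5, 1.5])
print(sirt(A, b))  # approximately [0.5, 1.5]
```

In the combined scheme, `x0` would be the LFBP image, so SIRT only has to correct the limited-view artifacts rather than reconstruct from scratch.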

12. Method for numerical simulation of two-term exponentially correlated colored noise

SciTech Connect

Yilmaz, B.; Ayik, S.; Abe, Y.; Gokalp, A.; Yilmaz, O.

2006-04-15

A method for numerical simulation of two-term exponentially correlated colored noise is proposed. The method is an extension of the traditional method for one-term exponentially correlated colored noise. The validity of the algorithm is tested by comparing numerical simulations with analytical results in two physical applications.
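
One common way to generate exponentially correlated colored noise is an Ornstein-Uhlenbeck recursion; a two-term noise is then the sum of two independent OU terms with different correlation times. The sketch below shows this generic construction (not necessarily the authors' method; all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def ou_step(x, tau, D, dt):
    """Exact-in-distribution update for an Ornstein-Uhlenbeck process with
    correlation <x(t) x(s)> = (D / tau) * exp(-|t - s| / tau)."""
    rho = np.exp(-dt / tau)
    return x * rho + np.sqrt(D / tau * (1.0 - rho * rho)) * rng.standard_normal()

# Two-term colored noise: the sum of two independent OU terms.
tau1, D1, tau2, D2, dt, n = 1.0, 1.0, 5.0, 2.0, 0.01, 200_000
x1 = np.sqrt(D1 / tau1) * rng.standard_normal()  # start in stationarity
x2 = np.sqrt(D2 / tau2) * rng.standard_normal()
samples = np.empty(n)
for i in range(n):
    x1 = ou_step(x1, tau1, D1, dt)
    x2 = ou_step(x2, tau2, D2, dt)
    samples[i] = x1 + x2
print(samples.var())  # near D1/tau1 + D2/tau2 = 1.4
```

Because the exponential update is exact in distribution, the correlation function of each term is reproduced for any time step, not only in the small-dt limit.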

13. First principles numerical model of avalanche-induced arc discharges in electron-irradiated dielectrics

NASA Technical Reports Server (NTRS)

Beers, B. L.; Pine, V. W.; Hwang, H. C.; Bloomberg, H. W.; Lin, D. L.; Schmidt, M. J.; Strickland, D. J.

1979-01-01

The model consists of four phases: single electron dynamics, single electron avalanche, negative streamer development, and tree formation. Numerical algorithms and computer code implementations are presented for the first three phases. An approach to developing a code description of the fourth phase is discussed. Numerical results are presented for a crude material model of Teflon.

14. Numerical solution of 2D-vector tomography problem using the method of approximate inverse

Svetov, Ivan; Maltseva, Svetlana; Polyakova, Anna

2016-08-01

We propose a numerical solution of the reconstruction problem for a two-dimensional vector field in a unit disk from the known values of the longitudinal and transverse ray transforms. The algorithm is based on the method of approximate inverse. Numerical simulations confirm that the proposed method yields good reconstructions of vector fields.

15. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm.

PubMed

Sheng, Zheng; Wang, Jun; Zhou, Shudao; Zhou, Bihua

2014-03-01

This paper introduces a novel hybrid optimization algorithm to estimate the parameters of chaotic systems. To address the weaknesses of the traditional cuckoo search algorithm, an adaptive cuckoo search with simulated annealing algorithm is proposed, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is introduced to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may degrade the quality of the optimization, so the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated on the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, the genetic algorithm, and the particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
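
A minimal sketch of the hybrid idea, combining Lévy-flight cuckoo moves, a simulated-annealing acceptance rule, and nest abandonment, applied here to a toy sphere objective rather than chaotic-system parameter estimation; all settings (step scale, cooling schedule, abandonment fraction) are illustrative assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def levy(size, beta=1.5):
    """Mantegna's algorithm for Levy-stable step lengths."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search_sa(f, lo, hi, n_nests=15, iters=300, pa=0.25, T0=1.0):
    """Cuckoo search with an SA-style acceptance rule on each new egg."""
    dim = len(lo)
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fit = np.apply_along_axis(f, 1, nests)
    best = nests[fit.argmin()].copy()
    for t in range(iters):
        T = T0 * 0.95 ** t                       # annealing temperature
        for i in range(n_nests):
            new = np.clip(nests[i] + 0.01 * levy(dim) * (nests[i] - best), lo, hi)
            fn = f(new)
            # SA acceptance: always downhill, occasionally uphill while hot.
            if fn < fit[i] or rng.random() < np.exp(-(fn - fit[i]) / max(T, 1e-12)):
                nests[i], fit[i] = new, fn
        # Abandon a fraction pa of the worst nests (fresh random eggs).
        worst = fit.argsort()[-int(pa * n_nests):]
        nests[worst] = rng.uniform(lo, hi, (len(worst), dim))
        fit[worst] = np.apply_along_axis(f, 1, nests[worst])
        if fit.min() < f(best):
            best = nests[fit.argmin()].copy()
    return best, f(best)

sphere = lambda x: float(np.sum(x ** 2))
best, val = cuckoo_search_sa(sphere, np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
print(val)  # best objective value found, near the optimum at 0
```

In the parameter-estimation setting, the decision vector would hold the unknown system parameters and f would measure the mismatch between the simulated and observed chaotic trajectories.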

16. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

Sheng, Zheng; Wang, Jun; Zhou, Shudao; Zhou, Bihua

2014-03-01

This paper introduces a novel hybrid optimization algorithm to estimate the parameters of chaotic systems. To address the weaknesses of the traditional cuckoo search algorithm, an adaptive cuckoo search with simulated annealing algorithm is proposed, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is introduced to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may degrade the quality of the optimization, so the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated on the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, the genetic algorithm, and the particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.

17. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

SciTech Connect

Sheng, Zheng; Wang, Jun; Zhou, Bihua; Zhou, Shudao

2014-03-15

This paper introduces a novel hybrid optimization algorithm to estimate the parameters of chaotic systems. To address the weaknesses of the traditional cuckoo search algorithm, an adaptive cuckoo search with simulated annealing algorithm is proposed, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is introduced to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may degrade the quality of the optimization, so the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated on the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, the genetic algorithm, and the particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.

18. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation

PubMed Central

Wang, Jun; Zhou, Bihua; Zhou, Shudao

2016-01-01

This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. To improve the optimization capability of the basic cuckoo search (CS) algorithm, orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation ability. The proposed algorithm is then used to estimate the parameters of the Lorenz and Chen chaotic systems under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, the genetic algorithm, and the particle swarm optimization algorithm, and the comparison demonstrates that the method is efficient and superior. PMID:26880874

19. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation.

PubMed

Wang, Jun; Zhou, Bihua; Zhou, Shudao

2016-01-01

This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. To improve the optimization capability of the basic cuckoo search (CS) algorithm, orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation ability. The proposed algorithm is then used to estimate the parameters of the Lorenz and Chen chaotic systems under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, the genetic algorithm, and the particle swarm optimization algorithm, and the comparison demonstrates that the method is efficient and superior.

20. Evaluating Applicability of Four Recursive Algorithms for Computation of the Fully Normalized Associated Legendre Functions